HBASE-27233: Read blocks into off-heap if caching is disabled for read


Details

      Release Note: Using Scan.setCacheBlocks(false) with the on-heap LRUBlockCache now results in significantly fewer heap allocations for those scans when hbase.server.allocator.pool.enabled is enabled. Previously, all allocations were made on-heap whenever LRUBlockCache was used; now they go to the off-heap pool when block caching is disabled for the read.
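      The note refers to the client-side Scan setting; hbase.server.allocator.pool.enabled is a RegionServer-side configuration. A minimal sketch of a scan that opts out of block caching (the table name and surrounding setup are illustrative only):

      {code:java}
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.client.Connection;
      import org.apache.hadoop.hbase.client.ConnectionFactory;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.ResultScanner;
      import org.apache.hadoop.hbase.client.Scan;
      import org.apache.hadoop.hbase.client.Table;

      public class NonCachingScanExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();
          try (Connection connection = ConnectionFactory.createConnection(conf);
              Table table = connection.getTable(TableName.valueOf("my_table"))) {
            // Ask the RegionServer not to populate the block cache for this scan. With this
            // change, blocks read for such a scan can be allocated from the off-heap pool
            // (when hbase.server.allocator.pool.enabled is true on the server) instead of
            // the heap, even when the on-heap LRUBlockCache is in use.
            Scan scan = new Scan().setCacheBlocks(false);
            try (ResultScanner scanner = table.getScanner(scan)) {
              for (Result result : scanner) {
                // process result
              }
            }
          }
        }
      }
      {code}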

    Description

      Currently we decide whether a disk read should use the heap (shouldUseHeap) based on three criteria:

      1. If block cache is disabled, return false
      2. If block cache is anything other than CombinedBlockCache, return true
      3. Otherwise return false for DATA blocks and true for other blocks

      The assumption here is that we're making the decision based on which cache the block is likely to end up in. But if the read has caching disabled (i.e. setCacheBlocks(false)), the block won't end up in any cache, so we should return false in that case too.
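      For reference, the current check amounts to something like the following (a paraphrase of the logic in HFileReaderImpl, not the exact source):

      {code:java}
      // Sketch of the current decision in HFileReaderImpl; cacheConf is the reader's CacheConfig.
      private boolean shouldUseHeap(BlockType expectedBlockType) {
        if (!cacheConf.getBlockCache().isPresent()) {
          // 1. No block cache configured: read into the off-heap pool.
          return false;
        } else if (!cacheConf.isCombinedBlockCache()) {
          // 2. LRUBlockCache (or any non-combined cache) only holds on-heap blocks, so read
          //    on-heap to avoid an extra off-heap to heap copy when the block is cached.
          return true;
        }
        // 3. CombinedBlockCache: DATA blocks are cached off-heap in the bucket cache (read
        //    off-heap); index/bloom blocks are cached on-heap in the LRU (read on-heap).
        return expectedBlockType != null && !expectedBlockType.isData();
      }
      {code}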

      The only caller of shouldUseHeap in HFileReaderImpl has a boolean cacheBlock available, which determines whether the block being read should be cached. We can pass that boolean into the function. We should probably also account for cacheConf.shouldCacheBlockOnRead for the same reason, as sketched below.
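      One possible shape for the change, reusing the existing CacheConfig helpers (a sketch only; the exact form is up to the patch):

      {code:java}
      // Sketch: thread the caller's cacheBlock flag through and fall back to off-heap
      // whenever the block is not going to be cached anyway.
      private boolean shouldUseHeap(BlockType expectedBlockType, boolean cacheBlock) {
        if (!cacheConf.getBlockCache().isPresent()) {
          return false;
        }
        // New: the caller asked not to cache this block (e.g. Scan.setCacheBlocks(false)),
        // so it will not land in any cache; read it into the off-heap pool.
        if (!cacheBlock) {
          return false;
        }
        // New: likewise if caching-on-read is disabled for this block category.
        if (expectedBlockType != null
            && !cacheConf.shouldCacheBlockOnRead(expectedBlockType.getCategory())) {
          return false;
        }
        if (!cacheConf.isCombinedBlockCache()) {
          // LRUBlockCache only stores on-heap blocks, so read on-heap when the block will be cached.
          return true;
        }
        // CombinedBlockCache: DATA blocks are cached off-heap, other block types on-heap.
        return expectedBlockType != null && !expectedBlockType.isData();
      }
      {code}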

       


    People

      Assignee: Bryan Beaudreault (bbeaudreault)
      Reporter: Bryan Beaudreault (bbeaudreault)
      Votes: 0
      Watchers: 5
