HBASE-20761

FSReaderImpl#readBlockDataInternal can fail to switch to HDFS checksums in some edge cases


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Invalid
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: HFile
    • Labels: None

    Description

      One of our users reported this problem on HBase 1.2, both before and after HBASE-11625:

      Caused by: java.io.IOException: On-disk size without header provided is 131131, but block header contains 0. Block offset: 2073954793, data starts with: \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
              at org.apache.hadoop.hbase.io.hfile.HFileBlock.validateOnDiskSizeWithoutHeader(HFileBlock.java:526)
              at org.apache.hadoop.hbase.io.hfile.HFileBlock.access$700(HFileBlock.java:92)
              at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1699)
              at org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1542)
              at org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:445)
              at org.apache.hadoop.hbase.util.CompoundBloomFilter.contains(CompoundBloomFilter.java:100)
      

      The problem occurs when we read a block with HDFS checksums disabled and, due to some data corruption, end up with an empty headerBuf while trying to read the block, before the HDFS checksum failover code is reached. This causes further attempts to read the block to fail, since we keep retrying the corrupt replica instead of reporting it and trying a different one.
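
      For illustration, here is a minimal Java sketch of the failover pattern involved, under the assumption (suggested by the 1.2 code path in the stack trace above) that a checksum mismatch signals failover by returning null, while the header size validation throws an IOException directly. All names below are illustrative stand-ins, not the actual HBase internals:

      import java.io.IOException;
      import java.nio.ByteBuffer;

      public class ChecksumFailoverSketch {

        // First pass relies on HBase-level checksums. A null return means
        // "checksum failed, fall back to HDFS checksums", which lets the DFS
        // client report the corrupt replica and switch to a healthy one.
        ByteBuffer readBlockData(long offset, int onDiskSizeWithHeader) throws IOException {
          ByteBuffer blk = readBlockDataInternal(offset, onDiskSizeWithHeader, true);
          if (blk == null) {
            blk = readBlockDataInternal(offset, onDiskSizeWithHeader, false);
          }
          return blk;
        }

        ByteBuffer readBlockDataInternal(long offset, int onDiskSizeWithHeader,
            boolean useHBaseChecksum) throws IOException {
          ByteBuffer buf = pread(offset, onDiskSizeWithHeader, useHBaseChecksum);
          int sizeFromHeader = buf.getInt(0);
          // The edge case from this issue: corruption yields an all-zero header,
          // so this check throws *before* the checksum verification below can
          // return null, and the null-triggered failover in the caller never
          // runs -- every retry reads the same corrupt replica.
          if (sizeFromHeader != onDiskSizeWithHeader) {
            throw new IOException("On-disk size provided is " + onDiskSizeWithHeader
                + ", but block header contains " + sizeFromHeader);
          }
          if (useHBaseChecksum && !verifyHBaseChecksum(buf)) {
            return null; // signals the caller to fail over to HDFS checksums
          }
          return buf;
        }

        // Hypothetical stand-ins so the sketch is self-contained; a real reader
        // would issue a positional read against HDFS.
        private ByteBuffer pread(long offset, int len, boolean skipHdfsChecksum) {
          return ByteBuffer.allocate(len); // all zeros, like the corrupt header above
        }

        private boolean verifyHBaseChecksum(ByteBuffer buf) {
          return true;
        }

        public static void main(String[] args) {
          try {
            // Offset and size taken from the error message above.
            new ChecksumFailoverSketch().readBlockData(2073954793L, 131131);
          } catch (IOException e) {
            System.out.println("Failed without switching to HDFS checksums: " + e.getMessage());
          }
        }
      }

      In this shape, the IOException from the size validation escapes readBlockData before the null-triggered failover can run, which matches the behavior described above.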

          People

            Assignee: Unassigned
            Reporter: Esteban Gutierrez (esteban)
            Votes: 0
            Watchers: 8
