Hadoop HDFS
HDFS-17342

Fix DataNode may invalidates normal block causing missing block


Details

    • Reviewed

    Description

      When users read a file that is being appended to, occasional exceptions may occur, such as org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: xxx.

      This can happen when one thread is reading the block while the writer thread is simultaneously finalizing it.

      Root cause:

      1. The reader thread obtains an RBW replica from the volume map, such as blk_xxx_xxx[RBW], whose data file should be at /XXX/rbw/blk_xxx.
      2. Simultaneously, the writer thread finalizes this block, moving it from the rbw directory to the finalized directory: the data file moves from /XXX/rbw/blk_xxx to /XXX/finalized/blk_xxx.
      3. The reader thread attempts to open an input stream on the data file but encounters a FileNotFoundException, because the data file /XXX/rbw/blk_xxx or the meta file /XXX/rbw/blk_xxx_xxx no longer exists at this moment.
      4. The reader thread treats this block as corrupt, removes the replica from the volume map, and the DataNode reports the block as deleted to the NameNode.
      5. The NameNode removes this replica from the block's locations.
      6. If the file's replication factor is 1, the file will have a missing block until this DataNode runs the DirectoryScanner again.
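      The race in steps 1-3 can be reproduced outside the DataNode with plain file operations. The sketch below (hypothetical paths and class name, not actual DataNode code) has a "writer" move a block file from an rbw/ directory to a finalized/ directory after a "reader" has already resolved the old path; the reader then hits FileNotFoundException even though the block data is intact on disk:

```java
import java.io.*;
import java.nio.file.*;

// Minimal standalone simulation of the rbw -> finalized race: the reader
// caches the replica's rbw/ path, the writer finalizes (moves) the file,
// and the reader's subsequent open fails although nothing was deleted.
public class RbwFinalizeRace {
    public static boolean readerSeesMissingFile() throws Exception {
        Path root = Files.createTempDirectory("dn-volume");
        Path rbw = Files.createDirectories(root.resolve("rbw"));
        Path finalized = Files.createDirectories(root.resolve("finalized"));
        Path blockInRbw = rbw.resolve("blk_1001");
        Files.write(blockInRbw, "block-data".getBytes());

        // Reader resolves the replica's data file while it is still RBW.
        File cachedPath = blockInRbw.toFile();

        // Writer finalizes the replica: the file is moved, not deleted.
        Files.move(blockInRbw, finalized.resolve("blk_1001"));

        // Reader opens the cached rbw/ path and hits FileNotFoundException,
        // although the block still exists under finalized/.
        try (FileInputStream in = new FileInputStream(cachedPath)) {
            return false; // no race observed
        } catch (FileNotFoundException expected) {
            return true;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readerSeesMissingFile()); // prints "true"
    }
}
```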

      As described above, the FileNotFoundException encountered by the reader thread is expected, because the file has merely been moved.
      So we need to add a double check to the invalidateMissingBlock logic that verifies whether the data file or meta file actually exists before invalidating the replica, to avoid similar cases.
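      The proposed double check can be sketched as follows. This is a simplified stand-in for the FsDatasetImpl internals, not the exact Hadoop patch: the method and parameter names are illustrative, and the only claim is the guard itself, i.e. that a replica must not be invalidated while its data file or meta file is still present on disk:

```java
import java.io.File;

// Hedged sketch of the double check inside an invalidateMissingBlock-style
// path: only invalidate the replica when both the data file and the meta
// file are truly absent on disk.
public class InvalidateCheck {
    // Returns true only if the replica should be invalidated.
    public static boolean shouldInvalidate(File dataFile, File metaFile) {
        // If either file still exists, the FileNotFoundException seen by the
        // reader was transient (e.g. the replica was being finalized and its
        // files moved), so the block must NOT be reported as corrupt/deleted.
        if (dataFile.exists() || metaFile.exists()) {
            return false;
        }
        return true; // both gone: the replica really is missing on disk
    }
}
```

      With this guard, the transient FileNotFoundException from step 3 no longer escalates into the block removal of steps 4-6; the reader simply retries and finds the finalized replica.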

      People

      Assignee: Haiyang Hu
      Reporter: Haiyang Hu
      Votes: 0
      Watchers: 5
