HBASE-26849

NPE caused by WAL Compression and Replication


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Won't Fix
    • Affects Version/s: 1.7.1, 3.0.0-alpha-2, 2.4.11
    • Fix Version/s: None
    • Component/s: Replication, wal
    • Labels: None

    Description

      My cluster runs HBase 1.4.12 with WAL compression and replication enabled.

      I noticed a replication sizeOfLogQueue backlog, and after some debugging I found an NPE thrown at https://github.com/apache/hbase/blob/branch-1/hbase-common/src/main/java/org/apache/hadoop/hbase/io/util/LRUDictionary.java#L109 (see the attached screenshots).
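      To make the failure mode concrete, here is a toy dictionary, a minimal sketch rather than the real LRUDictionary (which is an LRU-bounded, bidirectional map): WAL compression only works because the writer and the reader add entries to their dictionaries in exactly the same order, so an index written on one side resolves to the same bytes on the other.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Toy stand-in for a WAL compression dictionary (not the real LRUDictionary).
        // The writer and the reader each keep one of these; indices only line up if
        // both sides add entries in exactly the same order.
        public class ToyWalDictionary {
          private final Map<String, Short> entryToIndex = new HashMap<>();
          private final List<String> indexToEntry = new ArrayList<>();

          // Writer side: emit the existing index for a known entry, otherwise add the
          // entry and emit the newly assigned index.
          public short findOrAdd(String entry) {
            Short existing = entryToIndex.get(entry);
            if (existing != null) {
              return existing;
            }
            short assigned = (short) indexToEntry.size();
            indexToEntry.add(entry);
            entryToIndex.put(entry, assigned);
            return assigned;
          }

          // Reader side: resolve an index back to its entry. This is only correct if the
          // reader replayed exactly the same sequence of additions as the writer.
          public String getEntry(short index) {
            return indexToEntry.get(index);
          }
        }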

       

      The root cause of this problem is in WALEntryStream#checkAllBytesParsed:

      resetReader does not create a new reader, so the original CompressionContext, and the dictionary inside it, is retained.
      At the same time the position is reset to 0, which means the WAL has to be read again from the beginning, but the dictionary that was never cleared is still in use. Data that is already in the LRU dictionary therefore gets added to it a second time, which corrupts its state.
      Recreating the reader at that point, so that a fresh CompressionContext is built, solves the problem; a sketch of that idea follows.
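      A hedged sketch of that fix, using hypothetical helper names rather than the actual WALEntryStream internals: when the stream has to go back to offset 0, discard the whole reader so that a brand-new CompressionContext (and therefore an empty dictionary) is built, instead of resetting the existing reader in place.

        import java.io.Closeable;
        import java.io.IOException;

        // Sketch only: the field and helpers below are stand-ins for the real
        // WALEntryStream / WAL.Reader plumbing, not actual HBase API names.
        public class ReopenSketch {
          private Closeable reader;

          void reopenFromStart(String walPath) throws IOException {
            // resetReader() would rewind the position but keep the old CompressionContext,
            // so entries that were already read once get pushed into the dictionary again.
            if (reader != null) {
              reader.close();
            }
            // A freshly opened reader builds a new CompressionContext, so the dictionary
            // starts empty and is repopulated in step with re-reading the file from 0.
            reader = openReaderAt(walPath, 0L);
          }

          // Hypothetical factory; a real implementation would open the WAL file and seek.
          private Closeable openReaderAt(String walPath, long offset) throws IOException {
            return () -> { };
          }
        }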

      I will open a PR later. However, there are other places in the current code that call resetReader or seekOnFs, and I suspect they do not take the WAL compression case into account at all...

       

      In theory, whenever the file is read again, the LRU dictionary should be rolled back as well; otherwise the read path and the write path behave inconsistently.
      The file position can be rewound to any intermediate offset at will, but the LRU dictionary cannot; a sketch of what that would require follows.
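      Purely to illustrate that point (LRUDictionary has no such API, and this is not a concrete proposal), rewinding to an arbitrary offset would only stay consistent if the dictionary state at that offset could be captured and restored together with the position:

        // Hypothetical interface, sketched only to show what a "rollback" of the
        // compression dictionary would have to look like; nothing like this exists today.
        public interface RewindableDictionary {

          // Capture the dictionary state that corresponds to the current WAL offset.
          Snapshot snapshot();

          // Restore a previously captured state before re-reading from that offset, so the
          // reader's dictionary matches what the writer's dictionary was at that point.
          void restore(Snapshot snapshot);

          // Opaque saved state; it is only meaningful when paired with the file offset
          // at which it was taken.
          interface Snapshot { }
        }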

      Attachments

        1. image-2022-03-16-14-30-15-247.png (187 kB, tianhang tang)
        2. image-2022-03-16-14-25-49-276.png (691 kB, tianhang tang)



            People

              Assignee: tangtianhang (tianhang tang)
              Reporter: tangtianhang (tianhang tang)
              Votes: 0
              Watchers: 6
