
HBASE-8755: A new write thread model for HLog to improve the overall HBase write throughput


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 0.98.0, 0.99.0
    • Component/s: Performance, wal
    • Labels: None
    • Hadoop Flags: Reviewed
    • Release Note: Redo of the thread model writing edits to the WAL; slower when there are few clients, but as concurrency rises it makes for better throughput.

    Description

      In the current write model, each write handler thread (executing put()) individually goes through a full 'append (hlog local buffer) => HLog writer append (write to hdfs) => HLog writer sync (sync hdfs)' cycle for each write, which incurs heavy contention on updateLock and flushLock.

      The only existing optimization, checking whether the current syncTillHere > txid in the hope that another thread has already written/synced past one's own txid to hdfs (allowing the write/sync to be omitted), actually helps much less than expected.
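
      For concreteness, the cycle and that check look roughly like the following minimal, self-contained Java sketch; all identifiers here (OldWritePathSketch, writerAppend, writerSync) are illustrative stand-ins, not the actual HLog code:

{code:java}
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model of the old per-handler cycle (identifiers are illustrative
 * stand-ins, not the actual HLog code). Each put handler runs the full
 * append => writer-append => sync cycle itself, so all handlers
 * serialize on updateLock and flushLock.
 */
public class OldWritePathSketch {
  private final List<byte[]> localBuffer = new ArrayList<>(); // guarded by updateLock
  private long nextTxid = 0;            // guarded by updateLock
  private volatile long syncTillHere = 0; // highest txid known to be durable

  private final Object updateLock = new Object();
  private final Object flushLock = new Object();

  /** Called concurrently by every put handler thread. */
  public void put(byte[] edit) {
    long txid;
    synchronized (updateLock) {        // contention point #1
      localBuffer.add(edit);           // "append (hlog local buffer)"
      txid = ++nextTxid;
    }
    // The optimization described above: skip the write/sync when some
    // other handler's cycle has already covered our txid.
    if (syncTillHere >= txid) return;
    synchronized (flushLock) {         // contention point #2
      if (syncTillHere >= txid) return; // re-check under the lock
      List<byte[]> batch;
      long syncUpTo;
      synchronized (updateLock) {
        batch = new ArrayList<>(localBuffer);
        localBuffer.clear();
        syncUpTo = nextTxid;
      }
      writerAppend(batch);             // "HLog writer append (write to hdfs)"
      writerSync();                    // "HLog writer sync (sync hdfs)"
      syncTillHere = syncUpTo;
    }
  }

  private void writerAppend(List<byte[]> batch) { /* stand-in for the hdfs write */ }
  private void writerSync() { /* stand-in for the hdfs sync */ }
}
{code}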

      Three of my colleagues (Ye Hangjun / Wu Zesheng / Zhang Peng) at Xiaomi proposed a new write thread model for writing hdfs sequence files, and the prototype implementation shows a 4X throughput improvement (from 17000 to 70000+).

      I applied this new write thread model to HLog, and the performance test in our test cluster shows about a 3X throughput improvement (from 12150 to 31520 for 1 RS, and from 22000 to 70000 for 5 RS); the 1 RS write throughput (1K row size) even beats that of BigTable (the Percolator paper published in 2011 says BigTable's write throughput then was 31002). I can provide the detailed performance test results if anyone is interested.

      The change for the new write thread model is as below (a sketch of the resulting pipeline follows the list):
      1> All put handler threads append their edits to HLog's local pending buffer; (this notifies the AsyncWriter thread that there are new edits in the local buffer)
      2> All put handler threads wait in HLog.syncer() for the underlying threads to finish the sync that covers their txid;
      3> A single AsyncWriter thread is responsible for retrieving all the buffered edits from HLog's local pending buffer and writing them to hdfs (hlog.writer.append); (this notifies the AsyncFlusher thread that there are new writes to hdfs that need a sync)
      4> A single AsyncFlusher thread is responsible for issuing a sync to hdfs to persist the writes made by the AsyncWriter; (this notifies the AsyncNotifier thread that the sync watermark has increased)
      5> A single AsyncNotifier thread is responsible for notifying all pending put handler threads that are waiting in HLog.syncer();
      6> There is no LogSyncer thread any more, since the AsyncWriter/AsyncFlusher threads now do the job it used to do.
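
      Under the same caveat (illustrative stand-in identifiers, not the patch's actual code), here is a minimal, self-contained Java sketch of this pipeline: handler threads block in append() while the three dedicated threads hand batches down the chain via per-stage locks and txid watermarks.

{code:java}
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model of the proposed pipeline (identifiers are illustrative
 * stand-ins, not the actual HBase code). Put handlers only buffer an
 * edit and wait; the single AsyncWriter, AsyncFlusher and AsyncNotifier
 * threads batch the appends, batch the syncs, and wake the waiting
 * handlers. Each *Loop method below is meant to run on its own daemon
 * thread (startup wiring omitted).
 */
public class PipelinedLogSketch {
  private final List<byte[]> pendingEdits = new ArrayList<>(); // guarded by bufferLock
  private long nextTxid = 0;     // guarded by bufferLock
  private long writtenTxid = 0;  // guarded by writeLock: last txid appended to hdfs
  private long flushedTxid = 0;  // guarded by flushLock: last txid durably synced
  private long notifiedTxid = 0; // guarded by notifyLock: last txid handlers woke for

  private final Object bufferLock = new Object(); // handlers -> AsyncWriter
  private final Object writeLock = new Object();  // AsyncWriter -> AsyncFlusher
  private final Object flushLock = new Object();  // AsyncFlusher -> AsyncNotifier
  private final Object notifyLock = new Object(); // AsyncNotifier -> handlers

  /** Steps 1> and 2>: append one edit, then block until its txid is synced. */
  public void append(byte[] edit) throws InterruptedException {
    long txid;
    synchronized (bufferLock) {
      pendingEdits.add(edit);
      txid = ++nextTxid;
      bufferLock.notify();        // there are new edits in the local buffer
    }
    synchronized (notifyLock) {
      while (notifiedTxid < txid) notifyLock.wait(); // until a sync covers txid
    }
  }

  /** Step 3>: drain the whole pending buffer and append it to hdfs. */
  private void asyncWriterLoop() throws InterruptedException {
    while (true) {
      List<byte[]> batch;
      long batchTxid;
      synchronized (bufferLock) {
        while (pendingEdits.isEmpty()) bufferLock.wait();
        batch = new ArrayList<>(pendingEdits);
        pendingEdits.clear();
        batchTxid = nextTxid;     // every edit up to this txid is in the batch
      }
      writerAppend(batch);        // the hdfs append work, done outside any lock
      synchronized (writeLock) {
        writtenTxid = batchTxid;
        writeLock.notify();       // there are new writes that need a sync
      }
    }
  }

  /** Step 4>: one sync persists every append up to the current watermark. */
  private void asyncFlusherLoop() throws InterruptedException {
    long flushed = 0;             // only this thread advances the flush watermark
    while (true) {
      long toFlush;
      synchronized (writeLock) {
        while (writtenTxid == flushed) writeLock.wait();
        toFlush = writtenTxid;
      }
      writerSync();               // the hdfs sync work, done outside any lock
      flushed = toFlush;
      synchronized (flushLock) {
        flushedTxid = toFlush;
        flushLock.notify();       // the sync watermark increased
      }
    }
  }

  /** Step 5>: wake every handler whose txid the last sync covered. */
  private void asyncNotifierLoop() throws InterruptedException {
    long notified = 0;
    while (true) {
      long upTo;
      synchronized (flushLock) {
        while (flushedTxid == notified) flushLock.wait();
        upTo = flushedTxid;
      }
      notified = upTo;
      synchronized (notifyLock) {
        notifiedTxid = upTo;
        notifyLock.notifyAll();   // wake all handlers waiting in append()
      }
    }
  }

  private void writerAppend(List<byte[]> batch) { /* stand-in for the hdfs append */ }
  private void writerSync() { /* stand-in for the hdfs sync */ }
}
{code}

      The point of the pipeline is that the number of hdfs append/sync calls is decoupled from the number of handler threads: one sync can make all concurrently buffered edits durable at once, which is why throughput improves as concurrency rises even though a single isolated write now crosses more thread handoffs.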

      Attachments

        1. 8755-syncer.patch (16 kB, Himanshu Vashishtha)
        2. 8755trunkV2.txt (25 kB, Michael Stack)
        3. 8755v8.txt (29 kB, Michael Stack)
        4. 8755v9.txt (28 kB, Michael Stack)
        5. HBASE-8755-0.94-V0.patch (26 kB, Honghua Feng)
        6. HBASE-8755-0.94-V1.patch (27 kB, Honghua Feng)
        7. HBASE-8755-0.96-v0.patch (26 kB, Honghua Feng)
        8. HBASE-8755-trunk-V0.patch (26 kB, Honghua Feng)
        9. HBASE-8755-trunk-V1.patch (25 kB, Honghua Feng)
        10. HBASE-8755-trunk-v4.patch (26 kB, Honghua Feng)
        11. HBASE-8755-trunk-v6.patch (26 kB, Honghua Feng)
        12. HBASE-8755-trunk-v7.patch (28 kB, Honghua Feng)
        13. HBASE-8755-v5.patch (27 kB, Himanshu Vashishtha)
        14. thread.out (434 kB, Michael Stack)


            People

              Assignee: Honghua Feng (fenghh)
              Reporter: Honghua Feng (fenghh)
              Votes: 1
              Watchers: 49
