HBASE-27683: Should support single call queue mode for RPC handlers while separating by request type


Details

    • Type: Improvement
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 2.5.3
    • Fix Version/s: None
    • Component/s: Performance, rpc
    • Labels: None

    Description

      Currently we not only separate call queues by request type, e.g. read, write, scan, but also control how queues are divided among handlers via the config `hbase.ipc.server.callqueue.handler.factor`, whose description is as follows:

      Factor to determine the number of call queues.
        A value of 0 means a single queue shared between all the handlers.
        A value of 1 means that each handler has its own queue. 
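
      For reference, the number of call queues is derived from the handler count and this factor; the sketch below re-derives that computation from the documented semantics (the class and helper names are illustrative, not the actual HBase source):

```java
// Minimal sketch: how hbase.ipc.server.callqueue.handler.factor maps to a
// queue count, following the documented semantics quoted above. The exact
// rounding in the HBase source may differ slightly.
public final class CallQueueMath {

  // Illustrative helper, not an HBase API.
  static int numCallQueues(int handlerCount, float handlerFactor) {
    return Math.max(1, Math.round(handlerCount * handlerFactor));
  }

  public static void main(String[] args) {
    int handlers = 30; // e.g. hbase.regionserver.handler.count=30
    System.out.println(numCallQueues(handlers, 0.0f)); // 1: single shared queue
    System.out.println(numCallQueues(handlers, 0.5f)); // 15: two handlers per queue
    System.out.println(numCallQueues(handlers, 1.0f)); // 30: one queue per handler
  }
}
```

      When read/write/scan separation is enabled (e.g. via `hbase.ipc.server.callqueue.read.ratio` and `hbase.ipc.server.callqueue.scan.ratio`), this queue count is then divided among the request types.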

      But I think what we want is neither only one queue for all requests nor one queue per handler: we also want a mode where each request type has its own single queue.

      Splitting queues within the same request type can leave some handlers too idle and others too busy under the current balanced/random RPC executor framework. In the extreme case, each handler has its own queue; then if a large request is dispatched to a handler, because the executor dispatches calls without considering queue size or handler state, subsequently arriving requests queue up until the handler completes the large, slow request. Meanwhile other handlers may process small requests quickly, but they cannot help with or grab calls from the busy queue; they must wait for jobs to arrive in their own queues. As a result, the queue time of some requests is long even while there are idle handlers.
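
      To illustrate the dispatch behavior described above, here is a toy model (not HBase code) of a balancer that assigns calls to queues without inspecting queue depth or handler state:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadLocalRandom;

// Toy model of a random queue balancer: each call lands in a uniformly
// random queue, regardless of how deep that queue already is.
public final class RandomDispatchDemo {

  public static void main(String[] args) {
    final int numQueues = 4; // e.g. handler.factor = 1 with 4 handlers
    List<BlockingQueue<String>> queues = new ArrayList<>();
    for (int q = 0; q < numQueues; q++) {
      queues.add(new LinkedBlockingQueue<>());
    }

    // Dispatch ignores queue depth and handler state.
    for (int i = 0; i < 20; i++) {
      int q = ThreadLocalRandom.current().nextInt(numQueues);
      queues.get(q).add("call-" + i);
    }

    // Depths end up uneven; a handler stuck on a slow call in one queue
    // cannot be helped by idle handlers bound to the other queues.
    for (int q = 0; q < numQueues; q++) {
      System.out.println("queue " + q + " depth = " + queues.get(q).size());
    }
  }
}
```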

      We have also seen circumstances where the queue time of calls is much larger than the process time, sometimes twice as large or more. Restarting the slow RS makes these problems disappear.

      By using a single call queue for each request type, we can make full use of the handler resources.
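
      As a sketch of the proposed mode (hypothetical code, not an existing HBase configuration or API): each request type keeps exactly one queue, shared by all handlers assigned to that type, so any idle handler of a type can pick up the next call of that type:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of "single call queue per request type": one shared
// queue per type, drained by all handlers of that type, so no call waits
// behind a slow call while a same-type handler sits idle.
public final class PerTypeSingleQueue {

  enum CallType { READ, WRITE, SCAN }

  private final BlockingQueue<Runnable> readQueue = new LinkedBlockingQueue<>();
  private final BlockingQueue<Runnable> writeQueue = new LinkedBlockingQueue<>();
  private final BlockingQueue<Runnable> scanQueue = new LinkedBlockingQueue<>();

  public void start(int readHandlers, int writeHandlers, int scanHandlers) {
    spawnHandlers(readQueue, readHandlers, "read");
    spawnHandlers(writeQueue, writeHandlers, "write");
    spawnHandlers(scanQueue, scanHandlers, "scan");
  }

  public void dispatch(CallType type, Runnable call) {
    switch (type) {
      case READ: readQueue.add(call); break;
      case WRITE: writeQueue.add(call); break;
      case SCAN: scanQueue.add(call); break;
    }
  }

  // All handlers of a type block on the same queue; whichever is idle
  // takes the next call, so no head-of-line blocking within a type.
  private static void spawnHandlers(BlockingQueue<Runnable> queue, int n, String name) {
    for (int i = 0; i < n; i++) {
      Thread t = new Thread(() -> {
        try {
          while (true) {
            queue.take().run();
          }
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
      }, name + "-handler-" + i);
      t.setDaemon(true);
      t.start();
    }
  }
}
```

      Compared with setting `hbase.ipc.server.callqueue.handler.factor` to 0, which the documentation describes as a single queue shared between all the handlers, this keeps the read/write/scan separation while avoiding head-of-line blocking within a type.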

            People

              Assignee: Xiaolin Ha
              Reporter: Xiaolin Ha