Hadoop YARN / YARN-2928 (YARN Timeline Service v.2: alpha 1) / YARN-3595

Performance optimization using connection cache of Phoenix timeline writer


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Major
    • Resolution: Later
    • Affects Version/s: None
    • Target Version/s: 2.9.0
    • Component/s: timelineserver
    • Labels: None

    Description

      The story behind the connection cache in the Phoenix timeline storage is a little long. In YARN-3033 we planned to have a shared writer layer for all collectors in the same collector manager, so that the collectors can reuse the same storage-layer connection. This model suits conventional storage-layer connections, which are typically heavy-weight.

      Phoenix, on the other hand, implements its own connection layer to be light-weight and thread-unsafe. To make these connections work with our "multiple collector, single writer" model, we are adding a thread-indexed connection cache. However, many performance-critical factors have yet to be tested.
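      As a concrete illustration of the model above, below is a minimal sketch of a thread-indexed connection cache built on Guava, keyed by the writer thread's id so that a thread-unsafe Phoenix connection is never shared between threads. The class and field names, the connection URL handling, and the cache defaults are illustrative assumptions only, not the actual patch.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;

/**
 * Sketch only: a thread-indexed Phoenix connection cache. Each writer thread
 * looks up a connection keyed by its own thread id, so a single thread-unsafe
 * Phoenix connection is never used by more than one thread.
 */
public class ThreadIndexedConnectionCache {

  // Hypothetical defaults; picking the real values is part of this JIRA.
  private static final long MAX_CACHED_CONNECTIONS = 64;
  private static final long IDLE_EXPIRY_MINUTES = 10;

  private final LoadingCache<Long, Connection> connections;

  public ThreadIndexedConnectionCache(final String connectionUrl) {
    // connectionUrl is e.g. "jdbc:phoenix:<zookeeper-quorum>"
    this.connections = CacheBuilder.newBuilder()
        .maximumSize(MAX_CACHED_CONNECTIONS)
        .expireAfterAccess(IDLE_EXPIRY_MINUTES, TimeUnit.MINUTES)
        // Close connections on eviction; note the race discussed below.
        .removalListener((RemovalListener<Long, Connection>) notification -> {
          try {
            notification.getValue().close();
          } catch (SQLException e) {
            // A real implementation would log this.
          }
        })
        .build(new CacheLoader<Long, Connection>() {
          @Override
          public Connection load(Long threadId) throws SQLException {
            return DriverManager.getConnection(connectionUrl);
          }
        });
  }

  /** Returns the connection bound to the calling thread, creating it on demand. */
  public Connection getConnectionForCurrentThread() throws Exception {
    return connections.get(Thread.currentThread().getId());
  }
}
{code}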

      In this JIRA we are tracking performance optimization efforts on this connection cache. We previously had a draft, but it ran into one implementation challenge around cache eviction: there may be races between the Guava cache's removal-listener calls (which close the connection) and normal uses of the connection. We need to carefully define how the two synchronize.
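      One possible way to frame that synchronization, sketched below under the assumption that each writer acquires and releases the connection around a write: wrap the cached connection in a small lease count, have the removal listener only mark the wrapper as evicted, and let whichever side finishes last perform the actual close. The wrapper and its methods are hypothetical, and this only illustrates the shape of the problem; for instance, a thread acquiring after eviction would still need to be routed to a fresh cache entry.

{code:java}
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Sketch only: serialize cache eviction against in-flight use by counting
 * leases. The removal listener calls markEvicted() instead of close(); the
 * connection is closed only once no writer is still using it.
 */
class LeasedConnection {
  private final Connection connection;
  private final AtomicInteger leases = new AtomicInteger(0);
  private volatile boolean evicted = false;

  LeasedConnection(Connection connection) {
    this.connection = connection;
  }

  /** Called by a writer thread before using the connection. */
  Connection acquire() {
    leases.incrementAndGet();
    return connection;
  }

  /** Called by the writer thread when its write completes. */
  void release() throws SQLException {
    if (leases.decrementAndGet() == 0 && evicted) {
      connection.close(); // JDBC close() is a no-op if already closed
    }
  }

  /** Called from the cache's removal listener in place of close(). */
  void markEvicted() throws SQLException {
    evicted = true;
    if (leases.get() == 0) {
      connection.close();
    }
  }
}
{code}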

      Performance-wise, at this early stage we need to understand:

      1. Whether the current thread-based indexing is an appropriate approach, or whether there is a better way to index the connections.
      2. The best size of the cache, presumably exposed as the default value of a configuration setting (see the configuration sketch below).
      3. How long we should keep an idle connection in the cache.

      Please feel free to add to this list.
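      For items 2 and 3 above, one way the knobs could surface is as ordinary YARN configuration properties fed into Guava's maximumSize and expireAfterAccess, as in the sketch below. The property names and default values are hypothetical placeholders; determining the real defaults is exactly what this JIRA is meant to do.

{code:java}
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;

import com.google.common.cache.CacheBuilder;

/**
 * Sketch only: expose the cache size and idle expiry as configuration knobs.
 * The property names and defaults below are placeholders, not existing YARN
 * settings.
 */
public final class PhoenixWriterCacheConfig {
  static final String CACHE_MAX_SIZE =
      "yarn.timeline-service.phoenix-writer.connection-cache.max-size";
  static final int DEFAULT_CACHE_MAX_SIZE = 64;

  static final String CACHE_IDLE_EXPIRY_MS =
      "yarn.timeline-service.phoenix-writer.connection-cache.idle-expiry-ms";
  static final long DEFAULT_CACHE_IDLE_EXPIRY_MS = 10 * 60 * 1000L;

  /** Builds a cache skeleton whose size and expiry come from the given conf. */
  static CacheBuilder<Object, Object> newCacheBuilder(Configuration conf) {
    return CacheBuilder.newBuilder()
        .maximumSize(conf.getInt(CACHE_MAX_SIZE, DEFAULT_CACHE_MAX_SIZE))
        .expireAfterAccess(
            conf.getLong(CACHE_IDLE_EXPIRY_MS, DEFAULT_CACHE_IDLE_EXPIRY_MS),
            TimeUnit.MILLISECONDS);
  }

  private PhoenixWriterCacheConfig() {
  }
}
{code}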

People

    Assignee: Li Lu (gtcarrera9)
    Reporter: Li Lu (gtcarrera9)
    Votes: 0
    Watchers: 6
