Spark / SPARK-21527

Use buffer limit in order to take advantage of Java NIO Util's BufferCache


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 2.3.0
    • Component/s: Spark Core
    • Labels: None

    Description

      Right now, ChunkedByteBuffer#writeFully does not slice the buffer before writing. Observe the code in Java NIO's Util (sun.nio.ch.Util) below:

      public static ByteBuffer getTemporaryDirectBuffer(int size) {
          BufferCache cache = bufferCache.get();
          ByteBuffer buf = cache.get(size);
          if (buf != null) {
              return buf;
          } else {
              // No suitable buffer in the cache so we need to allocate a new
              // one. To avoid the cache growing then we remove the first
              // buffer from the cache and free it.
              if (!cache.isEmpty()) {
                  buf = cache.removeFirst();
                  free(buf);
              }
              return ByteBuffer.allocateDirect(size);
          }
      }
      

      If we slice the buffer first with a fixed size, the cached buffer can satisfy every subsequent request, so a direct buffer is allocated only on the first write call.
      When allocateDirect creates a fresh buffer each time, we cannot control when that buffer is freed. This once caused a memory issue in our production cluster.
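      A minimal sketch of the slicing idea described above, assuming a fixed chunk-size constant (the name WRITE_CHUNK_SIZE and the class name are illustrative, not Spark's actual code). Capping the buffer's limit before each channel write bounds the size of the temporary direct buffer that getTemporaryDirectBuffer is asked for, so the BufferCache can keep reusing one buffer of that size:

      ```java
      import java.io.IOException;
      import java.nio.ByteBuffer;
      import java.nio.channels.WritableByteChannel;

      public class WriteFullySketch {
          // Illustrative fixed slice size; any bounded constant works.
          static final int WRITE_CHUNK_SIZE = 64 * 1024;

          // Write the whole buffer, but expose at most WRITE_CHUNK_SIZE bytes
          // to each channel.write() call by temporarily lowering the limit.
          public static void writeFully(WritableByteChannel channel, ByteBuffer bytes)
                  throws IOException {
              while (bytes.remaining() > 0) {
                  int originalLimit = bytes.limit();
                  bytes.limit(Math.min(originalLimit, bytes.position() + WRITE_CHUNK_SIZE));
                  while (bytes.hasRemaining()) {
                      channel.write(bytes);
                  }
                  // Restore the real limit so the next slice is visible.
                  bytes.limit(originalLimit);
              }
          }
      }
      ```

      Because every write now requests a temporary direct buffer of at most WRITE_CHUNK_SIZE bytes, the NIO cache hit path (`cache.get(size)`) succeeds after the first allocation instead of falling through to allocateDirect for each larger write.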


            People

              Assignee: cane zhoukang
              Reporter: cane zhoukang
              Votes: 0
              Watchers: 5
