  Commons Net / NET-709

IMAP Memory considerations with large ‘FETCH’ sizes.


Details

    • Important

    Description

      IMAP Memory considerations with large ‘FETCH’ sizes.

       

      The following comments concern classes in the org.apache.commons.net.imap package.

       

      Consider the following IMAP FETCH exchange between a client (>) and a server (<):

      > A654 FETCH 1:2 (BODY[TEXT])

      < * 1 FETCH (BODY[TEXT] {80000000}\r\n

      < * 2 FETCH …

      < A654 OK FETCH completed

       

      The first untagged response (* 1 FETCH …) contains a literal of {80000000} octets, roughly 80 MB.

       

      After reviewing the source, it is my understanding that the entire 80 MB sequence of data will be read into Java memory even when using an IMAPChunkListener. According to the documentation:

       

      Implement this interface and register it via IMAP.setChunkListener(IMAPChunkListener) in order to get access to multi-line partial command responses. Useful when processing large FETCH responses.
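
      For reference, registration might look roughly like the sketch below. This is a minimal, hypothetical usage example rather than code from the project: it uses org.apache.commons.net.imap.IMAPClient, assumes IMAPChunkListener can be supplied as a lambda (it declares a single chunkReceived(IMAP) method), and the loop body is placeholder processing.

          IMAPClient client = new IMAPClient();
          // Called once per multi-line partial response (e.g. each "* n FETCH ..." carrying a literal).
          client.setChunkListener(imap -> {
              for (String line : imap.getReplyStrings()) {
                  // process or stream out the buffered lines of this partial response
              }
              return true; // returning true clears the buffered reply lines, freeing memory
          });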

       

      It is apparent that each partial FETCH response is read in full (80 MB here) before the IMAPChunkListener is invoked; only then are the read lines discarded (freeing the memory).

       

      Back to the example:

      > A654 FETCH 1:2 (BODY[TEXT])

      < * 1 FETCH (BODY[TEXT] {80000000}\r\n

      …. <— read in full into memory, then discarded after IMAPChunkListener is called

      < * 2 FETCH (BODY[TEXT] {250}\r\n

      …. <— read in full into memory, then discarded after IMAPChunkListener is called

      < A654 OK FETCH completed

       

      As shown above, the chunk listener fires once per partial FETCH response, but it does not prevent a large partial response from being loaded into memory in full.

       

      Let’s review the code:

       

          int literalCount = IMAPReply.literalCount(line);

      Above, IMAPReply.literalCount(line) parses the announced size of the literal, in our case 80000000 (80 MB) for the first partial FETCH response.
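
      As a small illustration (assuming IMAPReply.literalCount(String) is accessible to application code; if not, treat this as pseudocode for what the method does):

          // Returns the octet count announced by a trailing {nnn}, or a negative value if there is none.
          int count = IMAPReply.literalCount("* 1 FETCH (BODY[TEXT] {80000000}");
          // count is 80000000 here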

       

       

          final boolean isMultiLine = literalCount >= 0;
          while (literalCount >= 0) {
              line = _reader.readLine();
              if (line == null) {
                  throw new EOFException("Connection closed without indication.");
              }
              replyLines.add(line);
              literalCount -= line.length() + 2; // Allow for CRLF
          }

      Above, literalCount starts at 80000000 and is decremented by each line's length plus 2 (for the CRLF) until it becomes negative; by that point the entire 80 MB has been accumulated in replyLines and the while loop exits.

       

          if (isMultiLine) {
              final IMAPChunkListener il = chunkListener;
              if (il != null) {
                  final boolean clear = il.chunkReceived(this);
                  if (clear) {
                      fireReplyReceived(IMAPReply.PARTIAL, getReplyString());
                      replyLines.clear();
                  }
              }
          }

      Only now, after the entire 80 MB has been loaded into memory, is the IMAPChunkListener invoked and the accumulated lines cleared, freeing the memory.

       

      I’m considering modifying the getReply() method, shown above, so that large literals are delivered in chunks rather than read in full, preventing the entire 80 MB literal from ever being held in memory at once.

       

      This would be configurable so as not to break existing users of the API. Something like setBreakLargeLiteralSize(true): when breakUpLargeLiteralSize is true, a maxLiteralBuffer value would be used to deliver the literal in chunks, so that all 80 MB is never loaded at once. Implementations of IMAPChunkListener would need to handle this behavior when it is enabled. By default the chunking would be disabled, so existing users are unaffected; essentially an opt-in feature that reduces the risk. A rough sketch follows below.
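
      The sketch below shows how the literal-reading loop in getReply() might change under this proposal. It is hypothetical: breakUpLargeLiteralSize and maxLiteralBuffer are the proposed new settings described above and do not exist in the current API.

          int buffered = 0;
          while (literalCount >= 0) {
              line = _reader.readLine();
              if (line == null) {
                  throw new EOFException("Connection closed without indication.");
              }
              replyLines.add(line);
              literalCount -= line.length() + 2; // Allow for CRLF
              buffered += line.length() + 2;
              // Proposed: once the configured buffer is full, hand the partially read
              // literal to the chunk listener instead of waiting for all 80 MB.
              if (breakUpLargeLiteralSize && buffered >= maxLiteralBuffer && chunkListener != null) {
                  if (chunkListener.chunkReceived(this)) {
                      fireReplyReceived(IMAPReply.PARTIAL, getReplyString());
                      replyLines.clear();
                  }
                  buffered = 0;
              }
          }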

       

      What are your thoughts or concerns about this? Do you agree?
