
SPARK-22284: Code of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.1.0
    • Fix Version/s: 2.2.1, 2.3.0
    • Component/s: Optimizer, PySpark, SQL
    • Labels: None

    Description

      I am using PySpark 2.1.0 in a production environment and am trying to join two DataFrames, one of which is very large and has complex nested structures.

      Basically, I load both DataFrames and cache them.
      Then, from the large DataFrame, I extract three nested values and save them as top-level columns.
      Finally, I join with the smaller DataFrame on these three columns.
      In short, the code looks like this:

      dataFrame.read......cache()
      dataFrameSmall.read.......cache()
      dataFrame = dataFrame.selectExpr(['*','nested.Value1 AS Value1','nested.Value2 AS Value2','nested.Value3 AS Value3'])
      dataFrame = dataFrame.dropDuplicates().join(dataFrameSmall, ['Value1','Value2','Value3'])
      dataFrame.count()
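
      For clarity, here is a self-contained sketch of the same steps; the paths, file format, and application name are placeholders for illustration, not the actual production job:

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.appName("join-job").getOrCreate()

      # Placeholder paths and format; the real input is much larger and deeply nested.
      dataFrame = spark.read.parquet("/data/large_nested").cache()
      dataFrameSmall = spark.read.parquet("/data/small_lookup").cache()

      # Promote three nested fields to top-level columns so they can serve as join keys.
      dataFrame = dataFrame.selectExpr(
          '*',
          'nested.Value1 AS Value1',
          'nested.Value2 AS Value2',
          'nested.Value3 AS Value3')

      # Deduplicate, then join with the smaller DataFrame on the extracted columns.
      dataFrame = dataFrame.dropDuplicates().join(
          dataFrameSmall, ['Value1', 'Value2', 'Value3'])

      dataFrame.count()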
      

      This is the error I get when the job reaches the count():

      org.apache.spark.SparkException: Job aborted due to stage failure: Task 11 in stage 7.0 failed 4 times, most recent failure: Lost task 11.3 in stage 7.0 (TID 11234, somehost.com, executor 10): java.util.concurrent.ExecutionException: java.lang.Exception: failed to compile: org.codehaus.janino.JaninoRuntimeException: Code of method "apply_1$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$SpecificUnsafeProjection;Lorg/apache/spark/sql/catalyst/InternalRow;)V" of class "org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection" grows beyond 64 KB
      

      I have seen many tickets here with similar issues, but no proper solution. Most of the fixes target versions up to Spark 2.1.0, so I don't know whether running on Spark 2.2.0 would help. In any case, I cannot change the Spark version, since it is in production.
      I have also tried setting

      spark.sql.codegen.wholeStage=false
      

      but I still get the same error.
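
      For reference, this is roughly how the flag can be set from PySpark, either when building the session or at runtime (the application name is a placeholder):

      from pyspark.sql import SparkSession

      # Disable whole-stage code generation when creating the session ...
      spark = (SparkSession.builder
               .appName("join-job")  # placeholder
               .config("spark.sql.codegen.wholeStage", "false")
               .getOrCreate())

      # ... or flip it at runtime for an existing session.
      spark.conf.set("spark.sql.codegen.wholeStage", "false")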

      The job has worked well until now, even with large datasets, but this batch is apparently larger, and that is the only thing that changed. Is there any workaround for this?
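
      One idea I may still try (untested, just an attempt to shrink the generated projection) is to prune the large DataFrame down to the columns actually needed before the dropDuplicates()/join; the column list below is hypothetical:

      # Hypothetical list of columns needed downstream; fewer fields should mean
      # a smaller generated UnsafeProjection.
      needed_cols = ['Value1', 'Value2', 'Value3', 'other_field']
      slim = dataFrame.select(*needed_cols)

      result = slim.dropDuplicates().join(dataFrameSmall, ['Value1', 'Value2', 'Value3'])
      result.count()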

      Attachments

        Issue Links

        Activity


          People

            Assignee: Kazuaki Ishizaki (kiszk)
            Reporter: Ben (someonehere15)
            Votes: 0
            Watchers: 7

            Dates

              Created:
              Updated:
              Resolved:
