[SPARK-32658] Partition length number overflow in `PartitionWriterStream`


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 3.0.1, 3.1.0
    • Component/s: Spark Core
    • Labels: None

    Description

      A Spark user reported a `FetchFailedException: Stream is corrupted` error after upgrading their workload to 3.0. The issue occurs when the shuffle output from a single task is very large (~5 GB). It was introduced by https://github.com/apache/spark/commit/abef84a868e9e15f346eea315bbab0ec8ac8e389 : `PartitionWriterStream` declares the partition length as an `int` value, while it should be a `long`. A minimal sketch of the overflow is shown below.
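      The following is a minimal, self-contained sketch (hypothetical class name, not the actual Spark code) of how an `int` byte counter wraps once a single partition exceeds `Integer.MAX_VALUE` (~2 GiB) bytes, producing a wrong recorded partition length:

```java
import java.io.OutputStream;

// Hypothetical stand-in for the buggy counting stream: tracks how many
// bytes were written to a partition, mirroring the `int` counter that
// `PartitionWriterStream` used before the fix.
class CountingOutputStream extends OutputStream {
    private int count = 0; // BUG: should be `long`; wraps past ~2 GiB

    @Override
    public void write(int b) {
        count += 1;
    }

    @Override
    public void write(byte[] buf, int off, int len) {
        count += len;
    }

    int getCount() {
        return count;
    }

    public static void main(String[] args) {
        CountingOutputStream out = new CountingOutputStream();
        byte[] chunk = new byte[1 << 20]; // 1 MiB per write
        long fiveGiB = 5L << 30;          // ~5 GB, the reported task output size
        for (long written = 0; written < fiveGiB; written += chunk.length) {
            out.write(chunk, 0, chunk.length);
        }
        // Prints 1073741824 (1 GiB) instead of 5368709120 (5 GiB): the int
        // counter wrapped around, so the recorded partition length no longer
        // matches the bytes on disk and readers see a corrupted stream.
        System.out.println(out.getCount());
    }
}
```

      The fix shipped in 3.0.1 and 3.1.0 declares the counter as a `long`, so partition lengths beyond 2 GiB are recorded correctly.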


    People

      Assignee: Xingbo Jiang (jiangxb1987)
      Reporter: Xingbo Jiang (jiangxb1987)
