Details
- Type: Improvement
- Status: Resolved
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 3.0.0
- Fix Version/s: None
- Component/s: None
Description
Spark has a setting spark.driver.maxResultSize (see https://spark.apache.org/docs/latest/configuration.html#application-properties):
Limit of total size of serialized results of all partitions for each Spark action (e.g. collect) in bytes. Should be at least 1M, or 0 for unlimited. Jobs will be aborted if the total size is above this limit. Having a high limit may cause out-of-memory errors in driver (depends on spark.driver.memory and memory overhead of objects in JVM). Setting a proper limit can protect the driver from out-of-memory errors.
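For context, the limit applies to any action that ships results back to the driver, and it can be set like any other application property. A minimal sketch (the app name and the 1g value are only illustrative):

import org.apache.spark.sql.SparkSession

// Illustrative only: cap the serialized results of each action at 1 GiB.
val spark = SparkSession.builder()
  .appName("max-result-size-demo")
  .config("spark.driver.maxResultSize", "1g")
  .getOrCreate()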
This setting can be very useful for constraining the memory that the Spark driver needs for a specific action. However, the limit is checked before the data is decompressed, see https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala#L662
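For reference, the driver-side check has roughly the following shape (a simplified paraphrase, not the exact source): the size being accumulated is that of the serialized, still-compressed result blob, so the decompressed payload never enters the comparison.

// Simplified paraphrase of the check in TaskSetManager.canFetchMoreResults.
var totalResultSize = 0L

def canFetchMoreResults(serializedSize: Long, maxResultSize: Long): Boolean = {
  // serializedSize is the size of the compressed result blob.
  totalResultSize += serializedSize
  if (maxResultSize > 0 && totalResultSize > maxResultSize) {
    false // Spark aborts the task set with an error message at this point
  } else {
    true
  }
}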
Even if the compressed data is below the limit, the uncompressed data can still be far above it. To protect the driver, we should also impose a limit on the uncompressed data. We could do this in https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala#L344
I propose adding a new config option spark.driver.maxUncompressedResultSize.
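A minimal sketch of what this could look like; both the config entry and the guard below are hypothetical (the name follows the proposal, the builder pattern mirrors existing entries in org.apache.spark.internal.config, and this would have to live inside the Spark source tree since ConfigBuilder is private[spark]):

import org.apache.spark.SparkException
import org.apache.spark.internal.config.ConfigBuilder
import org.apache.spark.network.util.ByteUnit

// Hypothetical config entry, modeled on spark.driver.maxResultSize.
val MAX_UNCOMPRESSED_RESULT_SIZE =
  ConfigBuilder("spark.driver.maxUncompressedResultSize")
    .doc("Limit of the total decompressed size of results collected to " +
      "the driver, in bytes. 0 means unlimited.")
    .bytesConf(ByteUnit.BYTE)
    .createWithDefault(0L)

// Hypothetical guard for the decode loop in SparkPlan: count the
// decompressed bytes as rows are materialized and fail fast once the
// limit is exceeded, instead of building the full result first.
var uncompressedBytes = 0L

def checkUncompressedLimit(rowSizeInBytes: Long, limit: Long): Unit = {
  uncompressedBytes += rowSizeInBytes
  if (limit > 0 && uncompressedBytes > limit) {
    throw new SparkException(s"Total decompressed result size ($uncompressedBytes " +
      s"bytes) exceeds spark.driver.maxUncompressedResultSize ($limit bytes)")
  }
}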
A simple repro of this with the Spark shell:
> printf 'a%.0s' {1..100000} > test.csv # create a 100 KB file
> ./bin/spark-shell --conf "spark.driver.maxResultSize=10000"

scala> val df = spark.read.format("csv").load("/Users/dvogelbacher/test.csv")
df: org.apache.spark.sql.DataFrame = [_c0: string]

scala> val results = df.collect()
results: Array[org.apache.spark.sql.Row] = Array([aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa...

scala> results(0).getString(0).size
res0: Int = 100000
Even though we set maxResultSize to 10 KB, we collect a result that is 100 KB uncompressed: the run of identical characters compresses to well under the limit, so the size check passes.
Issue Links
- duplicates SPARK-28613: Spark SQL action collect just judge size of compressed RDD's size, not accurate enough (Resolved)