Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Fixed
- Fix Version/s: 3.0.0
- Labels: None
Description
Currently, if we want to configure `spark.sql.files.maxPartitionBytes` to 256 megabytes, we must set `spark.sql.files.maxPartitionBytes=268435456`, which is very unfriendly to users.
If we instead set it as `spark.sql.files.maxPartitionBytes=256M`, we encounter this exception:
Exception in thread "main" java.lang.IllegalArgumentException: spark.sql.files.maxPartitionBytes should be long, but was 256M
at org.apache.spark.internal.config.ConfigHelpers$.toNumber(ConfigBuilder.scala:34)
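Below is a minimal sketch of the workaround today versus the behaviour this improvement asks for. The application name, master, and the comment about accepting a size suffix are illustrative assumptions, not part of the actual patch:

```scala
import org.apache.spark.sql.SparkSession

object MaxPartitionBytesExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("maxPartitionBytes-example") // hypothetical app name
      .master("local[*]")
      .getOrCreate()

    // Current behaviour: the value must be a plain long in bytes,
    // so 256 MB has to be written as 268435456.
    spark.conf.set("spark.sql.files.maxPartitionBytes", 256L * 1024 * 1024)

    // Desired behaviour after this improvement (assumption): a human-readable
    // size string is accepted instead, e.g.
    //   spark.conf.set("spark.sql.files.maxPartitionBytes", "256m")

    spark.stop()
  }
}
```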
Attachments
Issue Links
- links to