SPARK-27256

If the configuration is used to set the number of bytes, we'd better use `bytesConf`.


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 3.0.0
    • Component/s: Spark Core, SQL
    • Labels: None

    Description

      Currently, if we want to configure `spark.sql.files.maxPartitionBytes` to 256 megabytes, we must set `spark.sql.files.maxPartitionBytes=268435456` (256 * 1024 * 1024 bytes), which is unfriendly to users.

      And if we set it like this: `spark.sql.files.maxPartitionBytes=256M`, we encounter this exception:

      Exception in thread "main" java.lang.IllegalArgumentException: spark.sql.files.maxPartitionBytes should be long, but was 256M
              at org.apache.spark.internal.config.ConfigHelpers$.toNumber(ConfigBuilder.scala:34)

People

    • Assignee: liuxian (10110346)
    • Reporter: liuxian (10110346)
    • Votes: 0
    • Watchers: 4
