Spark / SPARK-29692

SparkContext.defaultParallelism should reflect resource limits when resource limits are set


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.0.0
    • Fix Version/s: None
    • Component/s: Spark Core
    • Labels: None

Description

    With the new GPU/FPGA resource support in Spark, defaultParallelism may not be computed correctly. Specifically, defaultParallelism can be much higher than the total number of tasks that can actually run concurrently, for example when executors have many more cores than GPUs and each task requires a GPU.

    Steps to reproduce:
    Start a cluster with spark.executor.resource.gpu.amount less than the number of cores per executor. Set spark.task.resource.gpu.amount = 1 and keep cores per task (spark.task.cpus) at 1. Each task then needs one GPU, so concurrency is bounded by the GPU count, yet sc.defaultParallelism is still derived from the core count, as in the sketch below.
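    A minimal Scala sketch of the reproduction; the master URL, the single executor, the core count of 4, and the discovery-script path are illustrative assumptions rather than values from this report:

        import org.apache.spark.{SparkConf, SparkContext}

        // Illustrative setup: one executor with 4 CPU cores but only 1 GPU,
        // and each task requiring 1 GPU and 1 CPU core. The master URL and
        // the discovery-script path are placeholders.
        val conf = new SparkConf()
          .setAppName("SPARK-29692 repro")
          .setMaster("spark://master:7077")
          .set("spark.executor.cores", "4")
          .set("spark.executor.resource.gpu.amount", "1")
          .set("spark.executor.resource.gpu.discoveryScript", "/path/to/getGpus.sh")
          .set("spark.task.resource.gpu.amount", "1")
          .set("spark.task.cpus", "1")
        val sc = new SparkContext(conf)

        // The GPU requirement caps concurrency at 1 task on this executor,
        // but defaultParallelism still reflects the 4 CPU cores:
        println(sc.defaultParallelism) // prints 4; 1 would match the GPU limit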

Attachments

Activity

People

    Assignee: Unassigned
    Reporter: Bago Amirbekian
    Votes: 1
    Watchers: 2

Dates

    Created:
    Updated: