SPARK-30448

Accelerator-aware scheduling: enforce cores as the limiting resource


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.0.0
    • Fix Version/s: 3.0.0
    • Component/s: Spark Core
    • Labels: None

    Description

      For the first version of accelerator-aware scheduling (SPARK-27495), the SPIP stated that we could support dynamic allocation because we would have a strict requirement that no resources are wasted. This means the number of slots each executor has could be calculated from the number of cores and task CPUs, just as is done today.
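
      As a minimal sketch of that calculation (illustrative names only, not Spark's actual internals): with no wasted resources, every resource yields the same slot count, so cores alone determine it.

        // Hypothetical sketch of the per-executor slot computation described
        // above; names are illustrative, not Spark's actual internals.
        // With no wasted resources, cores alone determine the slot count:
        def slotsPerExecutor(executorCores: Int, taskCpus: Int): Int =
          executorCores / taskCpus

        // e.g. spark.executor.cores=8, spark.task.cpus=2 => 4 slots
        assert(slotsPerExecutor(8, 2) == 4)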

      Somewhere along the line of development we relaxed that and now only warn when resources are being wasted. This breaks the dynamic allocation logic when the limiting resource is no longer cores: we will request fewer executors than we really need to run everything.
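
      For example (illustrative numbers, not from the actual issue): an executor with 8 cores and 2 GPUs, where each task needs 1 CPU and 1 GPU, can really run only 2 tasks concurrently, but dynamic allocation still sizes the cluster by the 8 core-based slots.

        // Illustrative numbers only, showing the under-request described above.
        val executorCores = 8
        val taskCpus      = 1
        val executorGpus  = 2
        val taskGpus      = 1

        val coreSlots = executorCores / taskCpus   // 8 slots by cores
        val gpuSlots  = executorGpus / taskGpus    // 2 slots by GPUs
        val pendingTasks = 16

        // Dynamic allocation sizes the cluster by core-based slots:
        val requested = math.ceil(pendingTasks.toDouble / coreSlots).toInt // 2
        // But GPUs only allow 2 concurrent tasks per executor, so to run
        // everything at once we actually need:
        val needed    = math.ceil(pendingTasks.toDouble / gpuSlots).toInt  // 8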

      We have to enforce that cores remain the limiting resource, so we should throw an exception if they are not.
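
      A hedged sketch of such a check (hypothetical names, not the actual patch): cores are the limiting resource exactly when every other task resource allows at least as many concurrent tasks per executor as the core-based slot count.

        // Hypothetical validation sketch: throw at configuration time if any
        // other resource would make cores a non-limiting resource.
        def validateLimitingResource(
            coreSlots: Int,
            otherSlots: Map[String, Int]): Unit = {
          otherSlots.foreach { case (name, slots) =>
            if (slots < coreSlots) {
              throw new IllegalArgumentException(
                s"Resource '$name' allows only $slots concurrent tasks per " +
                s"executor, fewer than the $coreSlots core-based slots; " +
                "cores must be the limiting resource.")
            }
          }
        }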

      I guess we could make this a requirement only when dynamic allocation is on, but to keep the behavior consistent I would say we just require it across the board.


            People

              Assignee: Thomas Graves (tgraves)
              Reporter: Thomas Graves (tgraves)
              Votes: 0
              Watchers: 3

              Dates

                Created:
                Updated:
                Resolved: