[SPARK-38194] Make memory overhead factor configurable


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 3.4.0
    • Fix Version/s: 3.3.0, 3.4.0
    • Component/s: Kubernetes, Mesos, YARN
    • Labels: None

    Description

      Currently, if the memory overhead is not provided for a YARN job, it defaults to 10% of the respective driver/executor memory. This 10% factor is hard-coded, and the only way to increase the memory overhead is to set an exact overhead value. We have seen jobs use more than 10% of their memory as overhead, and it would be helpful to be able to configure the default overhead factor so that the overhead doesn't need to be pre-calculated for each driver/executor memory size.
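
      As a rough illustration of the sizing rule this change makes tunable, the sketch below mirrors Spark's container-sizing arithmetic (a factor applied to the container memory, with a 384 MiB floor); the overheadMib helper is hypothetical, and the 0.10 default matches the hard-coded factor described above:

        object OverheadSizing {
          // Minimum overhead Spark will request, in MiB.
          val MinOverheadMib = 384L

          // Overhead requested when no explicit memoryOverhead is set:
          // max(factor * containerMemoryMib, 384 MiB).
          def overheadMib(memoryMib: Long, factor: Double): Long =
            math.max((memoryMib * factor).toLong, MinOverheadMib)

          def main(args: Array[String]): Unit = {
            println(overheadMib(8192, 0.10)) // hard-coded 10% of 8 GiB -> 819 MiB
            println(overheadMib(8192, 0.20)) // raised factor of 20% -> 1638 MiB
          }
        }

      The change as resolved exposes the factor via the spark.driver.memoryOverheadFactor and spark.executor.memoryOverheadFactor configurations (defaulting to 0.10); an explicitly set spark.driver.memoryOverhead or spark.executor.memoryOverhead still takes precedence.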

      Attachments

        Activity

          People

            Assignee: Adam Binford (kimahriman)
            Reporter: Adam Binford (kimahriman)
            Votes: 0
            Watchers: 5
