Spark / SPARK-33005 Kubernetes GA Preparation / SPARK-32661

Spark executors on K8S do not request extra memory for off-heap allocations


Details

    • Type: Sub-task
    • Status: Closed
    • Priority: Minor
    • Resolution: Duplicate
    • Affects Version/s: 3.0.0, 3.0.1, 3.1.0
    • Fix Version/s: None
    • Component/s: Kubernetes
    • Labels: None

    Description

      Off-heap memory allocations are configured with `spark.memory.offHeap.enabled=true` and `spark.memory.offHeap.size=<size>`. Spark on YARN adds the off-heap memory size to the executor container resources, but Spark on Kubernetes does not request the corresponding memory for executor pods. Currently this can be worked around by using `spark.executor.memoryOverhead` to reserve memory for off-heap allocations. This proposes making Spark on Kubernetes behave as it does on YARN, that is, adding `spark.memory.offHeap.size` to the memory request for executor containers.
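
      For illustration, a minimal Scala sketch of the proposed accounting, assuming memory values are expressed in MiB; the names and structure are illustrative only and do not reflect the actual Spark Kubernetes scheduler code:

      ```scala
      // Hypothetical sketch: compute the executor pod memory request if the
      // off-heap size were included, as it is on YARN.
      object ExecutorMemorySketch {
        def containerMemoryMiB(
            executorMemoryMiB: Long,   // spark.executor.memory
            memoryOverheadMiB: Long,   // spark.executor.memoryOverhead
            offHeapEnabled: Boolean,   // spark.memory.offHeap.enabled
            offHeapSizeMiB: Long       // spark.memory.offHeap.size
        ): Long = {
          val offHeap = if (offHeapEnabled) offHeapSizeMiB else 0L
          // YARN already adds the off-heap size; the proposal is to do the
          // same when sizing the Kubernetes executor pod memory request.
          executorMemoryMiB + memoryOverheadMiB + offHeap
        }

        def main(args: Array[String]): Unit = {
          // e.g. 4096 MiB heap + 400 MiB overhead + 2048 MiB off-heap = 6544 MiB
          println(containerMemoryMiB(4096, 400, offHeapEnabled = true, offHeapSizeMiB = 2048))
        }
      }
      ```

      Without such a change, the off-heap portion has to be folded manually into `spark.executor.memoryOverhead` so that the pod memory request covers it.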

      Attachments

        Activity


          People

            Assignee: Unassigned
            Reporter: Luca Canali
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
