Details
- Type: Improvement
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 1.0.2
- Component/s: None
Description
A common complaint with Spark in a multi-tenant environment is that applications have a fixed allocation that doesn't grow and shrink with their resource needs. We're blocked on YARN-1197 for dynamically changing the resources within executors, but we can still allocate and discard whole executors.
It would be useful to have heuristics that:
- Request more executors when many pending tasks are building up
- Discard executors when they have been idle for some time
See the latest design doc for more information.
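The two heuristics above can be sketched as a simple scaling policy: request executors in exponentially growing rounds while a task backlog persists, and release an executor once it has been idle past a timeout. Below is a minimal illustrative sketch; all names and thresholds are assumptions for exposition and do not correspond to Spark's actual internals or API.

```python
# Hypothetical sketch of the two scaling heuristics described above.
# None of these names are Spark's real internals; they only illustrate the policy.

def executors_to_request(pending_tasks, current, last_round, max_executors):
    """Request exponentially more executors each round while tasks are backlogged."""
    if pending_tasks <= 0:
        return 0
    next_round = max(1, last_round * 2)              # 1, 2, 4, 8, ... per round
    return min(next_round, max_executors - current)  # never exceed the cap

def should_release(idle_seconds, idle_timeout, current, min_executors):
    """Discard an executor once it has sat idle past the timeout, keeping a floor."""
    return idle_seconds >= idle_timeout and current > min_executors

# Example: 10 pending tasks, 2 executors running, last round requested 1, cap of 8
print(executors_to_request(10, 2, 1, 8))   # 2 more executors this round
print(should_release(75, 60, 3, 1))        # True: idle past the 60s timeout
```

The exponential ramp-up lets a backlogged application reach its cap quickly without over-requesting on a brief spike, while the idle timeout and floor prevent thrashing when load drops.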
Attachments
Issue Links
- is depended upon by
  - SPARK-3145 Hive on Spark umbrella (Resolved)
- is related to
  - HIVE-7768 Integrate with Spark executor scaling [Spark Branch] (Resolved)
  - SPARK-5349 Spark standalone should support dynamic resource scaling (Closed)
  - YARN-1197 Support changing resources of an allocated container (Open)
- relates to
  - SPARK-4922 Support dynamic allocation for coarse-grained Mesos (Closed)
  - SPARK-4403 Elastic allocation (spark.dynamicAllocation.enabled) results in tasks never being executed (Closed)
  - SPARK-4751 Support dynamic allocation for standalone mode (Closed)
  - SPARK-20624 SPIP: Add better handling for node shutdown (In Progress)