Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 3.0.0
- Fix Version/s: None
- Labels: None
Description
With the new GPU/FPGA resource scheduling support in Spark, defaultParallelism may not be computed correctly. Specifically, defaultParallelism can be much higher than the number of tasks that can actually run concurrently, for example when workers have many more cores than GPUs.
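As an illustration with hypothetical numbers (the 16-core / 2-GPU executor shape and executor count below are assumptions, not from this report): a core-based defaultParallelism overstates achievable concurrency whenever the GPU requirement is the limiting resource.

```python
# Hypothetical cluster shape; all figures are illustrative, not from the report.
cores_per_executor = 16
gpus_per_executor = 2
num_executors = 4

cpus_per_task = 1  # spark.task.cpus
gpus_per_task = 1  # spark.task.resource.gpu.amount

# Core-based slot count, which is what defaultParallelism reflects.
core_slots = num_executors * (cores_per_executor // cpus_per_task)

# Concurrency actually achievable: the scarcest resource is the limit.
gpu_slots = num_executors * (gpus_per_executor // gpus_per_task)
true_slots = min(core_slots, gpu_slots)

print(core_slots, gpu_slots, true_slots)  # 64 core slots vs 8 GPU slots
```

With these numbers, defaultParallelism would report 64 while only 8 tasks can run at once, so stages sized from it create roughly 8x more partitions than there are usable task slots.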
Steps to reproduce:
Start a cluster where spark.executor.resource.gpu.amount is less than the number of cores per executor. Set spark.task.resource.gpu.amount = 1 and keep spark.task.cpus at 1.
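The steps above might look like the following spark-shell invocation; this is a sketch, and the 8-core / 1-GPU sizing and the discovery-script path are assumptions for illustration:

```shell
# Executor advertises fewer GPUs than cores: 1 GPU vs 8 cores (illustrative sizing).
spark-shell \
  --conf spark.executor.cores=8 \
  --conf spark.task.cpus=1 \
  --conf spark.executor.resource.gpu.amount=1 \
  --conf spark.task.resource.gpu.amount=1 \
  --conf spark.executor.resource.gpu.discoveryScript=/path/to/getGpus.sh

# In the shell, sc.defaultParallelism tracks total cores (8 per executor),
# but only one task per executor can actually run, since each task needs a GPU.
```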