Details
- Type: Question
- Status: Resolved
- Priority: Major
- Resolution: Invalid
- Affects Version/s: 2.0.0
- Fix Version/s: None
- Component/s: None
Description
Hi, all:
When I run a training job in Spark in yarn-client mode with executor-cores=20 (less than the vcores=24 on each node) and num-executors=4 (my cluster has 4 slaves), there is always one node whose computing time is much larger than the others'.
I checked some blogs, and they say executor-cores should be set to fewer than 5 when there are many concurrent threads. I tried executor-cores=4 and num-executors=20, and that worked.
But I don't know why. Can you give an explanation? Thank you very much.
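For reference, the two setups described above would be submitted roughly like this (a sketch; the script name `train_job.py` and the master/deploy-mode flags are assumptions, only the resource numbers come from the question):

```shell
# Configuration that showed skew: 4 "fat" executors with 20 cores each
# (script name is hypothetical)
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 4 \
  --executor-cores 20 \
  train_job.py

# Configuration that worked: 20 "thin" executors with 4 cores each
spark-submit \
  --master yarn \
  --deploy-mode client \
  --num-executors 20 \
  --executor-cores 4 \
  train_job.py
```

Both setups give 80 task slots in total; they differ only in how those slots are grouped into executor JVMs.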