Details
- Type: New Feature
- Status: Resolved
- Priority: Major
- Resolution: Resolved
- Affects Version/s: 2.4.4
- Fix Version/s: None
Description
In one of our production streaming jobs, which has more than 1,000 executors with 20 cores each, Spark spends a significant portion of time (~30s) sending out the `ShuffleStatus`. We found two issues:
- In the driver's message loop, `serializedMapStatus` is called inside a synchronized block. When the job scales up, this causes lock contention.
- When the job is large, the `MapStatus` is huge as well, so serialization and compression are slow.
This work aims to address the first problem; a minimal sketch of one possible fix follows.
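For illustration only, here is a minimal Scala sketch of the first problem and one way it could be addressed. The class and method names (`ShuffleStatusSketch`, `serializeMapStatuses`) are hypothetical stand-ins for Spark's internal `ShuffleStatus.serializedMapStatus`, and the fix shown (narrowing the lock so the expensive serialization runs outside the synchronized block, with the result cached) is one possible approach, not necessarily the patch that resolved this issue.

```scala
// Hypothetical simplification of ShuffleStatus; the real class lives in
// org.apache.spark and is considerably more involved.
class ShuffleStatusSketch(shuffleId: Int) {
  // Serialized map statuses, cached so only the first request pays the cost.
  @volatile private var cachedSerialized: Array[Byte] = _

  private def serializeMapStatuses(): Array[Byte] = {
    // Placeholder for the expensive serialize + compress step.
    Array.fill(4 * 1024 * 1024)(0: Byte)
  }

  // Problematic shape: serialization happens while holding the monitor, so
  // the driver's message loop blocks every other request for its duration.
  def serializedMapStatusLocked(): Array[Byte] = synchronized {
    if (cachedSerialized == null) cachedSerialized = serializeMapStatuses()
    cachedSerialized
  }

  // Sketch of a fix: check the cache under the lock, do the expensive work
  // outside it, then publish the result back under the lock.
  def serializedMapStatus(): Array[Byte] = {
    val cached = synchronized { cachedSerialized }
    if (cached != null) return cached
    val bytes = serializeMapStatuses() // expensive work, no lock held
    synchronized {
      if (cachedSerialized == null) cachedSerialized = bytes
      cachedSerialized
    }
  }
}
```

The trade-off in this shape is that two concurrent first requests may both serialize (one result is discarded), in exchange for never holding the lock during the slow path, which keeps the message loop responsive.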