Details
- Type: Bug
- Status: Open
- Priority: P3
- Resolution: Unresolved
- Affects Version: 2.16.0
- Fix Version: None
- Component: None
Description
It was reported on the mailing list that the Spark runner does not respect the user-defined Spark default parallelism configuration (spark.default.parallelism). We should investigate and, if this is the case, ensure that a user-defined configuration is always respected. Runner optimizations should apply only to default (unconfigured) values; otherwise we will confuse users and prevent them from tuning Spark to suit their needs.
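For reference, a minimal sketch of how a user would expect their parallelism setting to reach the runner, assuming the pipeline is submitted via spark-submit with --conf spark.default.parallelism set; the class name and pipeline contents here are illustrative, not taken from the report:
{code:java}
import org.apache.beam.runners.spark.SparkPipelineOptions;
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;

public class ParallelismRepro {
  public static void main(String[] args) {
    // Expected invocation (illustrative):
    //   spark-submit --conf spark.default.parallelism=200 ... ParallelismRepro
    // The report is that the runner's own sizing logic can override this
    // user-set value instead of honoring it.
    SparkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).as(SparkPipelineOptions.class);
    options.setRunner(SparkRunner.class);

    Pipeline p = Pipeline.create(options);
    // Trivial pipeline; the observed task count should reflect the
    // configured parallelism, not a runner-derived default.
    p.apply(Create.of(1, 2, 3, 4));
    p.run().waitUntilFinish();
  }
}
{code}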
Issue Links
- is related to: BEAM-8191 Multiple Flatten.pCollections() transforms generate an overwhelming number of tasks (Resolved)