Details
- Type: New Feature
- Status: Resolved
- Priority: P2
- Resolution: Fixed
Description
Currently, the Spark runner translates batch pipelines into RDD code, so it doesn't benefit from the optimizations that DataFrames enjoy (though DataFrames are not type-safe).
With Datasets, batch pipelines will benefit from these optimizations; and since Datasets are also type-safe and encoder-based, they seem like a much better fit for the Beam model.
Looking ahead, Datasets are a good choice because they are also the basis for the future of Spark streaming (Structured Streaming), so this will hopefully lay a solid foundation for a native integration between Spark 2.0 and Beam.
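To illustrate the difference described above, here is a minimal sketch contrasting the two translation targets. This is not the runner's actual translation code; it assumes a local SparkSession and made-up data, and simply shows that the same logic expressed on a Dataset goes through encoders and the Catalyst optimizer, while the RDD version runs opaque JVM lambdas that Spark cannot optimize.

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

object DatasetVsRdd {
  def main(args: Array[String]): Unit = {
    // Local session for illustration only (assumption, not the runner's setup).
    val spark = SparkSession.builder()
      .appName("dataset-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    val words = Seq("beam", "spark", "dataset")

    // RDD path: closures are opaque to Spark, so no plan-level optimization.
    val rddLengths = spark.sparkContext
      .parallelize(words)
      .map(_.length)
      .collect()

    // Dataset path: same logic, but typed and encoder-based, so Catalyst
    // can optimize the plan and Tungsten can use its binary row format.
    val dsLengths: Dataset[Int] = words.toDS().map(_.length)
    dsLengths.show()

    spark.stop()
  }
}
```

The Dataset version keeps compile-time types (`Dataset[Int]`) while still participating in Spark SQL's optimizer, which is the combination the issue argues fits the Beam model better than either RDDs or untyped DataFrames.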
Issue Links
- duplicates BEAM-8470: Create a new Spark runner based on Spark Structured streaming framework (Triage Needed)