Details
Type: Improvement
Status: In Progress
Priority: Minor
Resolution: Unresolved
Affects Version/s: 3.1.0
Labels: None
Description
When a PipelineModel is saved or loaded, all of its stages are saved or loaded sequentially. For a PipelineModel with many stages, even though each individual stage's save/load takes under a second, the total time for the whole PipelineModel can reach several minutes. It should be trivial to parallelize the save/load of the stages in the SharedReadWrite object.
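A minimal sketch of the proposed change, using Scala Futures to issue the per-stage writes concurrently. This is an illustration only, not the actual SharedReadWrite code: the helper name saveStagesInParallel is hypothetical, and the per-stage directory layout is simplified to "stages/&lt;index&gt;" for the example.

import org.apache.hadoop.fs.Path
import org.apache.spark.ml.util.MLWritable
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

// Hypothetical replacement for the sequential loop in SharedReadWrite.
// Each stage writes to its own subdirectory, so the writes are independent
// and can safely run concurrently.
def saveStagesInParallel(stages: Array[MLWritable], path: String): Unit = {
  val futures = stages.zipWithIndex.map { case (stage, idx) =>
    Future {
      // simplified stage path; the real code derives it from uid and index
      stage.write.save(new Path(path, s"stages/$idx").toString)
    }
  }
  // block until every stage has been written
  futures.foreach(f => Await.result(f, Duration.Inf))
}

Loading could be parallelized the same way, collecting the Futures' results back into the stages array in index order.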
To reproduce:
import org.apache.spark.ml._
import org.apache.spark.ml.feature.VectorAssembler

val outputPath = "..."
val stages = (1 to 100).map { i =>
  new VectorAssembler().setInputCols(Array("input")).setOutputCol("o" + i)
}
val p = new Pipeline().setStages(stages.toArray)
val data = Seq(1, 1, 1).toDF("input")
val pm = p.fit(data)
pm.save(outputPath)