Spark / SPARK-28958

pyspark.ml function parity


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Major
    • Resolution: Resolved
    • Affects Version/s: 3.1.0
    • Fix Version/s: None
    • Component/s: ML, PySpark
    • Labels: None

    Description

      I looked into the class hierarchies on both the Python and Scala sides and found that they are quite different, which hurts parity and makes the codebase hard to maintain.

      The main inconvenience is that most models in pyspark.ml do not expose any param getters or setters.
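
      For illustration, a minimal sketch of the kind of param access this parity work should enable on a fitted model (assuming a local SparkSession and a toy DataFrame; the getter/setter names mirror the Scala API and describe the intended outcome, not the behavior at the time this issue was filed):

      from pyspark.sql import SparkSession
      from pyspark.ml.linalg import Vectors
      from pyspark.ml.classification import LinearSVC

      spark = SparkSession.builder.master("local[1]").getOrCreate()
      train = spark.createDataFrame(
          [(0.0, Vectors.dense(1.0, 0.0)), (1.0, Vectors.dense(0.0, 1.0))],
          ["label", "features"])

      model = LinearSVC(maxIter=5, regParam=0.01).fit(train)

      # Goal: the fitted model exposes the same param getters/setters as the Scala side.
      print(model.getRegParam())        # getter inherited from the shared params mixin
      model.setPredictionCol("pred")    # setter defined on the model class
      print(model.transform(train).columns)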

      On the Python side, I think we need to:

      1. remove the setters generated by _shared_params_code_gen.py;

      2. add common abstract classes mirroring the Scala side, such as JavaPredictor/JavaClassificationModel/JavaProbabilisticClassifier;

      3. for each algorithm, add its params mixin (trait), such as LinearSVCParams;

      4. since the shared params no longer carry setters, add explicit setters in the right places (see the sketch after this list).

      Unfortunately, if we do 1 (remove the setters generated by _shared_params_code_gen.py), all algorithms (classification/regression/clustering/features/fpm/recommendation) need to be modified in one batch.
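
      A minimal sketch of items 2-4, assuming the shared-param mixins in pyspark.ml.param.shared; the class name _LinearSVCParams and the exact base classes are illustrative only, and the real classes would also extend the Java estimator/model wrapper bases:

      from pyspark.ml.param.shared import HasMaxIter, HasRegParam, HasPredictionCol

      # Item 3: a per-algorithm params mixin shared by the estimator and its model.
      # Shared params now contribute getters only (item 1 removed the generated setters).
      class _LinearSVCParams(HasRegParam, HasMaxIter, HasPredictionCol):
          pass

      # Item 4: explicit setters are added where they belong.
      class LinearSVC(_LinearSVCParams):          # the real class would also extend JavaEstimator
          def setRegParam(self, value):
              return self._set(regParam=value)

          def setMaxIter(self, value):
              return self._set(maxIter=value)

      class LinearSVCModel(_LinearSVCParams):     # the real class would also extend JavaModel
          # The model inherits the getters from the mixin and adds only the setters
          # that still make sense after fitting; item 2's abstract classes would host
          # the setters common to all classification models.
          def setPredictionCol(self, value):
              return self._set(predictionCol=value)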

      The Scala side also needs some small improvements, but I think those can be left alone for now.

      Attachments

        Activity


          People

            Assignee: Unassigned
            Reporter: Ruifeng Zheng (podongfeng)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved:
