  Spark / SPARK-26881

Scaling issue with Gramian computation for RowMatrix: too many results sent to driver


    Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 2.2.0
    • Fix Version/s: 3.0.0
    • Component/s: MLlib
    • Labels:
      None

      Description

      This issue hit me when running PCA on a large dataset (~1 billion rows, ~30k columns).

      Computing the Gramian of a large RowMatrix is enough to reproduce the issue.

       

      The problem arises in the treeAggregate phase of the Gramian matrix computation: the results sent to the driver are enormous.
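
      For illustration, here is a minimal Scala sketch of the kind of job that reproduces it; the column count, row count and partitioning below are arbitrary assumptions, not the exact workload that hit me:

      import scala.util.Random

      import org.apache.spark.mllib.linalg.Vectors
      import org.apache.spark.mllib.linalg.distributed.RowMatrix
      import org.apache.spark.sql.SparkSession

      object GramianScaleRepro {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder().appName("GramianScaleRepro").getOrCreate()
          val sc = spark.sparkContext

          val numCols = 10000        // packed upper triangle is n*(n+1)/2 doubles, ~400 MB per partial result
          val numRowsTotal = 10000000L
          val numPartitions = 2500   // with depth 2, about sqrt(2500) = 50 partial results reach the driver

          // Random dense rows: only the shape matters for triggering the problem.
          val rows = sc.parallelize(0L until numRowsTotal, numPartitions)
            .map(_ => Vectors.dense(Array.fill(numCols)(Random.nextDouble())))

          val mat = new RowMatrix(rows)
          // computeGramianMatrix aggregates the partial Gramians with treeAggregate
          // at the fixed default depth of 2, so the final level sends ~50 x 400 MB
          // of task results to the driver and exceeds spark.driver.maxResultSize.
          println(mat.computeGramianMatrix().numRows)
        }
      }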

      A potential solution would be to replace the hard-coded depth (2) of the tree aggregation with a heuristic computed from the number of partitions, the driver max result size, and the memory size of the dense vectors being aggregated; see below for more detail:

      (nb_partitions)^(1/depth) * dense_vector_size <= driver_max_result_size
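
      As a sketch only (the object and method names below are hypothetical, not taken from an actual patch), solving the inequality above for the depth could look like this:

      object TreeAggregateDepthHeuristic {
        /**
         * Smallest treeAggregate depth satisfying
         *   numPartitions^(1/depth) * aggregateSizeInBytes <= maxResultSizeInBytes
         * i.e. depth >= log(numPartitions) / log(maxResultSizeInBytes / aggregateSizeInBytes).
         */
        def idealDepth(numPartitions: Int,
                       aggregateSizeInBytes: Long,
                       maxResultSizeInBytes: Long): Int = {
          require(aggregateSizeInBytes < maxResultSizeInBytes,
            "a single aggregated dense vector already exceeds spark.driver.maxResultSize")
          // Number of partial results the driver can receive at the final level.
          val resultsDriverCanHold = maxResultSizeInBytes.toDouble / aggregateSizeInBytes
          val depth = math.ceil(math.log(numPartitions) / math.log(resultsDriverCanHold)).toInt
          math.max(depth, 2) // never go below the current default depth of 2
        }
      }

      For the Gramian of an n-column RowMatrix, the aggregated value is the packed upper triangle, i.e. roughly n * (n + 1) / 2 * 8 bytes, which is what would be plugged in as aggregateSizeInBytes under that assumption.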

      I have a potential fix ready (currently testing it at scale), but I'd like to hear the community's opinion on such a fix before investing my time in a clean pull request.

       

      Note that I only faced this issue with Spark 2.2, but I suspect it affects later versions as well.

       

              People

               • Assignee:
                 Rafael RENAUDIN-AVINO (gagafunctor)
               • Reporter:
                 Rafael RENAUDIN-AVINO (gagafunctor)
               • Votes:
                 0
               • Watchers:
                 2

                Dates

                • Created:
                • Updated:
                • Resolved: