SPARK-3080: ArrayIndexOutOfBoundsException in ALS for Large Datasets


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Cannot Reproduce
    • Affects Version/s: 1.1.0, 1.2.0
    • Fix Version/s: None
    • Component/s: MLlib
    • Labels: None

    Description

      The stack trace is below:

      java.lang.ArrayIndexOutOfBoundsException: 2716
      org.apache.spark.mllib.recommendation.ALS$$anonfun$org$apache$spark$mllib$recommendation$ALS$$updateBlock$1.apply$mcVI$sp(ALS.scala:543)
      scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
      org.apache.spark.mllib.recommendation.ALS.org$apache$spark$mllib$recommendation$ALS$$updateBlock(ALS.scala:537)
      org.apache.spark.mllib.recommendation.ALS$$anonfun$org$apache$spark$mllib$recommendation$ALS$$updateFeatures$2.apply(ALS.scala:505)
      org.apache.spark.mllib.recommendation.ALS$$anonfun$org$apache$spark$mllib$recommendation$ALS$$updateFeatures$2.apply(ALS.scala:504)
      org.apache.spark.rdd.MappedValuesRDD$$anonfun$compute$1.apply(MappedValuesRDD.scala:31)
      org.apache.spark.rdd.MappedValuesRDD$$anonfun$compute$1.apply(MappedValuesRDD.scala:31)
      scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
      org.apache.spark.util.collection.ExternalAppendOnlyMap.insertAll(ExternalAppendOnlyMap.scala:138)
      org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$5.apply(CoGroupedRDD.scala:159)
      org.apache.spark.rdd.CoGroupedRDD$$anonfun$compute$5.apply(CoGroupedRDD.scala:158)
      scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
      scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
      scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
      scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
      org.apache.spark.rdd.CoGroupedRDD.compute(CoGroupedRDD.scala:158)
      org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      org.apache.spark.rdd.MappedValuesRDD.compute(MappedValuesRDD.scala:31)
      org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      org.apache.spark.rdd.FlatMappedValuesRDD.compute(FlatMappedValuesRDD.scala:31)
      org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
      org.apache.spark.rdd.FlatMappedRDD.compute(FlatMappedRDD.scala:33)
      org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
      org.apache.spark.rdd.RDD.iterator(RDD.scala:229)

      This happened after the dataset was sub-sampled.
      Dataset size: ~12 billion ratings
      Setup: 55 r3.8xlarge EC2 instances
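
      For reference, a minimal sketch of the kind of MLlib ALS job that
      exercises the updateBlock code path shown in the trace above. The input
      path, rank, iteration count, lambda, and block count here are
      illustrative assumptions, not values taken from this report:

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.mllib.recommendation.{ALS, Rating}

      object ALSLargeDataset {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(new SparkConf().setAppName("ALSLargeDataset"))

          // Hypothetical input: CSV lines of "userId,productId,rating".
          // The ~12B-rating dataset from this report is not public.
          val ratings = sc.textFile("hdfs:///path/to/ratings.csv").map { line =>
            val Array(user, product, rating) = line.split(',')
            Rating(user.toInt, product.toInt, rating.toDouble)
          }

          // ALS.train partitions users and products into blocks and iterates
          // over them in updateBlock (ALS.scala:543 in the trace above).
          // All hyperparameters here are assumed values.
          val model = ALS.train(ratings, rank = 50, iterations = 10,
            lambda = 0.01, blocks = 440)

          println(s"Trained ALS model with rank ${model.rank}")
          sc.stop()
        }
      }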


    People

      Assignee: Xiangrui Meng (mengxr)
      Reporter: Burak Yavuz (brkyvz)
      Votes: 1
      Watchers: 6
