Spark / SPARK-1133

Add a new small files input for MLlib, which will return an RDD[(fileName, content)]


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 1.0.0
    • Fix Version/s: 1.0.0
    • Component/s: Input/Output

    Description

      As I am working on an LDA (Latent Dirichlet Allocation) implementation for Spark MLlib, I find that a small-files input API is useful, so I have written a smallTextFiles() method to support it.

      smallTextFiles() reads a directory of text files and returns an RDD[(String, String)], where the first String is the file name and the second is the content of that text file.
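
      Below is a minimal usage sketch of the proposed API, assuming it is exposed on SparkContext under the name smallTextFiles() as described here; the exact signature, the example path, and the tokenizing step are illustrative assumptions, not part of this proposal.

          import org.apache.spark.{SparkConf, SparkContext}
          import org.apache.spark.rdd.RDD

          object SmallTextFilesExample {
            def main(args: Array[String]): Unit = {
              val sc = new SparkContext(new SparkConf().setAppName("SmallTextFilesExample"))

              // Hypothetical call to the proposed method: one (fileName, content)
              // pair per text file under the given directory.
              val docs: RDD[(String, String)] = sc.smallTextFiles("hdfs:///data/lda-corpus")

              // Example follow-up: tokenize each document, keeping the file name
              // as the document identifier for later LDA preprocessing.
              val tokenized: RDD[(String, Array[String])] =
                docs.map { case (name, content) => (name, content.toLowerCase.split("\\s+")) }

              tokenized.take(5).foreach { case (name, tokens) =>
                println(s"$name -> ${tokens.length} tokens")
              }

              sc.stop()
            }
          }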

      smallTextFiles() can be used for local disk I/O or HDFS I/O, just like textFile() in SparkContext. In the LDA scenario there are two common uses:

      1. smallTextFiles() is used to preprocess files on the local disk, i.e. to combine many small files into one large file and then transfer it to HDFS for further processing, such as LDA clustering (see the sketch after this list).

      2. It can also be used to transfer the raw directory of small files to HDFS directly (not recommended, because it consumes too many NameNode entries) and then cluster it with LDA.
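
      A minimal sketch of use 1, again assuming the proposed smallTextFiles() method; the record layout (one document per line, file name and content separated by a tab) and the paths are assumptions made only for illustration.

          import org.apache.spark.{SparkConf, SparkContext}

          object CombineSmallFiles {
            def main(args: Array[String]): Unit = {
              val sc = new SparkContext(new SparkConf().setAppName("CombineSmallFiles"))

              // Hypothetical call: read the small files from the local disk.
              val docs = sc.smallTextFiles("file:///data/lda-corpus")

              // Combine into one document per line ("<fileName>\t<content>"),
              // collapsing internal whitespace so each document stays on one line,
              // then write the merged result to HDFS for further processing (e.g. LDA).
              docs
                .map { case (name, content) => s"$name\t${content.replaceAll("\\s+", " ")}" }
                .coalesce(1)
                .saveAsTextFile("hdfs:///data/lda-combined")

              sc.stop()
            }
          }

      coalesce(1) yields a single output part file, matching the "combine into one large file" step; a real job might keep more partitions to preserve parallelism.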

          People

            Assignee: Xusen Yin (yinxusen)
            Reporter: Xusen Yin (xusen)
            Votes: 0
            Watchers: 1
