Spark / SPARK-36936

spark-hadoop-cloud broken on release and only published via 3rd party repositories


Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1.1, 3.1.2
    • Fix Version/s: None
    • Component/s: Input/Output
    • Labels: None

    Description

      The Spark documentation at https://spark.apache.org/docs/latest/cloud-integration.html suggests using `spark-hadoop-cloud` to read and write from S3. However, per https://mvnrepository.com/artifact/org.apache.spark/spark-hadoop-cloud, artifacts are currently published only via 3rd party resolvers, including Cloudera and Palantir.
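For reference, the cloud-integration page documents a dependency along these lines (a sketch; the version must match the Spark release, and as reported above no such artifact resolves from Maven Central for 3.1.x):

```xml
<!-- Coordinates as suggested by the Spark cloud-integration docs;
     version shown is illustrative for the 3.1.2 release. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-hadoop-cloud_2.12</artifactId>
  <version>3.1.2</version>
</dependency>
```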

       

      Thus the Apache Spark documentation is effectively pointing users at a 3rd party solution for object stores, including S3. Furthermore, if you follow the instructions and include one of the 3rd party jars, e.g. the Cloudera jar, with the Spark 3.1.2 release and try to access an object store, the following exception is thrown.
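Pulling in one of those 3rd party builds requires adding a non-Apache resolver, e.g. for Cloudera's artifacts (a sketch; the repository URL is Cloudera's public repo, and the artifact versions there follow Cloudera's own numbering, not Apache's):

```xml
<!-- Non-Apache resolver needed to obtain the Cloudera-published
     spark-hadoop-cloud artifacts referenced above. -->
<repository>
  <id>cloudera</id>
  <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
</repository>
```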

       

      ```

      Exception in thread "main" java.lang.NoSuchMethodError: 'void com.google.common.base.Preconditions.checkArgument(boolean, java.lang.String, java.lang.Object, java.lang.Object)'
      at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(S3AUtils.java:894)
      at org.apache.hadoop.fs.s3a.S3AUtils.lookupPassword(S3AUtils.java:870)
      at org.apache.hadoop.fs.s3a.S3AUtils.getEncryptionAlgorithm(S3AUtils.java:1605)
      at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:363)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
      at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
      at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:377)
      at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:325)
      at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:307)
      at scala.Option.getOrElse(Option.scala:189)
      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:307)
      at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:519)
      at org.apache.spark.sql.DataFrameReader.json(DataFrameReader.scala:428)

      ```

      It looks like there are classpath conflicts when using the Cloudera-published `spark-hadoop-cloud` with Spark 3.1.2, again contradicting the documentation.
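The `NoSuchMethodError` above typically indicates a Guava version conflict: the `checkArgument(boolean, String, Object, Object)` overload that Hadoop's `S3AUtils.lookupPassword` calls was added in Guava 20.0, while Spark 3.1.x bundles a much older Guava (14.0.1). A minimal probe (a hypothetical diagnostic, not part of either project) can be run on the same classpath to confirm which situation you are in:

```java
// Hypothetical diagnostic: probe the classpath for the Guava overload that
// Hadoop's S3A code calls. "missing" means either no Guava is present or the
// loaded Guava predates 20.0 (e.g. the 14.0.1 that Spark 3.1.x bundles),
// which is exactly the condition that produces the NoSuchMethodError above.
public class GuavaProbe {
    static String checkGuavaOverload() {
        try {
            Class<?> preconditions =
                Class.forName("com.google.common.base.Preconditions");
            preconditions.getMethod("checkArgument",
                boolean.class, String.class, Object.class, Object.class);
            return "present";
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return "missing";
        }
    }

    public static void main(String[] args) {
        System.out.println("checkArgument(boolean,String,Object,Object): "
            + checkGuavaOverload());
    }
}
```

Running this with the Spark 3.1.2 jars directory on the classpath should print "missing", matching the stack trace.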

      Thus the documented `spark-hadoop-cloud` approach to using object stores is supported only by 3rd party repositories, and not by the Apache Spark release whose documentation refers to it.

      Perhaps one day Apache Spark will publish tested artifacts so that developers can quickly and easily access cloud object stores by following the documentation.


People

    Assignee: Unassigned
    Reporter: Colin Williams (colin.williams)
    Votes: 0
    Watchers: 4
