  Hadoop Common / HADOOP-15620: Über-jira: S3A phase VI: Hadoop 3.3 features / HADOOP-16360

S3A NullPointerException: null uri host. This can be caused by unencoded / in the password string


Details

    • Type: Sub-task
    • Status: Resolved
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: 3.0.3
    • Fix Version/s: None
    • Component/s: fs/s3
    • Labels: None
    • Environment: Non AWS store

    Description

      I am experiencing a very old issue that is now appearing again, on a Cloudera 6.2 cluster. I use the following libraries with a PySpark job:

      • /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hadoop/hadoop-common-3.0.0-cdh6.2.0.jar
      • /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hadoop/hadoop-aws-3.0.0-cdh6.2.0.jar

      While trying to write a DataFrame to S3 as CSV, I get the following error:

      java.lang.NullPointerException: null uri host. This can be caused by unencoded / in the password string
      	at java.util.Objects.requireNonNull(Objects.java:228)
      	at org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:69)
      	at org.apache.hadoop.fs.s3a.S3AFileSystem.setUri(S3AFileSystem.java:467)
      	at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:234)
      	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
      	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
      	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
      	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
      	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
      	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
      	at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:423)
      	at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:523)
      	at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:281)
      	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
      	at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:228)
      	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
      	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      	at java.lang.reflect.Method.invoke(Method.java:498)
      	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
      	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
      	at py4j.Gateway.invoke(Gateway.java:282)
      	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
      	at py4j.commands.CallCommand.execute(CallCommand.java:79)
      	at py4j.GatewayConnection.run(GatewayConnection.java:238)
      	at java.lang.Thread.run(Thread.java:748)
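
      For reference, the NPE comes from S3xLoginHelper.buildFSURI calling Objects.requireNonNull on the host of the parsed filesystem URI. One known way to end up with a null host is a bucket name that java.net.URI refuses to parse as a server-based authority, for example an all-digit, single-label name such as the "201906" bucket used below. A minimal sketch of that parsing through the PySpark JVM gateway (it assumes a live SparkSession; the file name is illustrative):

      from pyspark.sql import SparkSession

      spark = SparkSession.builder.getOrCreate()
      jvm = spark.sparkContext._jvm

      # java.net.URI only fills in getHost() when the authority parses as a
      # valid server-based authority; an all-digit, single-label bucket name
      # fails that parse, so the host comes back as null (None in Python).
      uri = jvm.java.net.URI("s3a://201906/11/MYTABLE_20190611_120000.csv")
      print(uri.getHost())       # None
      print(uri.getAuthority())  # '201906' (registry-based authority only)

      # A bucket name that is a valid hostname parses cleanly:
      ok = jvm.java.net.URI("s3a://my-bucket/11/MYTABLE_20190611_120000.csv")
      print(ok.getHost())        # 'my-bucket'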

      I specify the secret key via configuration, not via the path (as the older bug reports described). On top of that, my secret key doesn't contain any slash; the access key does contain a dash '-' character, and AWS_HOST_BASE I define in the 'http://host.domain.suffix/' form.
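
      (For completeness: the unencoded '/' named in the exception message refers to the deprecated pattern of embedding credentials in the URI itself, which my code does not use. A minimal sketch of that pattern, with a made-up secret, showing why such a secret must be percent-encoded:)

      from urllib.parse import quote

      secret = "abc/def+ghi"            # made-up secret containing a '/'
      encoded = quote(secret, safe="")  # 'abc%2Fdef%2Bghi'
      # Deprecated inline-credentials URI form; a raw '/' in the secret would
      # corrupt the URI authority, so it has to be percent-encoded:
      legacy_uri = "s3a://ACCESSKEY:" + encoded + "@bucket/path/file.csv"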

      My code doesn't put the secret key in the S3 path; it sets the credentials through the Hadoop configuration as follows:

      import time

      from pyspark.sql import SparkSession, SQLContext

      sparkSession = SparkSession.builder.getOrCreate()
      sparkContext = sparkSession.sparkContext
      # sparkContext._jsc.hadoopConfiguration().set("fs.s3a.multipart.size", "1000000")
      sparkContext._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ACCESS_KEY_ID)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_SECRET_ACCESS_KEY)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3a.endpoint", AWS_HOST_BASE)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3.access.key", AWS_ACCESS_KEY_ID)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3.secret.key", AWS_SECRET_ACCESS_KEY)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3.endpoint", AWS_HOST_BASE)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3n.access.key", AWS_ACCESS_KEY_ID)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3n.secret.key", AWS_SECRET_ACCESS_KEY)
      sparkContext._jsc.hadoopConfiguration().set("fs.s3n.endpoint", AWS_HOST_BASE)

      sqlContext = SQLContext(sparkSession.sparkContext)
      # pylint: disable=W0212
      logger = sparkContext._jvm.org.apache.log4j.LogManager.getLogger("OracleToS3")
      sparkContext.setLogLevel('INFO')
      logger.info("Going to process Oracle tables...")
      
      for table in Source.table_list:
          logger.info("Reading oracle table into dataframe")
          # read via the SparkSession (SparkContext has no .read)
          oracle_table = sparkSession.read \
              .format("jdbc") \
              .option("url", Source.jdbc_string) \
              .option("dbtable", table) \
              .option("user", Source.user) \
              .option("password", Source.password) \
              .option("driver", "oracle.jdbc.driver.OracleDriver") \
              .load()

          # Display schema
          logger.info("Display table schema")
          oracle_table.show()
          logger.info("Display table top 5")
          oracle_table.head(5)
          output_file = "s3a://201906/" + "11/" + table + "_" + time.strftime("%Y%m%d_%H%M%S") + ".csv"
          logger.info("Writing table into S3 to file: " + output_file)
          oracle_table \
              .repartition(1) \
              .write \
              .mode("overwrite") \
              .format("csv") \
              .option("header", "true") \
              .save(output_file)  # reuse the logged path (avoids a second timestamp)
      

            People

              Assignee: Unassigned
              Reporter: Ladislav Jech (archenroot)
              Votes: 0
              Watchers: 3
