Description
Currently, when creating a HiveConf in TableReader.scala, we do not pass along S3-specific configurations (such as AWS S3 credentials) or the spark.hadoop.* configurations set by the user. We should fix this issue so those settings reach the Hadoop configuration used by the table reader.
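A minimal sketch of the intended propagation, assuming the fix amounts to copying user-set spark.hadoop.* entries (with the prefix stripped) into the Hadoop-style configuration before it is handed to the reader. The object and method names here are illustrative, not Spark's actual API, and a plain mutable Map stands in for HiveConf:

```scala
import scala.collection.mutable

// Illustrative helper (hypothetical name): copy every spark.hadoop.* entry
// from the user's Spark configuration into a Hadoop-style key/value
// configuration, stripping the "spark.hadoop." prefix. S3 credentials such
// as fs.s3a.access.key are typically set this way and would otherwise be lost.
object HadoopConfPropagation {
  private val Prefix = "spark.hadoop."

  def appendSparkHadoopConfigs(
      sparkConf: Map[String, String],
      hadoopConf: mutable.Map[String, String]): Unit = {
    for ((key, value) <- sparkConf if key.startsWith(Prefix)) {
      // e.g. "spark.hadoop.fs.s3a.access.key" becomes "fs.s3a.access.key"
      hadoopConf(key.stripPrefix(Prefix)) = value
    }
  }
}
```

For example, a Spark entry `spark.hadoop.fs.s3a.secret.key` would land in the Hadoop configuration as `fs.s3a.secret.key`, while keys without the prefix are left untouched.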