Apache Hudi / HUDI-2249

[SQL] Changing index type fails


Details

    • Type: Task
    • Status: Resolved
    • Priority: Blocker
    • Resolution: Invalid

    Description

      I tried to set a different index type (hoodie.index.type = SIMPLE) before a CTAS write, and the write failed:

       

```
set hoodie.index.type = SIMPLE

spark-sql> create table hudi_17Gb_ext1 using hudi location 's3a://siva-test-bucket-june-16/hudi_testing/gh_arch_dump/hudi_5/' options (
         >   type = 'cow',
         >   primaryKey = 'randomId',
         >   preCombineField = 'date_col'
         >  )
         > partitioned by (type) as select * from gh_17Gb_date_col;
21/07/29 04:24:23 ERROR SparkSQLDriver: Failed in [create table hudi_17Gb_ext1 using hudi location 's3a://siva-test-bucket-june-16/hudi_testing/gh_arch_dump/hudi_5/' options (
  type = 'cow',
  primaryKey = 'randomId',
  preCombineField = 'date_col'
 )
partitioned by (type) as select * from gh_17Gb_date_col]
java.lang.IllegalArgumentException: No enum constant org.apache.hudi.index.HoodieIndex.IndexType.SIMPLE
    at java.lang.Enum.valueOf(Enum.java:238)
    at org.apache.hudi.index.HoodieIndex$IndexType.valueOf(HoodieIndex.java:106)
    at org.apache.hudi.config.HoodieIndexConfig$Builder.build(HoodieIndexConfig.java:333)
    at org.apache.hudi.config.HoodieWriteConfig$Builder.setDefaults(HoodieWriteConfig.java:1608)
    at org.apache.hudi.config.HoodieWriteConfig$Builder.build(HoodieWriteConfig.java:1650)
    at org.apache.hudi.DataSourceUtils.createHoodieConfig(DataSourceUtils.java:196)
    at org.apache.hudi.DataSourceUtils.createHoodieClient(DataSourceUtils.java:201)
    at org.apache.hudi.HoodieSparkSqlWriter$.$anonfun$write$5(HoodieSparkSqlWriter.scala:183)
    at scala.Option.getOrElse(Option.scala:189)
    at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:182)
    at org.apache.spark.sql.hudi.command.InsertIntoHoodieTableCommand$.run(InsertIntoHoodieTableCommand.scala:97)
    at org.apache.spark.sql.hudi.command.CreateHoodieTableAsSelectCommand.run(CreateHoodieTableAsSelectCommand.scala:86)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:108)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:106)
    at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:120)
    at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:229)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3618)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3616)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:229)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:100)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:607)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:602)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:63)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:377)
    at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:496)
```
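      The ticket was resolved as Invalid, so the likely issue is the configured value rather than a missing feature: per the trace, the string from hoodie.index.type is resolved through Java's Enum.valueOf inside HoodieIndex$IndexType.valueOf (HoodieIndex.java:106), which requires an exact, case-sensitive match against a constant name. A minimal sketch of that failure mode, assuming a hypothetical mis-configured value with a stray trailing space (the class name and value below are illustrative, not taken from the ticket):

```java
import org.apache.hudi.index.HoodieIndex;

public class IndexTypeRepro {
    public static void main(String[] args) {
        // Hypothetical mis-configured value: "SIMPLE" plus a trailing space.
        // Enum.valueOf does no trimming or case folding, so any string that
        // is not an exact constant name fails with the same
        // "No enum constant org.apache.hudi.index.HoodieIndex.IndexType..."
        // message seen in the stack trace above.
        String configured = "SIMPLE ";
        try {
            HoodieIndex.IndexType type = HoodieIndex.IndexType.valueOf(configured);
            System.out.println("Resolved index type: " + type);
        } catch (IllegalArgumentException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```

      If that is what happened here, re-running with an exactly matching value (no stray whitespace, correct casing) on a Hudi build whose IndexType enum includes SIMPLE should let HoodieIndexConfig$Builder.build proceed, which would be consistent with the Invalid resolution.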

       


People

    Assignee: sivabalan narayanan
    Reporter: sivabalan narayanan
