PHOENIX-3196: Array Index Out Of Bounds Exception


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Critical
    • Resolution: Duplicate
    • Affects Version/s: 4.7.0
    • Fix Version/s: 4.7.0
    • Component/s: None
    • Labels: None
    • Environment: Amazon EMR - 4.7.2

    Description

      Data set size: a table with 156 million rows and 200 columns.

      This issue appears to have been resolved in Phoenix 3.0, but it is still recurring.

      Phoenix throws the following exception:

      Error: org.apache.hadoop.hbase.DoNotRetryIOException: EPOEVENT: 18
      at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:87)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:484)
      at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:11705)
      at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7764)
      at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1988)
      at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1970)
      at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33652)
      at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2180)
      at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
      at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
      at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
      at java.lang.Thread.run(Thread.java:745)
      Caused by: java.lang.ArrayIndexOutOfBoundsException: 18
      at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:403)
      at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:315)
      at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:303)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:883)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java:565)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:860)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:501)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2481)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2426)
      at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:451)
      ... 10 more

      To reproduce:
      1) Create a table.
      2) Create an index whose DDL lists the same column name multiple times. Phoenix rejects it with an error stating that the column name is used multiple times.
      3) Correct the DDL and run the index creation again.
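      The steps above can be sketched in Phoenix SQL. The table, column, and index names below are hypothetical illustrations, not taken from the report:

      ```sql
      -- 1) Create a table; one candidate index column is BIGINT, as noted below.
      CREATE TABLE IF NOT EXISTS EVENTS (
          ID BIGINT NOT NULL PRIMARY KEY,
          EVENT_TS BIGINT,
          PAYLOAD VARCHAR
      );

      -- 2) Faulty DDL: EVENT_TS is listed twice. Phoenix rejects this with an
      --    error saying the column name is used multiple times (step 2).
      CREATE INDEX EVENTS_IDX ON EVENTS (EVENT_TS, EVENT_TS);

      -- 3) Corrected DDL, re-run as in step 3. The reporter began seeing the
      --    ArrayIndexOutOfBoundsException on every query after this sequence.
      CREATE INDEX EVENTS_IDX ON EVENTS (EVENT_TS) INCLUDE (PAYLOAD);
      ```

      Running this requires a live Phoenix/HBase cluster, so it is a sketch of the reported sequence rather than a verified reproduction.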

      Please note that one of the columns on which the index was being created is a BIGINT.

      It is unclear whether running the faulty index-creation DDL is the root cause of this exception, but it started appearing after performing the steps above.

      Effects:
      1) The table can no longer be read or written; all queries throw the same exception as above.

People

    Assignee: Unassigned
    Reporter: nvommi Nithin
    Votes: 0
    Watchers: 1