HBase / HBASE-23999

[flakey test] TestTableOutputFormatConnectionExhaust


Details

    • Type: Test
    • Status: Resolved
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 2.3.0
    • Fix Version/s: 3.0.0-alpha-1, 2.3.0, 2.2.5
    • Component/s: test
    • Labels: None
    • Hadoop Flags: Reviewed

    Description

      Hit this during the master startup sequence in the test.

      2020-03-16 23:40:37,298 ERROR [StoreOpener-1588230740-1] conf.Configuration(2980): error parsing conf hbase-site.xml
      com.ctc.wstx.exc.WstxEOFException: Unexpected EOF in prolog
       at [row,col,system-id]: [1,0,"file:/home/vagrant/repos/hbase/hbase-mapreduce/target/test-classes/hbase-site.xml"]
              at com.ctc.wstx.sr.StreamScanner.throwUnexpectedEOF(StreamScanner.java:687)
              at com.ctc.wstx.sr.BasicStreamReader.handleEOF(BasicStreamReader.java:2220)
              at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:2126)
              at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1181)
              at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3277)
              at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3071)
              at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2964)
              at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2930)
              at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2805)
              at org.apache.hadoop.conf.Configuration.get(Configuration.java:1199)
              at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1253)
              at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1659)
              at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:70)
              at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:84)
              at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:98)
              at org.apache.hadoop.hbase.io.crypto.Context.<init>(Context.java:44)
              at org.apache.hadoop.hbase.io.crypto.Encryption$Context.<init>(Encryption.java:64)
              at org.apache.hadoop.hbase.io.crypto.Encryption$Context.<clinit>(Encryption.java:61)
              at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:228)
              at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5890)
              at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1096)
              at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1093)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
              at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
              at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
              at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
              at java.base/java.lang.Thread.run(Thread.java:834)
      2020-03-16 23:40:37,301 ERROR [master/bionic:0:becomeActiveMaster] regionserver.HRegion(1137): Could not initialize all stores for the region=hbase:meta,,1.1588230740
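
      To make the failure mode concrete, here is a minimal sketch (not part of this ticket; the class name and the probed key are illustrative) showing that Hadoop's Configuration fails the same way for any empty or truncated XML resource, which is effectively what a half-written hbase-site.xml on the test classpath is:

      // Minimal sketch, assuming an empty/truncated site file; the class name
      // and probed key are illustrative, not taken from the ticket.
      import org.apache.hadoop.conf.Configuration;

      import java.io.File;
      import java.io.IOException;

      public class TruncatedSiteFileRepro {
        public static void main(String[] args) throws IOException {
          // Zero-byte stand-in for a site file caught mid-write.
          File truncated = File.createTempFile("hbase-site", ".xml");
          truncated.deleteOnExit();

          Configuration conf = new Configuration(false);
          conf.addResource(truncated.toURI().toURL());

          // Resources are parsed lazily; the first read fails with the same
          // WstxEOFException ("Unexpected EOF in prolog") wrapped in a
          // RuntimeException, which is what HBaseConfiguration.create() hits
          // when it loads hbase-site.xml from the classpath.
          conf.get("hbase.defaults.for.version.skip");
        }
      }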
      

      Looking at the file under target/test-classes, it appears to have been written by YARN.

      <?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
      <property><name>yarn.log-aggregation.file-formats</name><value>TFile</value><final>false</final><source>yarn-default.xml</source></property>
      <property><name>hbase.master.mob.ttl.cleaner.period</name><value>86400</value><final>false</final><source>hbase-default.xml</source></property>
      <property><name>dfs.namenode.resource.check.interval</name><value>5000</value><final>false</final><source>hdfs-default.xml</source></property>
      <property><name>mapreduce.jobhistory.client.thread-count</name><value>10</value><final>false</final><source>mapred-default.xml</source></property>
      ...
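
      For reference, a dump with this exact shape (single-line properties with <final> and <source> elements, plus the standalone="no" prolog) is what org.apache.hadoop.conf.Configuration#writeXml emits, e.g. when the MR framework writes out a job's merged configuration. A small sketch, with an illustrative output file name:

      // Hedged sketch: it reproduces the dump format only; it does not claim
      // to reproduce whichever process actually overwrote hbase-site.xml.
      import org.apache.hadoop.conf.Configuration;

      import java.io.FileOutputStream;
      import java.io.OutputStream;

      public class DumpMergedConf {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration(); // loads the default resources
          try (OutputStream out = new FileOutputStream("merged-conf.xml")) {
            conf.writeXml(out); // same single-line-per-property format as above
          }
        }
      }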
      

      My guess is that something in the MR framework is left unconfigured: it writes these temporary job files to some default location (like the first classpath entry?), and parallel test runs stomp on each other.
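
      If that guess is right, one mitigation would be to point every MR temp/staging directory at a per-JVM location during test setup so parallel forks can never fall back to a shared default. A hedged sketch (an assumption, not the change that actually landed for this issue; the directory names are illustrative):

      // Sketch only: isolate the directories the MR framework writes job files
      // into, so concurrent test JVMs cannot stomp on a shared location.
      import org.apache.hadoop.conf.Configuration;

      import java.io.IOException;
      import java.nio.file.Files;
      import java.nio.file.Path;

      public final class IsolatedMrDirs {
        private IsolatedMrDirs() {}

        public static Configuration isolate(Configuration conf) throws IOException {
          Path base = Files.createTempDirectory("mr-test-");
          conf.set("hadoop.tmp.dir", base.resolve("tmp").toString());
          conf.set("mapreduce.cluster.local.dir", base.resolve("local").toString());
          // Staging dir used when jobs run against a YARN mini-cluster.
          conf.set("yarn.app.mapreduce.am.staging-dir", base.resolve("staging").toString());
          // Staging root used by LocalJobRunner-based tests.
          conf.set("mapreduce.jobtracker.staging.root.dir", base.resolve("staging-local").toString());
          return conf;
        }
      }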


            People

              Assignee: huaxiangsun Huaxiang Sun
              Reporter: ndimiduk Nick Dimiduk
              Votes: 0
              Watchers: 4
