HIVE-27414: HiveServer2 is not shut down properly in OOM situations when HoS is used


Details

    • Type: Bug
    • Status: Open
    • Priority: Minor
    • Resolution: Unresolved
    • Affects Version/s: 1.2.0, 2.0.0, 2.1.1, 2.2.0, 2.3.0, 3.1.0, 2.4.0, 3.0.0
    • Fix Version/s: None
    • Component/s: HiveServer2

    Description

      When an OOM happens, HiveServer2.stop() is called from the oomHook. That shuts down HS2; however, if the default execution engine is not "spark" but the Hive on Spark (HoS) execution engine has been used by any session before (for example via a session-level "set hive.execution.engine=spark;"), then the check in stop() evaluates to false and the SparkSessionManagerImpl is not shut down properly:

      if (hiveConf != null && hiveConf.getVar(ConfVars.HIVE_EXECUTION_ENGINE).equals("spark")) {
        try {
          SparkSessionManagerImpl.getInstance().shutdown();
        } catch(Exception ex) {
          LOG.error("Spark session pool manager failed to stop during HiveServer2 shutdown.", ex);
        }
      }
      

      This leaves behind a "user" (non-daemon) thread belonging to the SparkSessionManagerImpl, so the JVM does not exit, yet HS2 is unresponsive to user requests because the CLI service and Thrift service have already been stopped.
      Due to HIVE-24411 the OOM is not handled well, so even the -XX:OnOutOfMemoryError flag does not help and the JVM is not killed.

      The HiveServer2.stop() method should instead use a check like SparkSessionManagerImpl.isInited()
      (this method does not exist yet; the name is given only as an example).
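
      A minimal sketch of that direction, assuming a hypothetical "inited" flag and isInited() accessor are added to SparkSessionManagerImpl (these names are illustrative only, not existing API):

      // Hypothetical addition to SparkSessionManagerImpl: remember whether the
      // manager was ever initialized, independently of the server-level config.
      private static volatile boolean inited = false;

      public static boolean isInited() {
        return inited;
      }
      // setup()/getInstance() would set "inited = true" once the manager is created.

      // HiveServer2.stop() could then key the shutdown off the manager's state
      // instead of the server-level hive.execution.engine value:
      if (SparkSessionManagerImpl.isInited()) {
        try {
          SparkSessionManagerImpl.getInstance().shutdown();
        } catch (Exception ex) {
          LOG.error("Spark session pool manager failed to stop during HiveServer2 shutdown.", ex);
        }
      }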

      The master branch does not have SparkSessionManagerImpl anymore; this is applicable only to earlier versions.

          People

            Assignee: Unassigned
            Reporter: Miklos Szurap (mszurap)
            Votes: 0
            Watchers: 1
