Hive › HIVE-20271 Improve HoS query cancellation handling › HIVE-20273

Spark jobs aren't cancelled if getSparkJobInfo or getSparkStagesInfo is interrupted


Details

    • Type: Sub-task
    • Status: Patch Available
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: Spark
    • Labels: None

    Description

  HIVE-19053 and HIVE-19733 added handling of InterruptedException to RemoteSparkJobStatus#getSparkJobInfo and RemoteSparkJobStatus#getSparkStagesInfo. These methods now catch InterruptedException, wrap it in a HiveException, and throw that new HiveException instead.
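A rough sketch of that catch-and-wrap pattern (the class and helper names below are simplified stand-ins, not Hive's actual code; the real RemoteSparkJobStatus methods talk to the remote Spark driver and attach an error code, both elided here):

```java
public class WrapSketch {
    // Stand-in for org.apache.hadoop.hive.ql.metadata.HiveException
    static class HiveException extends Exception {
        HiveException(String msg, Throwable cause) { super(msg, cause); }
    }

    // Stand-in for RemoteSparkJobStatus#getSparkJobInfo
    static Object getSparkJobInfo() throws HiveException {
        try {
            return fetchJobInfoFromRemoteDriver();
        } catch (InterruptedException e) {
            // Post HIVE-19053/HIVE-19733: the interrupt is caught and wrapped,
            // not re-thrown as-is
            throw new HiveException("Failed to get Spark job info", e);
        }
    }

    // Hypothetical helper simulating the monitor thread being interrupted
    static Object fetchJobInfoFromRemoteDriver() throws InterruptedException {
        throw new InterruptedException("query cancelled");
    }
}
```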

      This new HiveException is then caught in RemoteSparkJobMonitor#startMonitor which then looks for exceptions that match the condition:

      if (e instanceof InterruptedException ||
                      (e instanceof HiveException && e.getCause() instanceof InterruptedException))
      

      If this condition is met (and here it is), the exception is wrapped in yet another HiveException and re-thrown. The final exception is therefore a HiveException that wraps a HiveException that wraps an InterruptedException.
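To make the nesting concrete, here is a minimal, self-contained simulation of the two wrapping steps (the HiveException class is a stand-in for Hive's real one):

```java
// Simulates the double wrapping described above and shows why the
// one-level-deep check from RemoteSparkJobMonitor#startMonitor stops matching.
public class DoubleWrapDemo {
    static class HiveException extends Exception {
        HiveException(String msg, Throwable cause) { super(msg, cause); }
    }

    // The condition quoted from RemoteSparkJobMonitor#startMonitor
    static boolean looksInterrupted(Exception e) {
        return e instanceof InterruptedException ||
                (e instanceof HiveException && e.getCause() instanceof InterruptedException);
    }

    public static void main(String[] args) {
        InterruptedException root = new InterruptedException("query cancelled");
        // First wrap: RemoteSparkJobStatus#getSparkJobInfo
        HiveException once = new HiveException("failed to get job info", root);
        // Second wrap: RemoteSparkJobMonitor#startMonitor
        HiveException twice = new HiveException("monitor failed", once);

        System.out.println(looksInterrupted(once));  // prints true
        System.out.println(looksInterrupted(twice)); // prints false: the direct
                                                     // cause is a HiveException
    }
}
```

The double-wrapped exception fails the check because `getCause()` only looks one level down the cause chain.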

      The double nesting of HiveException breaks the logic in SparkTask#setSparkException, so killJob is never triggered.

      This causes interrupted Hive queries to not kill their corresponding Spark jobs.
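One possible direction for a fix (a hypothetical sketch only; the attached patches may take a different approach) is to walk the entire cause chain instead of looking one level deep:

```java
// Hypothetical cause-chain check: returns true if any throwable in the
// chain is an InterruptedException, regardless of nesting depth.
public class CauseChainCheck {
    static boolean causedByInterrupt(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (cur instanceof InterruptedException) {
                return true;
            }
        }
        return false;
    }
}
```

Such a check would classify both the single-wrapped and double-wrapped exceptions as interrupts, so the cancellation path would still be taken.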

      Attachments

        1. HIVE-20273.2.patch
          17 kB
          Sahil Takiar
        2. HIVE-20273.1.patch
          17 kB
          Sahil Takiar


            People

              Assignee: Unassigned
              Reporter: Sahil Takiar (stakiar)
              Votes: 0
              Watchers: 1

              Dates

                Created:
                Updated: