IMPALA-9187

TestExecutorGroups.test_executor_group_shutdown is flaky


Details

    • Type: Bug
    • Status: Resolved
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: None
    • Fix Version/s: None
    • Component/s: None
    • Labels: ghx-label-13

    Description

      The following test is flaky:

      custom_cluster.test_executor_groups.TestExecutorGroups.test_executor_group_shutdown (from pytest)
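
      The assertion that fails (see the stacktrace below) checks the runtime profile of a
      query that admission control is expected to queue. A minimal sketch of the racy
      pattern, using hypothetical names modeled on Impala's Python test client
      (execute_async / get_runtime_profile); this is not the verbatim test code:

        # Submit a query that should be queued because another query is
        # already occupying the executor group.
        handle = client.execute_async("select sleep(3)")
        # Fetching the profile immediately races with admission control: the
        # snapshot may show "Admission result: Queued" before the "Initial
        # admission queue reason" line has been written into the profile.
        profile = client.get_runtime_profile(handle)
        assert "Initial admission queue reason: number of running queries" in profile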

      Error Message

      AssertionError: Query (id=6c4bb1c6f501bae4:ee49118300000000):
        DEBUG MODE WARNING: Query profile created while running a DEBUG build of Impala. Use RELEASE builds to measure query performance.
        Summary:
          Session ID: 104c00e26afad563:fad6988e52bf9cba
          Session Type: BEESWAX
          Start Time: 2019-11-22 00:19:26.497324000
          End Time:
          Query Type: QUERY
          Query State: COMPILED
          Query Status: OK
          Impala Version: impalad version 3.4.0-SNAPSHOT DEBUG (build 2bdca39a8b178b7186dd24141a8e97fa0c46358f)
          User: jenkins
          Connected User: jenkins
          Delegated User:
          Network Address: 127.0.0.1:59977
          Default Db: default
          Sql Statement: select sleep(3)
          Coordinator: []:22000
          Query Options (set by configuration): TIMEZONE=America/Los_Angeles,CLIENT_IDENTIFIER=custom_cluster/test_executor_groups.py::TestExecutorGroups::()::test_executor_group_shutdown
          Query Options (set by configuration and planner): NUM_NODES=1,NUM_SCANNER_THREADS=1,RUNTIME_FILTER_MODE=0,MT_DOP=0,TIMEZONE=America/Los_Angeles,CLIENT_IDENTIFIER=custom_cluster/test_executor_groups.py::TestExecutorGroups::()::test_executor_group_shutdown
          Plan:
            ----------------
            Max Per-Host Resource Reservation: Memory=0B Threads=1
            Per-Host Resource Estimates: Memory=10MB
            Dedicated Coordinator Resource Estimate: Memory=100MB
            Codegen disabled by planner
            Analyzed query: SELECT sleep(CAST(3 AS INT))

            F00:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
            |  Per-Host Resources: mem-estimate=0B mem-reservation=0B thread-reservation=1
            PLAN-ROOT SINK
            |  output exprs: sleep(3)
            |  mem-estimate=0B mem-reservation=0B thread-reservation=0
            |
            00:UNION
               constant-operands=1
               mem-estimate=0B mem-reservation=0B thread-reservation=0
               tuple-ids=0 row-size=1B cardinality=1
               in pipelines: <none>
            ----------------
          Estimated Per-Host Mem: 10485760
          Request Pool: default-pool
          Per Host Min Memory Reservation: []:22000(0)
          Per Host Number of Fragment Instances: []:22000(1)
          Admission result: Queued
          Query Compilation: 5.077ms
            - Metadata of all 0 tables cached: 679.990us (679.990us)
            - Analysis finished: 1.269ms (589.508us)
            - Authorization finished (noop): 1.350ms (81.387us)
            - Value transfer graph computed: 1.681ms (330.356us)
            - Single node plan created: 1.801ms (120.709us)
            - Distributed plan created: 1.880ms (78.868us)
            - Planning finished: 5.077ms (3.196ms)
          Query Timeline: 11.000ms
            - Query submitted: 0.000ns (0.000ns)
            - Planning finished: 7.000ms (7.000ms)
            - Submit for admission: 9.000ms (2.000ms)
            - Queued: 11.000ms (2.000ms)
          - ComputeScanRangeAssignmentTimer: 0.000ns
        Frontend:
        ImpalaServer:
          - ClientFetchWaitTimer: 0.000ns
          - NumRowsFetched: 0 (0)
          - NumRowsFetchedFromCache: 0 (0)
          - RowMaterializationRate: 0
          - RowMaterializationTimer: 0.000ns

      assert 'Initial admission queue reason: number of running queries' in 'Query (id=6c4bb1c6f501bae4:ee49118300000000):\n DEBUG MODE WARNING: Query profile created while running a DEBUG buil...0)\n - NumRowsFetchedFromCache: 0 (0)\n - RowMaterializationRate: 0\n - RowMaterializationTimer: 0.000ns\n'
      

      Stacktrace

      custom_cluster/test_executor_groups.py:185: in test_executor_group_shutdown
          assert "Initial admission queue reason: number of running queries" in profile, profile
      E   AssertionError: Query (id=6c4bb1c6f501bae4:ee49118300000000):
      E     DEBUG MODE WARNING: Query profile created while running a DEBUG build of Impala. Use RELEASE builds to measure query performance.
      E     Summary:
      E       Session ID: 104c00e26afad563:fad6988e52bf9cba
      E       Session Type: BEESWAX
      E       Start Time: 2019-11-22 00:19:26.497324000
      E       End Time:
      E       Query Type: QUERY
      E       Query State: COMPILED
      E       Query Status: OK
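
      The snapshot in the stacktrace shows Query State: COMPILED, and the full profile
      above shows "Admission result: Queued" without an "Initial admission queue reason"
      line, so the test fetched the profile before admission control finished annotating
      it. A hedged sketch of a more robust check that polls the profile instead of
      asserting on a single snapshot (assert_eventually_in_profile is a hypothetical
      helper, not the fix that was actually committed):

        import time

        def assert_eventually_in_profile(get_profile, expected, timeout_s=60,
                                         interval_s=0.5):
            # Poll the profile text until 'expected' appears or the timeout
            # elapses; get_profile is any zero-argument callable returning
            # the current runtime profile as a string.
            deadline = time.time() + timeout_s
            profile = ""
            while time.time() < deadline:
                profile = get_profile()
                if expected in profile:
                    return
                time.sleep(interval_s)
            raise AssertionError("%r not in profile after %ss:\n%s"
                                 % (expected, timeout_s, profile))

      Used in place of the bare assert, e.g.
      assert_eventually_in_profile(lambda: client.get_runtime_profile(handle),
      "Initial admission queue reason: number of running queries").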
      


            People

              Assignee: Lars Volker (lv)
              Reporter: Sahil Takiar (stakiar)
              Votes: 0
              Watchers: 1
