SPARK-13143

EC2 cluster silently not destroyed for non-default regions

Details

    • Type: Bug
    • Status: Resolved
    • Priority: Minor
    • Resolution: Won't Fix
    • Affects Version/s: 1.5.0
    • Fix Version/s: None
    • Component/s: EC2

    Description

      If you start a cluster in a non-default region using the EC2 scripts and then try to destroy it, you only get the following output:

      Terminating master...
      Terminating slaves...

      after which the script exits with no further output.

      This leaves the instances still running without ever informing the user.

      The reason this happens is that the destroy action in spark_ec2.py calls get_existing_cluster with the die_on_error argument set to False, for reasons that aren't clear to me. When get_existing_cluster finds no instances for the cluster (as can happen when the connection is pointing at a different region than the one the cluster was launched in), it then returns empty instance lists instead of exiting with an error, so the termination loops have nothing to iterate over and the user is never told that nothing was actually terminated.
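
      For context, here is a minimal sketch of how I read the destroy path. This is a simplified reconstruction, not the exact spark_ec2.py code: the function signature, the security group names, and the example cluster name are approximations.

      import sys
      import boto.ec2

      def get_existing_cluster(conn, cluster_name, die_on_error=True):
          # Find the cluster's instances by their security group names.
          master_nodes, slave_nodes = [], []
          for res in conn.get_all_reservations():
              for inst in res.instances:
                  group_names = [g.name for g in inst.groups]
                  if cluster_name + "-master" in group_names:
                      master_nodes.append(inst)
                  elif cluster_name + "-slaves" in group_names:
                      slave_nodes.append(inst)
          # With die_on_error=True a missing master is fatal; with False the
          # caller just gets back whatever was found, possibly nothing.
          if not master_nodes and die_on_error:
              print("ERROR: Could not find a master for cluster " + cluster_name)
              sys.exit(1)
          return master_nodes, slave_nodes

      # Destroy action (simplified). If this connection is not pointing at the
      # region the cluster was launched in, the lookup finds nothing, and with
      # die_on_error=False that is not treated as an error.
      cluster_name = "my-cluster"                      # example name
      conn = boto.ec2.connect_to_region("us-east-1")   # default region
      master_nodes, slave_nodes = get_existing_cluster(
          conn, cluster_name, die_on_error=False)
      print("Terminating master...")
      for inst in master_nodes:    # empty list, so nothing is terminated
          inst.terminate()
      print("Terminating slaves...")
      for inst in slave_nodes:     # empty list, so nothing is terminated
          inst.terminate()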

      I'll submit a PR for this.

          People

            Assignee: Unassigned
            Reporter: Theodore Vasiloudis (tvas)
            Votes: 0
            Watchers: 1
