Description
(this is a mirror of MESOS-2985)
By default, spark.executor.memory is set to min(slave_ram_kb, master_ram_kb). When the master and the workers use the same instance type you will not notice, but when they use different ones (which makes sense, as the master cannot be a spot instance, and using a big machine for the master would be a waste of resources) the default amount of memory given to each executor is capped by the amount of RAM available on the master. For example, if you create a cluster with an m1.small master (1.7 GB RAM) and one m1.large worker (7.5 GB RAM), spark.executor.memory ends up set to 512 MB.
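A minimal sketch of how this kind of default can go wrong, assuming a deploy-template script that subtracts a fixed OS overhead and floors the result at 512 MB. The helper name pick_executor_memory_mb and the overhead values are illustrative, not the actual spark-ec2 code; only the min(slave_ram_kb, master_ram_kb) behaviour is taken from the report:

```python
def pick_executor_memory_mb(master_ram_mb, slave_ram_mb):
    # The problem: the default is derived from min(slave RAM, master RAM),
    # so a small master caps every executor on much larger workers.
    ram_mb = min(slave_ram_mb, master_ram_mb)
    # Reserve some memory for the OS and daemons (overhead values are assumed).
    if ram_mb > 20 * 1024:
        overhead_mb = 10 * 1024
    elif ram_mb > 10 * 1024:
        overhead_mb = 2 * 1024
    else:
        overhead_mb = 1300
    return max(ram_mb - overhead_mb, 512)

# m1.small master (1.7 GB) + m1.large worker (7.5 GB):
print(pick_executor_memory_mb(1700, 7500))   # -> 512, despite 7.5 GB on the worker
# Basing the default on the worker's RAM alone would give roughly 6 GB:
print(pick_executor_memory_mb(7500, 7500))   # -> 6200
```

Under these assumptions, dropping the master's RAM from the computation (or computing the default per worker) avoids the cap while still leaving headroom for the OS.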