- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.1.0, 2.4.3
- Fix Version/s: None
- Component/s: Structured Streaming
- Labels: None
When we use Spark Streaming to consume records from Kafka, the generated KafkaRDD has exactly as many partitions as the Kafka topic, so we cannot use more CPU cores to execute the streaming task unless we increase the topic's partition count, and that count cannot be increased indefinitely.

I propose splitting each Kafka partition into multiple KafkaRDD partitions, with a configurable split factor, so that more CPU cores can be used to execute the streaming task.
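A minimal sketch of the idea, assuming a configurable split factor. The `OffsetRange` case class below is a local stand-in for the one in Spark's Kafka integration, and `splitFactor` (e.g. a hypothetical `spark.streaming.kafka.splitFactor` setting) is an assumption for illustration:

{code:scala}
// Split each Kafka partition's offset range into several sub-ranges, so the
// resulting KafkaRDD has more partitions than the Kafka topic itself.
// `OffsetRange` is a local stand-in for the Kafka integration's class.
case class OffsetRange(topic: String, partition: Int,
                       fromOffset: Long, untilOffset: Long) {
  def count: Long = untilOffset - fromOffset
}

object SplitOffsetRanges {
  // Split each range into up to `splitFactor` contiguous sub-ranges;
  // `splitFactor` is a hypothetical configuration value.
  def split(ranges: Seq[OffsetRange], splitFactor: Int): Seq[OffsetRange] =
    ranges.flatMap { r =>
      val n = math.min(splitFactor.toLong, math.max(r.count, 1L)).toInt
      val step = math.max(r.count / n, 1L)
      (0 until n).map { i =>
        val from  = r.fromOffset + i * step
        // The last sub-range absorbs any remainder from integer division.
        val until = if (i == n - 1) r.untilOffset else from + step
        r.copy(fromOffset = from, untilOffset = until)
      }
    }

  def main(args: Array[String]): Unit = {
    val original = Seq(OffsetRange("events", 0, 0L, 1000L))
    // With splitFactor = 4, one Kafka partition yields 4 RDD partitions:
    // [0,250), [250,500), [500,750), [750,1000)
    split(original, 4).foreach(println)
  }
}
{code}

Unlike calling repartition() on the stream, splitting the offset ranges avoids a shuffle; the trade-off is that several tasks consume from the same Kafka partition concurrently.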