Details
- Type: Bug
- Status: Resolved
- Priority: P1
- Resolution: Duplicate
Description
To reproduce:
- Start up a local Kafka cluster: https://kafka.apache.org/quickstart
- Set up topics:
bin/kafka-console-consumer.sh --topic mytopic1 --from-beginning --bootstrap-server localhost:9092
bin/kafka-console-consumer.sh --topic mytopic2 --from-beginning --bootstrap-server localhost:9092
- Set up a Beam virtualenv and run a pipeline that reads from Kafka. For example: https://wtools.io/paste-code/b4je
> python ./pipeline.py --bootstrap_servers=localhost:9092 --in_topic=mytopic1 --out_topic=mytopic2 --runner=FlinkRunner --streaming
- Publish messages as key/value pairs:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mytopic1 --property "parse.key=true" --property "key.separator=:"
>a:b
>c:d
>e:f
- Messages do not get pushed to subsequent steps.
- The following seem to work fine:
* X-lang Bounded read with Flink
* X-lang Kafka sink with Flink
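For reference, the console producer in the steps above (with parse.key=true and key.separator=:) splits each input line at the first ":" into a key/value record. A minimal pure-Python sketch of the records the pipeline should receive (the helper name is hypothetical, not part of the pipeline at the paste link):

```python
def to_kv(line):
    """Split a console-producer input line into (key, value) bytes,
    mimicking parse.key=true with key.separator=':'."""
    key, _, value = line.partition(":")
    return key.encode("utf-8"), value.encode("utf-8")

# The three lines published above become these Kafka records:
records = [to_kv(line) for line in ("a:b", "c:d", "e:f")]
# records == [(b"a", b"b"), (b"c", b"d"), (b"e", b"f")]
```

These are the records that never reach the steps after ReadFromKafka in the failing case.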
boyuanz, could you take a look to rule out any SDF/unbounded-read-related issues?
cc: mxm and angoenka for Flink issues.
Beam user thread: https://lists.apache.org/x/thread.html/r9c74a8a7efa4b14f2f1d5f77ce8f12128cf1071861c1627f00702415@%3Cuser.beam.apache.org%3E
Issue Links
- duplicates
  - BEAM-11993 ReadFromKafka doesn’t send data to the next PTransform – Apache Flink "Cluster" – Apache Beam Python SDK (Resolved)
- is duplicated by
  - BEAM-11998 Portable runners should be able to issue checkpoints to Splittable DoFn (Open)