Details
Type: Improvement
Status: Open
Priority: P3
Resolution: Unresolved
Affects Version/s: 2.11.0
Fix Version/s: None
Component/s: None
Description
I have a streaming job inserting records into an Elasticsearch cluster. I set the batch size to a suitably large value, but it turns out to have no effect at all: all elements are inserted in batches of 1 or 2 elements.
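For context, a minimal sketch of the kind of write configuration involved (host, index, and batch size are placeholders; `Create.of` stands in for the real unbounded source):

```java
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

Pipeline p = Pipeline.create();
// Stand-in for the real unbounded source of JSON documents.
PCollection<String> docs = p.apply(Create.of("{\"id\":1}", "{\"id\":2}"));
docs.apply("WriteToES",
    ElasticsearchIO.write()
        .withConnectionConfiguration(
            ElasticsearchIO.ConnectionConfiguration.create(
                new String[] {"http://es-host:9200"}, "my-index", "_doc"))
        // Intended bulk size; in streaming it is never reached because
        // the batch is flushed at the end of each (tiny) bundle.
        .withMaxBatchSize(1000L));
```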
The reason seems to be that this is a streaming pipeline, where the runner may produce tiny bundles. Since ElasticsearchIO flushes its batch in `@FinishBundle`, the resulting batches are equally small.
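To illustrate the mechanism, a simplified sketch of that buffering pattern (illustrative code, not the actual ElasticsearchIO source):

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.beam.sdk.transforms.DoFn;

// Simplified sketch: elements are buffered per bundle, and the buffer
// is flushed whenever the runner finishes the bundle, however small.
static class BufferingWriteFn extends DoFn<String, Void> {
  private static final int MAX_BATCH_SIZE = 1000;
  private transient List<String> batch;

  @StartBundle
  public void startBundle() {
    batch = new ArrayList<>();
  }

  @ProcessElement
  public void processElement(@Element String doc) {
    batch.add(doc);
    if (batch.size() >= MAX_BATCH_SIZE) {
      flush(); // bulk request once the configured size is reached
    }
  }

  @FinishBundle
  public void finishBundle() {
    // In streaming, bundles often hold only 1-2 elements, so this
    // flush produces correspondingly tiny bulk requests.
    flush();
  }

  private void flush() {
    // Issue one Elasticsearch bulk request for `batch`, then reset it.
    batch = new ArrayList<>();
  }
}
```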
This results in a huge number of bulk requests containing just one element each, grinding the Elasticsearch cluster to a halt.
I have been able to work around this by applying a `GroupIntoBatches` operation before the insert, but this requires three steps (mapping to a key, applying `GroupIntoBatches`, then stripping the key and outputting the collected elements), which makes the process quite awkward.
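For the record, the workaround looks roughly like this (continuing from the `docs` collection above; the key function and batch size are arbitrary placeholders):

```java
import org.apache.beam.sdk.io.elasticsearch.ElasticsearchIO;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupIntoBatches;
import org.apache.beam.sdk.transforms.Values;
import org.apache.beam.sdk.transforms.WithKeys;
import org.apache.beam.sdk.values.TypeDescriptors;

ElasticsearchIO.ConnectionConfiguration conn =
    ElasticsearchIO.ConnectionConfiguration.create(
        new String[] {"http://es-host:9200"}, "my-index", "_doc");

docs
    // 1. Map to an artificial key so GroupIntoBatches has something to group on.
    .apply(WithKeys.of((String doc) -> Math.floorMod(doc.hashCode(), 10))
        .withKeyType(TypeDescriptors.integers()))
    // 2. Collect elements into batches of the desired size.
    .apply(GroupIntoBatches.ofSize(1000L))
    // 3. Strip the key and re-emit the collected elements, so a downstream
    //    bundle contains at least one full batch.
    .apply(Values.create())
    .apply(Flatten.iterables())
    .apply(ElasticsearchIO.write().withConnectionConfiguration(conn));
```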
A much better approach would be to internalize this into the ElasticsearchIO write transform: use a timer that flushes the batch when it reaches the configured batch size or at the end of the window, rather than at the end of each bundle.
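As a rough sketch of what that could look like, assuming a stateful DoFn with a bag of buffered elements and an end-of-window event-time timer (illustrative names, not actual ElasticsearchIO code; stateful DoFns require keyed input, so the transform would need to assign keys internally, much as GroupIntoBatches does):

```java
import org.apache.beam.sdk.state.BagState;
import org.apache.beam.sdk.state.StateSpec;
import org.apache.beam.sdk.state.StateSpecs;
import org.apache.beam.sdk.state.TimeDomain;
import org.apache.beam.sdk.state.Timer;
import org.apache.beam.sdk.state.TimerSpec;
import org.apache.beam.sdk.state.TimerSpecs;
import org.apache.beam.sdk.state.ValueState;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.values.KV;

// Illustrative only: buffer elements in state and flush at batch size
// or when the end-of-window timer fires, never in @FinishBundle.
static class FlushOnSizeOrWindowFn extends DoFn<KV<Integer, String>, Void> {
  private static final int MAX_BATCH_SIZE = 1000;

  @StateId("batch")
  private final StateSpec<BagState<String>> batchSpec = StateSpecs.bag();

  @StateId("count")
  private final StateSpec<ValueState<Integer>> countSpec = StateSpecs.value();

  @TimerId("endOfWindow")
  private final TimerSpec endOfWindowSpec = TimerSpecs.timer(TimeDomain.EVENT_TIME);

  @ProcessElement
  public void process(
      @Element KV<Integer, String> element,
      @StateId("batch") BagState<String> batch,
      @StateId("count") ValueState<Integer> count,
      @TimerId("endOfWindow") Timer endOfWindow,
      BoundedWindow window) {
    // Ensure any leftovers are flushed when the window closes.
    endOfWindow.set(window.maxTimestamp());
    batch.add(element.getValue());
    Integer current = count.read();
    int n = (current == null ? 0 : current) + 1;
    if (n >= MAX_BATCH_SIZE) {
      flush(batch);
      count.write(0);
    } else {
      count.write(n);
    }
  }

  @OnTimer("endOfWindow")
  public void onEndOfWindow(
      @StateId("batch") BagState<String> batch,
      @StateId("count") ValueState<Integer> count) {
    flush(batch); // flush whatever remains, however small
    count.write(0);
  }

  private void flush(BagState<String> batch) {
    // Issue a single Elasticsearch bulk request for batch.read(), then clear.
    batch.clear();
  }
}
```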