Inefficient batch size in n8n
Why processing items one by one dramatically slows down your workflows
What is this issue?
When the SplitInBatches node is configured with a batch size of 1, every downstream node runs once per item, so the fixed overhead of node execution, API calls, and database operations is paid for every single item, making workflows far slower than necessary.
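In workflow JSON, the pattern looks roughly like this. A minimal sketch: the node type and `batchSize` parameter follow n8n's Split In Batches node, but the surrounding fields are illustrative, not a complete node definition.

```javascript
// Illustrative fragment of a node in an n8n workflow's JSON.
// batchSize: 1 forces every downstream node to execute once per item.
const slowNode = {
  name: "Loop Over Items",
  type: "n8n-nodes-base.splitInBatches",
  parameters: {
    batchSize: 1, // the inefficient setting this article is about
  },
};

console.log(slowNode.parameters.batchSize); // 1
```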
Inefficient patterns:
• SplitInBatches with batch size of 1
• Individual INSERT statements instead of bulk insert
• Single HTTP request per record instead of batch endpoint
• One-by-one processing of thousands of items
Why is this dangerous?
Slow execution
Processing 1000 items individually can take 10-100x longer than batched processing.
API rate limits
Many APIs have rate limits that you'll quickly hit when making individual requests.
Resource exhaustion
Each iteration consumes memory and CPU, potentially overwhelming n8n.
Higher costs
More execution time means higher hosting costs and more API quota consumption.
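A back-of-the-envelope calculation shows where the 10-100x figure above comes from. The numbers here (100 ms of fixed overhead per request, 1000 items, batches of 50) are assumptions for illustration:

```javascript
// Assumed numbers for illustration only.
const items = 1000;
const overheadPerRequestMs = 100; // network + node-execution overhead per call

// Batch size 1: one request per item.
const oneByOneMs = items * overheadPerRequestMs; // 100,000 ms

// Batch size 50: the overhead is paid once per batch, not once per item.
const batchSize = 50;
const batches = Math.ceil(items / batchSize); // 20
const batchedMs = batches * overheadPerRequestMs; // 2,000 ms

console.log(oneByOneMs / batchedMs); // 50x faster on overhead alone
```

Per-item work is the same either way; the savings come entirely from paying the fixed per-request cost 20 times instead of 1000 times.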
How to fix it
1. Increase batch size
Set SplitInBatches to process 10-50 items at a time instead of 1.
2. Use bulk operations
Replace single INSERT/UPDATE statements with bulk operations that handle multiple records at once.
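One common shape for this is to build a single multi-row INSERT instead of looping. A sketch with made-up table and column names; the `$1, $2, …` placeholder style matches Postgres, so adjust it for your database:

```javascript
// Build one parameterized multi-row INSERT instead of N single-row ones.
function buildBulkInsert(table, columns, rows) {
  const placeholders = rows
    .map((_, r) =>
      `(${columns.map((_, c) => `$${r * columns.length + c + 1}`).join(", ")})`
    )
    .join(", ");
  const sql = `INSERT INTO ${table} (${columns.join(", ")}) VALUES ${placeholders}`;
  const values = rows.flat(); // parameter values in placeholder order
  return { sql, values };
}

const { sql, values } = buildBulkInsert(
  "users",
  ["name", "email"],
  [["Ada", "ada@example.com"], ["Lin", "lin@example.com"]]
);
console.log(sql);
// INSERT INTO users (name, email) VALUES ($1, $2), ($3, $4)
```

One round-trip to the database replaces N round-trips, which is where most of the time goes.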
3. Use batch API endpoints
Many APIs offer batch endpoints that accept multiple items in one request—use them.
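A sketch of the idea: split the items into chunks and send each chunk as one request. The `/contacts/batch` URL and the `records` payload shape are hypothetical; check your API's documentation for its real batch endpoint and maximum batch size.

```javascript
// Split items into fixed-size chunks.
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// One request per chunk instead of one request per item.
async function sendInBatches(items, batchSize = 50) {
  for (const batch of chunk(items, batchSize)) {
    // Hypothetical batch endpoint accepting an array of records.
    await fetch("https://api.example.com/contacts/batch", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ records: batch }),
    });
  }
}

console.log(chunk([1, 2, 3, 4, 5], 2)); // [[1, 2], [3, 4], [5]]
```

With 1000 items and a batch size of 50, this makes 20 requests instead of 1000, which also keeps you well under most rate limits.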
4. Process without batching
If all items are independent, consider removing SplitInBatches entirely and processing all at once.
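In an n8n Code node set to "Run Once for All Items", `$input.all()` hands you every item in a single execution, so a transformation can run over the whole array with no loop node at all. A standalone sketch of the same logic (the `transformAll` wrapper is made up so the example runs outside n8n; inside a Code node you would return the mapped array directly):

```javascript
// What a "Run Once for All Items" Code node does conceptually:
// receive all items as one array, return all results as one array.
function transformAll(items) {
  return items.map((item) => ({
    json: {
      ...item.json,
      fullName: `${item.json.first} ${item.json.last}`,
    },
  }));
}

const out = transformAll([
  { json: { first: "Ada", last: "Lovelace" } },
  { json: { first: "Alan", last: "Turing" } },
]);
console.log(out[0].json.fullName); // Ada Lovelace
```

One node execution handles the entire input, instead of one execution per item.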
Scan your workflow now
Upload your n8n workflow JSON and detect inefficient batch configurations.