Data bloat risk in n8n
Why saving all execution data can quickly fill your database and crash n8n
What is this issue?
When n8n is configured to save all execution data (success and error), the database grows continuously. High-frequency workflows can generate gigabytes of data per day, eventually causing slowdowns and crashes.
Warning signs:
• n8n UI becoming slow to load execution history
• Database storage increasing rapidly
• Backup times getting longer
• n8n crashing during high-volume periods
Why is this dangerous?
Database exhaustion
The database can run out of space, causing n8n to crash and stop processing workflows.
Performance degradation
Large tables slow down all database operations, making n8n increasingly sluggish.
Increased costs
Cloud database storage is expensive. Unnecessary data increases your hosting bills.
Backup complications
Large databases take longer to back up and restore, lengthening your recovery time (RTO) and widening the window of potential data loss (RPO).
How to fix it
1. Disable saving for production workflows
For high-frequency production workflows, disable 'Save successful executions' to avoid data accumulation.
2. Configure data pruning
Set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE (a threshold in hours) to automatically delete executions older than that age.
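A minimal environment configuration for pruning might look like the following. The specific values are illustrative, and EXECUTIONS_DATA_PRUNE_MAX_COUNT is an additional, optional cap on the total number of stored executions:

```env
# Prune old execution data automatically (age threshold is in hours)
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168            # example: keep roughly 7 days of executions
EXECUTIONS_DATA_PRUNE_MAX_COUNT=10000  # optional: also cap total stored executions
```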
3. Keep only errors
Set 'Save failed executions' to true but 'Save successful executions' to false for most workflows.
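In the exported workflow JSON, these options live under the workflow's settings object. A sketch, assuming n8n's current key names (the workflow name is illustrative):

```json
{
  "name": "order-sync",
  "settings": {
    "saveDataSuccessExecution": "none",
    "saveDataErrorExecution": "all"
  }
}
```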
4. Use external logging
For audit trails, send execution data to external logging systems (Elasticsearch, CloudWatch) instead of n8n's database.
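One way to do this is to have the last node of a workflow (or an error workflow) build a compact execution summary and ship only that summary to your logging system, rather than persisting the full execution payload in n8n's database. A minimal sketch in Python; the field names and workflow details are illustrative, not an n8n API:

```python
import json
from datetime import datetime, timezone


def build_execution_log(workflow_name, execution_id, status, started_at, finished_at):
    """Build a compact, indexable log document for one workflow execution.

    These fields are illustrative; adapt them to your logging schema
    (e.g. an Elasticsearch index mapping or a CloudWatch log format).
    """
    return {
        "workflow": workflow_name,
        "execution_id": execution_id,
        "status": status,
        "started_at": started_at.isoformat(),
        "finished_at": finished_at.isoformat(),
        "duration_ms": int((finished_at - started_at).total_seconds() * 1000),
    }


# Example: serialize a summary for shipping to an external log store.
doc = build_execution_log(
    "order-sync",  # hypothetical workflow name
    "12345",
    "success",
    datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
    datetime(2024, 1, 1, 12, 0, 2, tzinfo=timezone.utc),
)
print(json.dumps(doc))
```

A document like this is a few hundred bytes per execution, versus potentially megabytes for a full execution record, so the audit trail stays cheap to store and fast to query.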