Performance issue

Large JSON parsing in n8n

Why processing large JSON payloads can exhaust memory and crash your workflow

What is this issue?

When n8n parses very large JSON responses (typically over 10MB), it loads the entire payload into memory. This can cause out-of-memory errors, especially when combined with parallel executions or loops.

Common sources of large JSON:

  • API endpoints returning massive datasets without pagination
  • Database queries returning all records at once
  • File contents loaded as base64 in JSON
  • Nested objects with deeply recursive structures

Why is this dangerous?

Memory exhaustion

Large JSON payloads consume heap memory, potentially crashing the n8n process.

Slow parsing

JSON.parse() is synchronous and blocking: parsing a large payload freezes the entire execution until the call completes.
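The effect is easy to demonstrate. The sketch below (payload size is illustrative) builds a synthetic JSON string and parses it in a single synchronous call; for the entire duration of that call, the Node.js event loop can make no other progress, which in n8n means timers, webhooks, and concurrent executions in the same process all stall.

```javascript
// Sketch: JSON.parse is synchronous -- while it runs, nothing else
// (timers, HTTP handlers, other executions in the same process) can run.
// The record count here is illustrative, not a real n8n payload.
const big = JSON.stringify(
  Array.from({ length: 200000 }, (_, i) => ({ id: i, name: 'row-' + i }))
);

const start = Date.now();
const data = JSON.parse(big);   // event loop is blocked for this entire call
const elapsed = Date.now() - start;

console.log(`parsed ${data.length} records in ${elapsed} ms`);
```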

Cascading failures

Memory pressure affects all concurrent workflows, not just the one with large data.

Unpredictable crashes

OOM errors can occur at random points, making debugging difficult.

How to fix it

  1. Request paginated data

    Use API pagination to fetch data in smaller chunks instead of all at once.
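A minimal sketch of the pagination loop. `fetchPage` here is a hypothetical stand-in for a real paginated API call; in n8n this logic would typically live in a Code node, or be handled by the HTTP Request node's built-in pagination options.

```javascript
// Sketch: fetch a large dataset page by page instead of in one request.
// `fetchPage` simulates a paginated API (hypothetical, for illustration):
// it returns up to `limit` records starting at `offset`.
const DATASET = Array.from({ length: 250 }, (_, i) => ({ id: i }));

function fetchPage(offset, limit) {
  return DATASET.slice(offset, offset + limit);
}

function fetchAllPaginated(limit = 100) {
  const results = [];
  let offset = 0;
  while (true) {
    const page = fetchPage(offset, limit); // each response stays small
    results.push(...page);
    if (page.length < limit) break;        // short page => last page reached
    offset += limit;
  }
  return results;
}
```

Each iteration holds only one page in flight; memory usage is bounded by the page size rather than the full dataset.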

  2. Filter at source

    Add query parameters to limit fields and records returned by APIs or databases.
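As a sketch, a helper that builds a filtered request URL. The parameter names (`fields`, `limit`, `offset`) are common REST conventions, not a specific API's contract; check your API's documentation. The database equivalent is selecting columns explicitly with a row limit, e.g. `SELECT id, name FROM users LIMIT 1000` instead of `SELECT *`.

```javascript
// Sketch: ask the server for only the fields and rows you need,
// so the JSON that comes back is small to begin with.
// Parameter names are common conventions, not a guaranteed API contract.
function buildFilteredUrl(base, { fields, limit, offset } = {}) {
  const url = new URL(base);
  if (fields) url.searchParams.set('fields', fields.join(','));
  if (limit !== undefined) url.searchParams.set('limit', String(limit));
  if (offset !== undefined) url.searchParams.set('offset', String(offset));
  return url.toString();
}
```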

  3. Stream large files

    For file processing, use streaming approaches instead of loading entire files into memory.

  4. Increase memory limits

    If large payloads are unavoidable, raise the Node.js heap limit, e.g. NODE_OPTIONS='--max-old-space-size=4096' for a 4 GB heap.
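For reference, a configuration sketch for setting the heap limit. The 4096 MB value is illustrative; size it to the memory actually available on your host, since a limit larger than physical memory just moves the failure elsewhere.

```shell
# Raise the Node.js heap limit for n8n (value in MB; illustrative).
export NODE_OPTIONS="--max-old-space-size=4096"
n8n start

# Docker Compose equivalent (fragment):
#   environment:
#     - NODE_OPTIONS=--max-old-space-size=4096
```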
