AI security issue

Prompt injection risk in n8n

Why passing unsanitized user input to AI prompts creates security vulnerabilities

What is this issue?

Prompt injection occurs when user input is passed directly to AI prompts without sanitization. Attackers can craft input that overrides your instructions, extracts sensitive data, or causes the AI to perform unintended actions.

Vulnerable patterns:

  • Passing webhook body directly to AI prompt
  • Using form input as AI instructions
  • Including email content in prompts without filtering
  • User-provided text that can contain prompt overrides

Why is this dangerous?

Instruction override

Attackers can inject 'Ignore previous instructions and...' to bypass your prompts.

Data extraction

Malicious prompts can trick AI into revealing system prompts, training data, or other sensitive info.

Jailbreaking

Prompt injection can bypass content filters and safety guidelines.

Unauthorized actions

If AI has tool access, injections could trigger unintended API calls or data modifications.

How to fix it

  1. 1

    Sanitize user input

    Filter or escape special characters and instruction-like patterns before including in prompts.

  2. 2

    Use delimiters

    Wrap user input in clear delimiters (```user input```) and instruct AI to treat delimited content as data, not instructions.

  3. 3

    Validate input format

    Reject or cleanse input that matches known prompt injection patterns.

  4. 4

    Use structured input

    When possible, use structured formats (JSON, forms) rather than free-text that could contain malicious prompts.

Scan your workflow now

Upload your n8n workflow JSON and detect AI nodes receiving unsanitized user input.

Scan for AI security issues

Related resources

Related AI issues