How to Connect n8n to OpenAI: Complete Integration Guide (2026)

Introduction: Why Connect n8n to OpenAI?
You already know that n8n is one of the most powerful automation tools available. With over 400 built-in integrations, native AI nodes, and a fair-code license that puts you in control, it is no wonder businesses are moving away from per-execution pricing models like Zapier or Make.
But here is where things get really interesting: when you connect n8n to OpenAI, your workflows stop being simple if-this-then-that automations and start becoming intelligent. You can generate content, summarise documents, analyse customer messages, classify leads, translate emails, and even build AI agents that remember past conversations.
The best part? n8n gives you complete control. You plug in your own OpenAI API key, pay OpenAI's direct rates (no markup), and run everything from your own infrastructure.
Where you host n8n affects reliability, maintenance burden, scalability, and cost - which is why choosing the right environment matters. In this guide, we walk through everything you need to know to get your n8n and OpenAI integration up and running smoothly.
What You Can Build with n8n + OpenAI
Before we get into the technical steps, let's look at what is possible. The n8n and OpenAI integration opens up a wide range of automation possibilities:
| Use Case | What It Does |
|---|---|
| Customer support automation | Draft replies to incoming support tickets, categorise messages by urgency, and suggest resolutions |
| Content generation | Generate blog outlines, social media posts, product descriptions, and email newsletters |
| Lead qualification | Analyse form submissions, classify leads by intent, and route them to the right salesperson |
| Document summarisation | Take long PDFs, transcripts, or reports and generate concise summaries |
| AI-powered chatbots | Build conversational agents that remember context and can search the web |
| Translation & localisation | Automatically translate customer messages, product listings, or internal communications |
| Sentiment analysis | Monitor customer feedback and flag negative comments for immediate follow-up |
These are just starting points. Once you understand the building blocks, you can create almost anything.
Prerequisites
Before you begin, make sure you have:
- A running n8n instance - either self-hosted on a VPS or using a managed platform. For a deeper look at the trade-offs, check out our n8n VPS vs managed hosting guide.
- An OpenAI account with API access (sign up at platform.openai.com)
- A basic understanding of n8n workflows (triggers, nodes, and connections)
Important: A ChatGPT Plus subscription ($20/month) does not give you API credits. The OpenAI API is billed separately on a pay-as-you-go basis. You need to add a payment method to your OpenAI account before n8n can send requests.
Step 1: Get Your OpenAI API Key
If you have not already, here is how to get your API key. The n8n docs outline a straightforward process:
- Go to platform.openai.com and sign in (or create an account).
- Navigate to API Keys in the left sidebar.
- Click Create new secret key.
- Give it a name (for example, n8n production) and choose the permissions you need.
- Copy the key immediately - OpenAI will not show it again.
Security tip: Store your API key securely. Never commit it to GitHub or share it in logs. In n8n, you'll store it in the credentials manager, which encrypts it automatically.
Step 2: Set Up OpenAI Credentials in n8n
Now, let's add your API key to n8n.
- In your n8n instance, go to Settings -> Credentials.
- Click Add Credential.
- Search for OpenAI (or OpenAI Chat Model, depending on your n8n version).
- Paste your API key into the appropriate field.
- Optionally, add an Organisation ID if you belong to one.
- Click Save.
The credential should now be available for any OpenAI node in your workflows. If you plan to use multiple models (for example, GPT-4o for complex tasks and GPT-4o-mini for cheaper operations), you can reuse the same credential.
Step 3: Two Ways to Call OpenAI in n8n
There are two main approaches to integrating OpenAI into your workflows. Understanding both helps you choose the right one for your use case.
Approach A: The HTTP Request Node (Full Control)
The HTTP Request node gives you complete flexibility. You can call any OpenAI endpoint - Chat Completions, Completions, Embeddings, Moderation, and more - with custom headers and payloads.
Pros: Maximum control, works with any API endpoint, no node updates required.
Cons: Requires you to build the request payload manually and parse the response.
Example payload for Chat Completions:
```json
{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Summarise this email: {{$json.email_body}}" }
  ],
  "temperature": 0.7
}
```
This approach is great for developers who want fine-grained control over every aspect of the API call.
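If you want to sanity-check your API key and payload outside n8n first, here is a minimal Python sketch using only the standard library. It builds the same payload shown above (the `OPENAI_API_KEY` environment variable and the `email_body` substitution are assumptions for this standalone test):

```python
import json
import os
import urllib.request

def build_payload(email_body: str) -> dict:
    """Build the same Chat Completions payload shown above."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": f"Summarise this email: {email_body}"},
        ],
        "temperature": 0.7,
    }

def send_chat_completion(payload: dict) -> dict:
    """POST the payload to the Chat Completions endpoint."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

If this works from your terminal but the same call fails in n8n, the problem is almost always the credential configuration, not your OpenAI account.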
Approach B: The Built-in OpenAI Nodes (Simpler)
n8n provides dedicated nodes that wrap the OpenAI API:
- OpenAI Chat Model - For conversational and instruction-following tasks
- OpenAI Node (legacy) - For older completions and edit operations
These nodes simplify configuration. You select the model, enter your prompt, and n8n handles the rest.
Pros: Faster to set up, less error-prone, automatically handles authentication.
Cons: Limited to what the node supports (newer endpoints may not be available immediately).
For most users, the built-in OpenAI Chat Model node is the easiest starting point.
Step 4: Build Your First AI-Powered Workflow
Let's build a simple but useful workflow: automatic email summarisation.
Workflow Overview
- Trigger: When a new email arrives (for example, via IMAP or Gmail node)
- Process: Extract the email body
- AI Step: Send the email body to OpenAI with a summarisation prompt
- Action: Save the summary to Google Sheets or send it to Slack
Step-by-Step
- Add a trigger node – Choose Gmail Trigger (or Email Trigger) to watch for new emails.
- Extract the email body – Use an Item Lists node to grab the text from the email.
- Add an OpenAI Chat Model node:
  - Select the credential you created earlier
  - Choose model: `gpt-4o-mini` (great balance of cost and quality for summarisation)
  - System prompt: `You are a helpful assistant who summarises emails concisely.`
  - User prompt: `Summarise this email: {{$json.email_body}}`
- Send the summary – Add a Slack node to post the summary to a channel, or a Google Sheets node to log it.
That’s it. Now every time a new email arrives, you’ll get a clean summary – no more reading long threads.
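When you wire up the final step (or swap the built-in node for an HTTP Request node while debugging), the summary text comes back nested inside the API response. This sketch shows where to find it; the trimmed `sample_response` below reflects the standard Chat Completions response shape:

```python
def extract_summary(response: dict) -> str:
    """Pull the assistant's text out of a Chat Completions response.

    The text lives at choices[0].message.content in the standard
    Chat Completions response format.
    """
    return response["choices"][0]["message"]["content"].strip()

# A trimmed example of the response shape returned by the API:
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "Invoice #123 is overdue; customer asks for a payment link.",
            }
        }
    ]
}
```

In n8n expression syntax, the equivalent reference is `{{$json.choices[0].message.content}}` when working with the raw HTTP response.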
Step 5: Building an AI Agent with Memory and Tools
For more advanced use cases - like a chatbot that remembers previous conversations or can search the web - you'll want to use n8n's AI Agent node.
The AI Agent node acts as the brain of your workflow. It orchestrates the language model, memory, and external tools to handle complex tasks.
Components of an AI Agent
| Component | Role |
|---|---|
| AI Agent node | Orchestrates the entire process - decides when to use memory, when to call tools, and what response to generate |
| OpenAI Chat Model | The language model that does the reasoning and response generation |
| Memory node | Stores conversation history so the agent can refer back to previous messages |
| Tools | External actions the agent can take (for example, web search, database lookup, email sending) |
Example: A Context-Aware Chatbot with Web Search
Let's say you want a chatbot that can remember what you've already discussed and search the web for current information when needed.
- Start with a Chat Trigger - This node listens for incoming messages from your chat interface (for example, a web widget or Slack).
- Add an AI Agent node - This is the orchestrator.
- Connect an OpenAI Chat Model - Choose `gpt-4o` or `gpt-4o-mini` as the reasoning model.
- Add a Memory node - The Simple Memory node stores recent conversation turns, so the agent knows what you talked about earlier.
- Add a Tool - The HTTP Request node, configured to call SerpAPI (or any search API), gives the agent the ability to fetch live data from the web.
Now your agent can handle questions like "What was that link you shared earlier?" (memory) and "What's the weather like in Tokyo today?" (web search) in the same conversation.
Best practice: Consult memory first, then use tools selectively, and always summarise external results instead of returning raw search output.
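The orchestration the AI Agent node performs can be pictured as a small loop: consult memory, decide whether a tool is needed, then generate the reply. This is a conceptual sketch only, not n8n's internal implementation - the `llm_decide` and `llm_answer` callables stand in for calls to the language model:

```python
def run_agent_turn(user_message, memory, tools, llm_decide, llm_answer):
    """One turn of a memory-first, tools-second agent loop (conceptual sketch).

    memory     - list of previous (role, text) pairs
    tools      - dict mapping tool name -> callable
    llm_decide - returns (tool_name, query) or (None, None); stands in for the model
    llm_answer - produces the final reply from context; stands in for the model
    """
    memory.append(("user", user_message))
    tool_name, query = llm_decide(user_message, memory)   # consult memory first
    tool_result = tools[tool_name](query) if tool_name else None
    reply = llm_answer(memory, tool_result)               # summarise, don't dump raw output
    memory.append(("assistant", reply))
    return reply
```

The key design choice mirrors the best practice above: the tool call is conditional, so the agent answers from memory when it can and only reaches for external data when the model decides it must.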
Step 6: Real-World Use Cases
Here are a few practical workflows you can build today:
1. AI-Powered Lead Qualification
- Trigger: New form submission from your website (for example, Typeform or Webhook node)
- Process: Send the form data to OpenAI with a prompt to classify the lead (for example, hot, warm, cold)
- Action: Route the lead to the appropriate CRM pipeline or notify the right salesperson
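Classification is most reliable when you force the model to a fixed label set and validate whatever comes back before routing on it. A hedged sketch of both halves - the label set and the fallback value here are illustrative choices, not part of any n8n node:

```python
LABELS = {"hot", "warm", "cold"}

def build_classification_prompt(form_data: dict) -> str:
    """Prompt that constrains the model to one of three labels."""
    return (
        "Classify this lead as exactly one of: hot, warm, cold. "
        "Reply with the single word only.\n\n"
        f"Form submission: {form_data}"
    )

def parse_label(model_reply: str) -> str:
    """Normalise the model's reply; fall back to 'warm' if it drifts off-label."""
    label = model_reply.strip().lower().rstrip(".")
    return label if label in LABELS else "warm"
```

Validating the label before the routing step means a chatty model reply never sends a lead down a branch your workflow does not have.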
2. Meeting Summariser
- Trigger: Meeting transcript uploaded to Google Drive or received via email
- Process: Send the transcript to OpenAI with a prompt to generate key points, action items, and decisions
- Action: Create a Google Doc with the summary and email it to all participants
3. Multi-Language Customer Support
- Trigger: New support ticket in a non-English language
- Process: Detect the language, translate to English using OpenAI, analyse sentiment, then translate the reply back
- Action: Post the translated reply to the ticket system
These workflows can run fully automatically, saving hours of manual work every week.
Step 7: Where to Host Your n8n Instance to Run This 24/7
All of these powerful automations depend on one thing: your n8n instance needs to be online and available around the clock. If your n8n instance goes offline, your AI workflows stop running - webhooks are missed, leads go unqualified, and support tickets pile up.
This is where your choice of hosting becomes critical.
The Hosting Options at a Glance
| Option | Best For | Maintenance Responsibility |
|---|---|---|
| Self-hosted VPS (Hetzner, DigitalOcean) | Developers who enjoy infrastructure work | You handle everything: updates, security, backups, SSL |
| Platform as a Service (Railway, Render) | Developers who want code control without server management | You manage environment variables and config; the platform handles the server |
| Managed hosting (Agntable, n8n Cloud) | Anyone who wants zero maintenance and 24/7 reliability | Provider handles everything |
Why Managed Hosting Makes Sense for Always-On AI Workflows
When you're running AI agents that interact with customers or trigger based on webhooks, reliability is not optional. A self-hosted VPS might cost only $4/month, but the hidden costs - your time for setup, security patches, backup verification, and incident response - can easily reach $150-250/month.
With a managed platform like Agntable, you get:
- 24/7 uptime monitoring - automatic issue resolution ensures your n8n instance stays online
- Built-in SSL and automatic updates - no manual certificate renewals or security patches
- Daily backups - your workflows and credentials are never lost
- Dedicated resources - no noisy neighbour performance issues
Deploying a production-ready n8n instance with managed n8n hosting from Agntable takes minutes, not hours of YAML debugging. You can try it risk-free with a 7-day free trial.
Bottom line: If your AI workflows are important to your business or personal productivity, choosing a managed hosting option saves you time and gives you peace of mind.
Step 8: Cost Estimation
One of the biggest advantages of using n8n with your own OpenAI API key is that you pay direct OpenAI rates - no markup.
| Model | Approximate Cost per 1M tokens (input/output) |
|---|---|
| gpt-4o | ~$2.50 / $10.00 |
| gpt-4o-mini | ~$0.15 / $0.60 |
| gpt-4.1 | ~$5.00 / $20.00 |
For most real-world automations, the cost is surprisingly low:
- Email summarisation: ~500 tokens per email -> $0.0002-$0.002 per email
- Chatbot conversation (5-10 turns): ~2,000 tokens -> $0.001-$0.01 per conversation
- Document summarisation (10 pages): ~10,000 tokens -> $0.01-$0.05 per document
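To budget before you build, you can turn the table above into a small calculator. The rates below are the approximate figures from that table (they change, so check OpenAI's pricing page before relying on them):

```python
# Approximate per-1M-token rates from the table above (input, output), in USD.
RATES = {
    "gpt-4o":      (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one call at the table's approximate rates."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000
```

For example, a 400-token email plus a 100-token summary on `gpt-4o-mini` comes out around $0.00012 per email, so even 1,000 emails a month costs roughly twelve cents.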
Tip: Start with gpt-4o-mini. It's fast, cheap, and handles most summarisation, classification, and extraction tasks very well. Reserve GPT-4o for tasks that require complex reasoning or creative writing.
Step 9: Common Troubleshooting
Even with the right setup, things can go wrong. Here are the most common issues and how to fix them.
OPENAI_API_KEY environment variable is missing or empty
Why it happens: n8n cannot find your API key. This usually means the credential was not saved correctly or you are using an older node that expects an environment variable.
Fix: Go to Settings -> Credentials, check that your OpenAI credential is correctly configured and connected to the node. If you are using a community node that requires an environment variable, set it in your n8n configuration file.
You exceeded your current quota / too many requests
Why it happens: Your OpenAI account has no remaining credits or you have hit your rate limit. A ChatGPT Plus subscription does not give you API credits - the API is billed separately.
Fix: Go to platform.openai.com/account/billing, add a payment method, and set up pay-as-you-go billing. Once that is done, n8n can send requests again.
The resource you are requesting could not be found
Why it happens: You are trying to use a model name that does not exist or is not available to your account (for example, gpt-5).
Fix: Check the OpenAI models documentation for the correct model names. Use gpt-4o, gpt-4o-mini, or gpt-4.1 for current stable models.
Silent failures in the chat node
Why it happens: The data being passed to the OpenAI node is malformed or missing required fields.
Fix: Use the Execute Workflow tab to trace the data flow through each node. Look for missing or malformed inputs before they reach the OpenAI node.
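In n8n you would typically add this guard with an IF or Code node in front of the AI step; the equivalent check as a standalone Python sketch (the `email_body` field name matches the earlier workflow and is otherwise an assumption):

```python
def validate_email_item(item: dict) -> dict:
    """Reject items with a missing or empty email_body before the AI step."""
    body = item.get("email_body")
    if not isinstance(body, str) or not body.strip():
        raise ValueError("email_body is missing or empty; refusing to call OpenAI")
    return item
```

Failing loudly before the OpenAI node is cheaper and easier to debug than a silent empty completion afterwards.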
Best Practices for Production
Once your workflow is working, follow these best practices to keep it reliable:
- Store API keys in n8n's encrypted credentials store - never hardcode them in node parameters or paste them into exported workflow JSON.
- Avoid logging raw credentials or exposing them in node output. Use n8n's built-in expression editor to mask sensitive data.
- Set up error handling - use the Error Workflow feature to catch failed OpenAI calls and retry or log them.
- Monitor usage - set a monthly budget in your OpenAI account to avoid surprise bills.
- Start with cheaper models - use `gpt-4o-mini` for testing and simple tasks, then upgrade to `gpt-4o` only when needed.
Conclusion: Your Next Step
Connecting n8n to OpenAI opens up a world of intelligent automation. Whether you are summarising emails, building chatbots, or classifying leads, the combination is powerful and cost-effective.
The best part? n8n gives you complete control. You choose where to host it, you control your API keys, and you pay only for what you use.
If you are still setting up your n8n environment, check out our best n8n hosting providers guide to find the right hosting option for your needs. Already running n8n but running into Docker issues? Our n8n Docker setup guide covers the five most common failure points and how to fix them.
Now go build something intelligent.
Frequently Asked Questions
Q: Do I need a ChatGPT Plus subscription to use OpenAI in n8n?
No. ChatGPT Plus and the OpenAI API are completely separate. The API is billed on a pay-as-you-go basis, while ChatGPT Plus is a fixed monthly subscription for using ChatGPT in the browser or mobile app.
Q: Which OpenAI model should I start with?
Start with gpt-4o-mini. It is fast, very cheap, and capable enough for most summarisation, classification, and extraction tasks. Reserve GPT-4o for complex reasoning or creative writing.
Q: Can I use other AI providers with n8n?
Yes. n8n supports Anthropic (Claude), Google Gemini, and local models via Ollama, as well as any OpenAI-compatible API.
Q: What's the difference between the HTTP Request node and the built-in OpenAI nodes?
The HTTP Request node gives you full control over the API call - you build the payload and parse the response. The built-in nodes are simpler but limited to what n8n supports out of the box.
Q: How do I handle long conversations or large documents?
Use a Memory node (like Simple Memory) to store conversation context. For large documents, consider breaking them into smaller chunks and processing them in sequence, or using a model with a larger context window (for example, gpt-4o has a 128k context window).
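For the chunking approach mentioned above, a minimal sketch - the chunk size and overlap are illustrative values, chosen so each piece fits comfortably inside the model's context window while overlapping slightly so no sentence is cut off between chunks:

```python
def chunk_text(text: str, chunk_size: int = 3000, overlap: int = 200) -> list[str]:
    """Split text into overlapping fixed-size chunks for sequential processing."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk is summarised in sequence, and the per-chunk summaries can then be combined in a final call.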