Ollama + n8n Integration
Local Setup & Workflow Automation
Part 1: Download & Use Ollama Locally
- Download Ollama from ollama.com
- Install it for your operating system (Windows/macOS/Linux)
- Open terminal/command prompt
- Pull a model: ollama pull llama2
- Run the model: ollama run llama2
- Test with a simple prompt
Note: Ollama runs entirely on your machine - no internet connection is required after the model has been downloaded
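The steps above can be sketched as a quick terminal check. This assumes the Ollama app/daemon is running on its default port (11434, per the note in Part 3); the commands that need Ollama installed are left commented so the sketch itself is safe to run anywhere.

```shell
# Ollama's local REST API listens here by default.
OLLAMA_URL="http://localhost:11434"

# List the models you have pulled (requires Ollama to be running):
# curl "$OLLAMA_URL/api/tags"

# One-off prompt from the CLI, matching the steps above:
# ollama pull llama2
# ollama run llama2 "Say hello in one sentence."

echo "Ollama endpoint: $OLLAMA_URL"
```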
Part 2: Download & Use n8n Locally
- Install Node.js (a recent LTS release) if not already installed
- Install n8n globally: npm install n8n -g
- Start n8n: n8n
- Open browser to http://localhost:5678
- Create your first workflow
- Test with a simple trigger and action
Note: n8n provides a visual workflow builder for automation
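A minimal sketch of the install-and-run steps above. The `npx` line is an assumption for trying n8n without a global install; the port matches the default editor URL mentioned above. Commands that actually install or start n8n are commented out.

```shell
# n8n's web editor is served on this port by default.
N8N_PORT=5678

# Global install, then launch:
# npm install n8n -g
# n8n

# Or run once without installing globally (assumption: recent Node.js):
# npx n8n

echo "n8n editor: http://localhost:$N8N_PORT"
```

Once the editor loads in your browser, you can build the first workflow visually - no code required.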
Part 3: Connect Ollama to n8n
- In n8n, add a new node
- Search for "HTTP Request" node
- Set the URL to the Ollama generate endpoint: http://localhost:11434/api/generate
- Set method to POST
- Add a JSON body containing the model name and your prompt
- Test the connection
- Build your automated AI workflow
Note: Ollama runs on port 11434 by default
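The JSON body the HTTP Request node sends can be sketched like this. The prompt text is just a placeholder; `"stream": false` asks Ollama for a single JSON response instead of a stream of chunks, which is easier to handle in a workflow. The curl line shows the equivalent request outside n8n and is commented because it needs Ollama running.

```shell
# Build the request body the n8n HTTP Request node should reproduce.
PAYLOAD=$(cat <<'EOF'
{
  "model": "llama2",
  "prompt": "Summarize this in one sentence: n8n is a workflow automation tool.",
  "stream": false
}
EOF
)
echo "$PAYLOAD"

# Equivalent request from the terminal (requires Ollama running locally):
# curl http://localhost:11434/api/generate -d "$PAYLOAD"
```

In n8n, paste the same JSON into the HTTP Request node's body field, then use the node's output (the `response` field of the returned JSON) in downstream nodes.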
Ollama
↓
Local AI Processing
↓
n8n Workflow
↓
Automated Actions