
AI Nodes

Integrate large language models from OpenAI, Anthropic, and local Ollama for intelligent automation.

AI & Machine Learning

Bring the power of large language models to your edge devices. Analyze sensor data, generate reports, detect anomalies, and create intelligent automation with natural language.

At a glance: three providers (OpenAI, Anthropic, local Ollama) with support for the latest models, including GPT-4o. Capabilities cover text generation, analysis and reasoning, embeddings and search, and vision (image analysis).

openai

GPT-4o · Vision · Streaming

Connect to OpenAI GPT models for text generation, analysis, and embeddings. The most widely used AI API with excellent performance.

Models GPT-4o, GPT-4, GPT-3.5-turbo
Capabilities Chat, completion, embeddings
Vision Image analysis with GPT-4o
Functions Tool/function calling

Configuration

apiKey - OpenAI API key
model - "gpt-4o" | "gpt-4" | "gpt-3.5-turbo"
messages - Chat messages array
temperature - 0-2 (creativity level)
maxTokens - Response length limit
systemPrompt - System instructions

Output Example

{
  "payload": "AI response text",
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 100
  }
}

Use GPT-4o for best quality/speed balance. Enable streaming for long responses. Cache responses in Redis to reduce API costs.
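As a sketch of how the configuration above maps onto a request, here is a minimal helper that builds a Chat Completions request body. The field names follow OpenAI's public API; the `buildOpenAIRequest` helper and the config object shape are illustrative, mirroring the node parameters listed above.

```javascript
// Build the JSON body an OpenAI node might POST to
// https://api.openai.com/v1/chat/completions.
function buildOpenAIRequest(config) {
  const messages = [];
  if (config.systemPrompt) {
    // The system prompt becomes the first message in the array.
    messages.push({ role: "system", content: config.systemPrompt });
  }
  messages.push(...config.messages);
  return {
    model: config.model,
    messages,
    temperature: config.temperature ?? 1, // 0-2, higher = more creative
    max_tokens: config.maxTokens,         // cap the response length
  };
}

const body = buildOpenAIRequest({
  model: "gpt-4o",
  systemPrompt: "You are a sensor-data analyst.",
  messages: [{ role: "user", content: "Summarize today's temperature readings." }],
  temperature: 0.2,
  maxTokens: 200,
});
console.log(JSON.stringify(body, null, 2));
```

A low temperature (0.2 here) keeps analytical answers deterministic; reserve higher values for creative output.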

anthropic

Claude 3.5 · 200K Context · Reasoning

Integrate Claude models for advanced reasoning and long-context understanding. Excellent for complex analysis tasks.

Models Claude 3.5 Sonnet, Opus, Haiku
Context Up to 200K tokens
Vision Image understanding
Safety Built-in content filtering

Configuration

apiKey - Anthropic API key
model - "claude-3-5-sonnet-20241022"
messages - Conversation messages
maxTokens - Response length (required)
systemPrompt - System instructions
temperature - 0-1 (default 1)

Output Example

{
  "payload": "Claude response",
  "stop_reason": "end_turn",
  "usage": {...}
}

Claude excels at analysis, coding, and long documents. Use Haiku for speed, Opus for complex reasoning, Sonnet for balanced performance.
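Anthropic's Messages API differs from OpenAI's in two ways that matter for the configuration above: the system prompt is a top-level field rather than a message, and max_tokens is required. A sketch (the helper and config shape are illustrative; the field names follow Anthropic's public API):

```javascript
// Build the JSON body an Anthropic node might POST to
// https://api.anthropic.com/v1/messages.
function buildAnthropicRequest(config) {
  if (!config.maxTokens) {
    // Unlike OpenAI, the Messages API rejects requests without max_tokens.
    throw new Error("maxTokens is required");
  }
  return {
    model: config.model,
    system: config.systemPrompt,          // top-level field, not a message
    messages: config.messages,            // user/assistant turns only
    max_tokens: config.maxTokens,
    temperature: config.temperature ?? 1, // 0-1 for Claude
  };
}

const req = buildAnthropicRequest({
  model: "claude-3-5-sonnet-20241022",
  systemPrompt: "Answer concisely.",
  messages: [{ role: "user", content: "Explain this error log." }],
  maxTokens: 512,
});
```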

ollama

100% Local · Privacy · Offline

Run local LLMs with Ollama for privacy-focused, offline AI processing. No data ever leaves your device.

Models Llama 3, Mistral, Phi, CodeLlama
Privacy 100% local processing
Offline Works without internet
Cost No per-token charges

Configuration

host - http://localhost:11434 (default)
model - "llama3" | "mistral" | "phi"
prompt - Input prompt text
system - System prompt
temperature - Creativity (0-2)
context - Conversation context array

Output Example

{
  "payload": "Local LLM response",
  "model": "llama3",
  "eval_count": 150
}

Install Ollama first from ollama.ai, then pull models with ollama pull llama3. On a Raspberry Pi, use smaller models such as phi or tinyllama.
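A sketch of the request the node sends to a local Ollama server's /api/generate endpoint (field names follow Ollama's REST API; the helper is illustrative). Setting stream to false returns a single JSON object instead of a token stream:

```javascript
// Build the JSON body for POST http://localhost:11434/api/generate.
function buildOllamaRequest(config) {
  return {
    model: config.model,
    prompt: config.prompt,
    system: config.system,
    options: { temperature: config.temperature ?? 0.8 },
    context: config.context, // pass the previous response's context to continue a chat
    stream: false,           // one JSON reply rather than a token stream
  };
}

const reqBody = buildOllamaRequest({
  model: "llama3",
  prompt: "Why is the sky blue?",
  temperature: 0.2,
});

// Usage (requires a running Ollama instance):
// const res = await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(reqBody),
// });
```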

AI-Powered IoT Use Cases

Transform your edge devices with intelligent automation

Anomaly Detection

Analyze sensor patterns and detect unusual behavior that might indicate equipment failure.

Report Generation

Automatically summarize daily sensor data into human-readable reports.

Log Analysis

Parse and understand error logs to identify root causes and suggest fixes.

Image Classification

Use vision models to classify camera images for quality control or security.

Natural Language Control

Control devices with voice or text: "Turn on the lights when it's dark".

Predictive Maintenance

Predict equipment failures before they happen using historical sensor data.

Cloud vs Local AI

Choose the right approach for your use case

Cloud (OpenAI, Anthropic)
+ Most powerful models (GPT-4o, Claude)
+ No local resources needed
+ Always up-to-date models
- Requires internet connection
- Per-token costs
Local (Ollama)
+ Complete privacy - data never leaves device
+ No ongoing costs
+ Works offline
- Requires decent hardware (4GB+ RAM)
- Smaller models (Llama 3, Mistral)

Quick Tips

Reduce Token Usage

Use function nodes to pre-process data before sending to AI models.
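For example, a function node can condense raw sensor readings into summary statistics before prompting the model, sending a few numbers instead of hundreds of rows (the helper below is a sketch; the reading shape is an assumption):

```javascript
// Reduce an array of { value } readings to compact summary statistics.
function summarizeReadings(readings) {
  const values = readings.map(r => r.value);
  const min = Math.min(...values);
  const max = Math.max(...values);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return `count=${values.length} min=${min} max=${max} mean=${mean.toFixed(2)}`;
}

// The resulting prompt is a few dozen tokens no matter how many readings came in:
const prompt = "Assess these temperature stats for anomalies: " +
  summarizeReadings([{ value: 21.5 }, { value: 22.1 }, { value: 35.0 }]);
```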

Cache Responses

Store AI responses in Redis to avoid redundant API calls for similar inputs.

Edge Devices

On a Raspberry Pi, try Ollama with phi or tinyllama, small models optimized for edge devices.

API Key Security

Store API keys in environment variables, never hardcode them in flows.
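A minimal sketch of resolving a key at runtime so it never appears in an exported flow; the variable name is an example, not a requirement of any provider:

```javascript
// Read a required value from the environment and fail fast if it is missing.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set; export it in the environment`);
  }
  return value;
}

// Usage: throws at startup instead of failing mid-flow with a bad key.
// const apiKey = requireEnv("OPENAI_API_KEY");
```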