Dify AI Platform Complete Guide: Build LLM Apps Without Code in 2026

Crazyrouter Team
February 27, 2026

Building AI-powered applications used to require deep expertise in prompt engineering, API integration, and backend development. Dify changes that. It's an open-source platform that lets you build LLM applications — chatbots, AI agents, RAG pipelines, and complex workflows — using a visual drag-and-drop interface.

Whether you're a developer looking to prototype faster or a non-technical team member who wants to build AI tools, this guide covers everything you need to know about Dify in 2026.

What is Dify?#

Dify (a blend of "Define" and "Modify") is an open-source LLM application development platform. Think of it as a visual IDE for building AI applications. Instead of writing boilerplate code to handle prompt templates, API calls, context management, and conversation memory, Dify provides a visual canvas where you wire these components together.

The platform supports:

  • Chatbot applications with conversation memory and context
  • Text generation apps for content creation, summarization, and translation
  • AI agents that can use tools, search the web, and execute multi-step reasoning
  • Complex workflows with branching logic, loops, and conditional execution
  • RAG (Retrieval-Augmented Generation) pipelines with built-in knowledge base management

Dify is backed by a strong open-source community with over 90,000 GitHub stars and is used by thousands of companies worldwide.

Key Features#

Visual Workflow Builder#

Dify's workflow builder is its standout feature. You can create complex AI pipelines by dragging and connecting nodes on a canvas:

  • LLM nodes: Call any supported language model
  • Knowledge retrieval nodes: Query your uploaded documents
  • Code nodes: Run custom Python or JavaScript
  • Conditional nodes: Branch logic based on outputs
  • HTTP request nodes: Call external APIs
  • Variable aggregator: Combine outputs from multiple branches
  • Iteration nodes: Loop over arrays of data
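
As an illustration of the Code node: Dify's Python code nodes expect a `main` function whose parameters are the input variables mapped in the node settings and whose returned dict keys become the node's output variables (this convention follows Dify's code-node template; verify against your version). A hypothetical tax-calculation node:

```python
def main(order_total: float, country: str) -> dict:
    # Input variables are mapped in the node's settings; the returned
    # dict keys become output variables for downstream nodes.
    tax_rate = 0.19 if country == "DE" else 0.0
    return {
        "tax": round(order_total * tax_rate, 2),
        "total_with_tax": round(order_total * (1 + tax_rate), 2),
    }
```

Downstream nodes — an LLM node's prompt or a conditional branch — can then reference `tax` and `total_with_tax` directly.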

RAG Pipeline#

Dify includes a complete RAG (Retrieval-Augmented Generation) system:

  • Upload documents (PDF, Word, TXT, Markdown, HTML)
  • Automatic chunking with configurable strategies
  • Multiple embedding model support
  • Vector database integration (Weaviate, Qdrant, Pinecone, pgvector)
  • Hybrid search (semantic + keyword)
  • Citation tracking in responses
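
On self-hosted instances the knowledge base can also be queried over HTTP, which is handy for testing retrieval quality outside a workflow. A minimal sketch, assuming Dify's dataset retrieval endpoint and `retrieval_model` fields (both may differ across Dify versions; the API key and dataset ID are placeholders):

```python
import requests

BASE_URL = "http://localhost/v1"
DATASET_API_KEY = "your-dify-dataset-api-key"  # placeholder

def build_retrieval_payload(query: str, top_k: int = 5) -> dict:
    # Hybrid search combines semantic (vector) similarity with
    # keyword matching, as described above.
    return {
        "query": query,
        "retrieval_model": {
            "search_method": "hybrid_search",
            "top_k": top_k,
            "reranking_enable": False,
        },
    }

def retrieve(dataset_id: str, query: str) -> dict:
    resp = requests.post(
        f"{BASE_URL}/datasets/{dataset_id}/retrieve",
        headers={"Authorization": f"Bearer {DATASET_API_KEY}"},
        json=build_retrieval_payload(query),
    )
    resp.raise_for_status()
    return resp.json()
```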

Agent Framework#

Build AI agents that can reason and use tools:

  • Function calling with custom tool definitions
  • Built-in tools: web search, calculator, weather, Wikipedia
  • Custom API tool integration
  • ReAct and Function Call agent strategies
  • Multi-step reasoning with tool use
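
Custom API tools are added by importing an OpenAPI schema (Tools → Custom in the dashboard); the agent then decides when to call each `operationId`. A minimal schema sketch for a hypothetical order-status API:

```python
import json

# Minimal OpenAPI 3 schema for a hypothetical order-status API.
order_status_schema = {
    "openapi": "3.0.0",
    "info": {"title": "Order Status API", "version": "1.0.0"},
    "servers": [{"url": "https://api.example.com"}],
    "paths": {
        "/orders/{order_id}": {
            "get": {
                "operationId": "getOrderStatus",
                "summary": "Look up the status of an order by ID",
                "parameters": [{
                    "name": "order_id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {"200": {"description": "Order status"}},
            }
        }
    },
}

schema_json = json.dumps(order_status_schema, indent=2)
```

The `summary` text matters: the agent reads it to decide when the tool is relevant to the user's request.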

Model Support#

Dify supports virtually every major LLM provider:

  • OpenAI (GPT-4o, GPT-4, GPT-3.5)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Opus)
  • Google (Gemini Pro, Gemini Ultra)
  • Meta (Llama 3)
  • Mistral, Cohere, and many more
  • Any OpenAI-compatible API endpoint

This last point is key — any service that exposes an OpenAI-compatible API works with Dify, which is exactly how Crazyrouter integrates.
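
Because the wire format is OpenAI's, a plain POST to `/chat/completions` is all it takes. A minimal sketch (the base URL matches the Crazyrouter configuration shown in this guide; the key and model name are placeholders):

```python
import requests

def build_chat_payload(model: str, user_message: str) -> dict:
    # The standard OpenAI chat-completions request shape.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat_completion(base_url: str, api_key: str, model: str, prompt: str) -> str:
    resp = requests.post(
        f"{base_url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_chat_payload(model, prompt),
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# e.g. chat_completion("https://crazyrouter.com/v1", "sk-...", "gpt-4o", "Hello")
```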

How to Set Up Dify#

Option 1: Self-Hosted with Docker Compose#

The fastest way to get Dify running is with Docker Compose:

bash
# Clone the repository
git clone https://github.com/langgenius/dify.git
cd dify/docker

# Copy environment configuration
cp .env.example .env

# Start all services
docker compose up -d

This spins up the full Dify stack: web frontend, API server, worker, PostgreSQL, Redis, Weaviate (vector DB), and Nginx. Access the dashboard at http://localhost/install to create your admin account.

System requirements for self-hosting:

  • Minimum: 2 CPU cores, 4GB RAM
  • Recommended: 4 CPU cores, 8GB RAM
  • Docker and Docker Compose installed

Option 2: Dify Cloud#

If you don't want to manage infrastructure, Dify offers a hosted cloud version at cloud.dify.ai. Sign up and start building immediately — no setup required.

Connecting Dify to 300+ AI Models via Crazyrouter#

Here's where it gets interesting. Instead of configuring separate API keys for OpenAI, Anthropic, Google, and every other provider, you can connect Dify to Crazyrouter and get access to 300+ models through a single API key.

Step-by-Step Setup#

  1. Get your Crazyrouter API key at crazyrouter.com

  2. Add Crazyrouter as a model provider in Dify:

    • Go to Settings → Model Providers
    • Click OpenAI-API-compatible
    • Configure:
      • Model Name: gpt-4o (or any model you want)
      • API Key: Your Crazyrouter API key
      • API Base URL: https://crazyrouter.com/v1
  3. Add multiple models — repeat for Claude, Gemini, Llama, or any of the 300+ models available on Crazyrouter

That's it. Now every model you've configured is available in Dify's workflow builder, chatbot settings, and agent configurations.

Why Use Crazyrouter with Dify?#

  • One API key, 300+ models: No need to sign up for OpenAI, Anthropic, Google, etc. separately
  • Cost savings: Crazyrouter offers competitive pricing, often lower than going direct
  • No region restrictions: Access models that may be geo-restricted in your country
  • Unified billing: One bill instead of managing multiple provider accounts
  • Automatic failover: If one provider is down, Crazyrouter can route to alternatives

Building a Chatbot with Dify + Crazyrouter#

Let's build a customer support chatbot that uses RAG to answer questions from your documentation.

Step 1: Create a Knowledge Base#

bash
# You can also upload via the Dify UI
curl -X POST 'http://localhost/v1/datasets' \
  -H 'Authorization: Bearer your-dify-api-key' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "Product Documentation",
    "description": "Company product docs for customer support"
  }'

Upload your documents through the Dify dashboard: Knowledge → Create Knowledge → Upload Files.

Step 2: Create the Chatbot App#

In the Dify dashboard:

  1. Click Create App → Chatbot
  2. Name it "Customer Support Bot"
  3. Under Model, select the model you configured with Crazyrouter (e.g., gpt-4o)
  4. Under Context, add your Knowledge Base
  5. Set the system prompt:
code
You are a helpful customer support assistant. Answer questions based on the provided documentation. If you don't know the answer, say so honestly and suggest contacting human support.

Step 3: Configure and Deploy#

Set conversation parameters:

  • Temperature: 0.3 (lower for more factual responses)
  • Max tokens: 1024
  • Top-K retrieval: 5 documents

Deploy options:

  • Embed on website: Copy the iframe or JavaScript snippet
  • API access: Use Dify's API to integrate into your own application
  • Share link: Generate a public URL for the chatbot

Step 4: Access via API#

python
import requests

DIFY_API_KEY = "your-dify-app-api-key"
BASE_URL = "http://localhost/v1"

response = requests.post(
    f"{BASE_URL}/chat-messages",
    headers={
        "Authorization": f"Bearer {DIFY_API_KEY}",
        "Content-Type": "application/json"
    },
    json={
        "inputs": {},
        "query": "How do I reset my password?",
        "response_mode": "blocking",
        "user": "user-123"
    }
)

result = response.json()
print(result["answer"])
javascript
// Node.js
const axios = require('axios');

const response = await axios.post(
  'http://localhost/v1/chat-messages',
  {
    inputs: {},
    query: 'How do I reset my password?',
    response_mode: 'blocking',
    user: 'user-123'
  },
  {
    headers: {
      'Authorization': 'Bearer your-dify-app-api-key',
      'Content-Type': 'application/json'
    }
  }
);

console.log(response.data.answer);
bash
# cURL
curl -X POST 'http://localhost/v1/chat-messages' \
  -H 'Authorization: Bearer your-dify-app-api-key' \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {},
    "query": "How do I reset my password?",
    "response_mode": "blocking",
    "user": "user-123"
  }'
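
For a typing-indicator style UI, set `response_mode` to `streaming` and read the Server-Sent Events stream instead. A Python sketch; the `event` and `answer` field names follow Dify's streaming API and should be verified against your version:

```python
import json
import requests

def parse_sse_line(line: str):
    """Parse one Server-Sent Events line into a dict; None for keep-alives."""
    if not line.startswith("data: "):
        return None
    return json.loads(line[len("data: "):])

def stream_answer(base_url: str, api_key: str, query: str, user: str = "user-123"):
    resp = requests.post(
        f"{base_url}/chat-messages",
        headers={"Authorization": f"Bearer {api_key}"},
        json={"inputs": {}, "query": query,
              "response_mode": "streaming", "user": user},
        stream=True,
    )
    resp.raise_for_status()
    for raw in resp.iter_lines(decode_unicode=True):
        event = parse_sse_line(raw or "")
        # "message" events carry incremental chunks of the answer text.
        if event and event.get("event") == "message":
            yield event.get("answer", "")

# for chunk in stream_answer("http://localhost/v1", "your-dify-app-api-key",
#                            "How do I reset my password?"):
#     print(chunk, end="", flush=True)
```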

Dify Pricing: Cloud vs Self-Hosted#

Dify Cloud Pricing#

| Plan | Price | Message Credits | Team Members | Apps | Knowledge Storage |
|------|-------|-----------------|--------------|------|-------------------|
| Sandbox | Free | 200 total | 1 | 5 | 50MB |
| Professional | $59/mo | 5,000/mo | 3 | 50 | 5GB |
| Team | $159/mo | 10,000/mo | 50 | 200 | 20GB |
| Enterprise | Custom | Custom | Unlimited | Unlimited | Custom |

Note: Message credits are for Dify's built-in model usage. When you bring your own API key (like from Crazyrouter), you pay the model provider directly and Dify credits aren't consumed for LLM calls.

Self-Hosted (Free)#

Self-hosting Dify is completely free with the open-source Community Edition:

  • Cost: $0 (you pay for server hosting + AI model API calls)
  • All features included: Workflows, RAG, agents, everything
  • No message credit limits
  • Unlimited team members
  • Unlimited apps

The main cost is your server ($5–$50/month for a VPS) plus AI model API usage. Using Crazyrouter for model access keeps API costs low with pay-as-you-go pricing.

Cost Comparison Example#

For a team running 10,000 AI conversations per month:

| Setup | Monthly Cost |
|-------|--------------|
| Dify Cloud Team | $159 + model API costs |
| Dify Self-Hosted + OpenAI Direct | ~$20 server + ~$150 API |
| Dify Self-Hosted + Crazyrouter | ~$20 server + ~$100 API |

Self-hosting with Crazyrouter typically saves 30–50% compared to Dify Cloud, especially at scale.
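
The comparison is plain arithmetic; plugging in the illustrative figures from the table (and assuming roughly the same direct-API spend on the Cloud plan):

```python
def monthly_cost(platform_fee: int, server: int, api: int) -> int:
    """Total monthly spend: platform subscription + server + model API."""
    return platform_fee + server + api

cloud_team  = monthly_cost(159, 0, 150)  # Dify Cloud Team + direct API
self_direct = monthly_cost(0, 20, 150)   # self-hosted + OpenAI direct
self_router = monthly_cost(0, 20, 100)   # self-hosted + Crazyrouter

print(cloud_team, self_direct, self_router)  # 309 170 120
```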

FAQ#

Is Dify free to use?#

Yes, Dify is open-source and free to self-host. The Community Edition includes all core features — workflows, RAG, agents, and API access — with no usage limits. Dify also offers a free Sandbox tier on their cloud platform with 200 message credits.

What's the difference between Dify and LangChain?#

Dify is a visual platform for building LLM apps with a UI, while LangChain is a Python/JavaScript framework for developers who prefer writing code. Dify is better for rapid prototyping and non-technical users; LangChain offers more flexibility for complex custom implementations. Many teams use both — prototyping in Dify and building production systems with LangChain.

Can I connect Dify to any AI model?#

Yes. Dify supports all major providers natively (OpenAI, Anthropic, Google, etc.) and any OpenAI-compatible API endpoint. Using Crazyrouter as your API provider gives you access to 300+ models through a single configuration.

How do I deploy a Dify chatbot on my website?#

Dify provides multiple deployment options: an embeddable iframe, a JavaScript widget, a shareable URL, and a full REST API. After creating your app, go to Publish → Embed in Website and copy the code snippet into your HTML.

Is Dify suitable for production use?#

Yes. Dify is used in production by thousands of companies. The self-hosted version gives you full control over data, scaling, and security. For high-availability setups, you can run multiple API server instances behind a load balancer with shared PostgreSQL and Redis.

How does Dify handle data privacy?#

When self-hosted, all data stays on your infrastructure — Dify doesn't send any data to external servers (except to the AI model providers you configure). This makes it suitable for enterprises with strict data privacy requirements. You control the vector database, conversation logs, and uploaded documents.

Can I use Dify with local/open-source models?#

Yes. Dify supports Ollama, LocalAI, and any OpenAI-compatible local inference server. You can run Llama 3, Mistral, or other open-source models locally and connect them to Dify without any cloud API costs.
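
One pitfall when Dify itself runs in Docker: `localhost` inside a container refers to the container, not your machine, so the Ollama base URL you enter in Dify usually needs `host.docker.internal` (available on Docker Desktop; on Linux you may need to map it via `extra_hosts` in the compose file). A small helper capturing the choice — the port is Ollama's default:

```python
def ollama_base_for_dify(dify_in_docker: bool = True) -> str:
    # Ollama exposes an OpenAI-compatible API under /v1 on port 11434.
    # From inside Dify's containers, "localhost" is the container itself,
    # so the host machine must be addressed as host.docker.internal.
    host = "host.docker.internal" if dify_in_docker else "localhost"
    return f"http://{host}:11434/v1"
```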

Summary#

Dify is the fastest way to go from idea to working AI application in 2026. The visual workflow builder, built-in RAG pipeline, and agent framework handle the heavy lifting, while the open-source model means you're never locked in.

For the best experience, pair self-hosted Dify with Crazyrouter — one API key gives you access to GPT-4o, Claude, Gemini, Llama, and 300+ other models without managing multiple provider accounts. It's the most cost-effective way to power your Dify applications.

Get started: Self-host Dify | Get a Crazyrouter API key
