🖥️ Desktop App coming soon! Native apps for macOS, Windows & Linux in development.
Open Source & Self-Hosted

Your AI, Your Rules

v0.2.3

Privacy-first AI chat interface. Run locally with Ollama or connect to OpenAI, Anthropic, and 9+ providers. Zero telemetry. Zero tracking.

npx libre-webui

Requires Node.js 18+ and Ollama for local AI
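
To chat with a local model, pull one with the Ollama CLI before launching; the model name below is just an example:

# grab a local model with Ollama, then launch the UI
ollama pull llama3
npx libre-webui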

[Screenshot: Libre WebUI Interface]
🔒 Zero Telemetry
🏠 Self-Hosted
📜 Apache 2.0
🔌 Plugin System

Everything You Need

A complete AI chat solution that respects your privacy

🤖 Local & Cloud AI

Run models locally with Ollama or connect to OpenAI, Anthropic, Groq, Gemini, Mistral, and more. Your choice.

📄 Document Chat (RAG)

Upload PDFs, docs, and text files. Ask questions about your documents with semantic search and vector embeddings.

🎨 Interactive Artifacts

Render HTML, SVG, and React components directly in chat. Live preview with full-screen mode.

🔐 AES-256 Encryption

Enterprise-grade encryption for all your data. Chat history, documents, and settings are encrypted at rest.

🎭 Custom Personas

Create AI personalities with unique behaviors and system prompts. Import/export personas as JSON.

🔊 Text-to-Speech

Listen to AI responses with multiple voice options. Supports browser TTS and ElevenLabs integration.

⌨️ Keyboard Shortcuts

VS Code-inspired shortcuts for power users. Navigate, toggle settings, and control everything from the keyboard.

👥 Multi-User Support

Role-based access control with SSO support. GitHub and Hugging Face OAuth built-in.

Connect to Any Provider

One interface, unlimited possibilities

Ollama: Local models
OpenAI: GPT-4o, o1, o3
Anthropic: Claude 4, Opus
Groq: Llama, Mixtral
Google: Gemini Pro
Mistral: Mistral Large
OpenRouter: 400+ models
+ Custom: Any OpenAI-compatible API

Get Started in Seconds

Choose your preferred installation method

Recommended

npx (One Command)

npx libre-webui

Runs instantly. No installation required.

npm (Global Install)

npm install -g libre-webui
libre-webui

Install once, run anywhere.

Docker

docker run -p 8080:8080 libre-webui/libre-webui

Containerized deployment.
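
If you want chat history and settings to survive container restarts, here is a minimal sketch with a named volume; the /app/data mount path is an assumption, so check the image documentation for the actual data directory:

# same image and port, detached, with a named volume for app data
docker run -d -p 8080:8080 \
  -v libre-webui-data:/app/data \
  libre-webui/libre-webui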

Create Custom Plugins

Connect any OpenAI-compatible LLM with a simple JSON file

Available Plugins

Official plugins from the Libre WebUI repository.

📄 custom-model.json
{
  "id": "custom-model",
  "name": "Custom Model",
  "type": "completion",
  "endpoint": "http://localhost:8000/v1/chat/completions",
  "auth": {
    "header": "Authorization",
    "prefix": "Bearer ",
    "key_env": "CUSTOM_MODEL_API_KEY"
  },
  "model_map": [
    "my-fine-tuned-llama"
  ]
}

Create Your Own Plugin

1. Start Your LLM Server

Run any OpenAI-compatible server: llama.cpp, vLLM, Ollama, or a custom FastAPI server.
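
For example, llama.cpp's llama-server binary exposes an OpenAI-compatible chat endpoint; the model path is illustrative, and the port matches the sample plugin above:

# llama.cpp: serve a local GGUF model behind an OpenAI-compatible API
./llama-server -m ./models/my-fine-tuned-llama.gguf --port 8000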

2. Create Plugin JSON

Define your endpoint, authentication, and available models in a simple JSON file.

3. Upload to Libre WebUI

Go to Settings > Providers, upload your plugin, and enter your API key.

4. Start Chatting

Your custom models appear in the model selector. Full privacy, full control.
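
Before pointing the UI at a new plugin, it's worth smoke-testing the server directly. This request reuses the values from custom-model.json above (endpoint, auth header and prefix, environment variable, and model name); substitute your real key:

# verify the endpoint the plugin will call, using the same auth scheme
export CUSTOM_MODEL_API_KEY="your-key-here"
curl http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer $CUSTOM_MODEL_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "my-fine-tuned-llama", "messages": [{"role": "user", "content": "Hello"}]}'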

Plugin Fields Reference

id: Unique identifier (lowercase, hyphens allowed)
name: Display name shown in the UI
type: "completion" for chat, "tts" for text-to-speech
endpoint: API URL (e.g., /v1/chat/completions)
auth.header: Auth header name (Authorization, x-api-key)
auth.prefix: Key prefix ("Bearer " or empty)
auth.key_env: Environment variable for your API key
model_map: Array of available model identifiers

Ready to Own Your AI?

Join thousands of users who value privacy and control.