17 changes: 17 additions & 0 deletions services/open-webui/.env.example
@@ -0,0 +1,17 @@
# Tailscale
TS_AUTHKEY=tskey-auth-your-key-here
TS_HOSTNAME=open-webui

# Open WebUI
# Point to your Ollama instance - can be local or remote
# Examples:
# Local Ollama on same host: http://host.docker.internal:11434
# Remote Ollama on LAN: http://192.168.1.x:11434
# Remote Ollama over Tailnet: http://100.x.x.x:11434
# Use OpenAI API instead: leave blank and configure in the UI
OLLAMA_BASE_URL=http://host.docker.internal:11434
WEBUI_SECRET_KEY=change-me-to-a-random-secret
TZ=America/New_York

# Optional: uncomment port in compose.yaml to expose on LAN
SERVICEPORT=8080
46 changes: 46 additions & 0 deletions services/open-webui/README.md
@@ -0,0 +1,46 @@
# Open WebUI with Tailscale Sidecar Configuration

This Docker Compose configuration sets up [Open WebUI](https://openwebui.com/) with Tailscale as a sidecar container to keep the app reachable over your Tailnet.

## Open WebUI

[Open WebUI](https://openwebui.com/) is a feature-rich, self-hosted AI platform that provides a ChatGPT-style interface for interacting with local and cloud-based AI models. It supports Ollama and any OpenAI-compatible API. Pairing it with Tailscale means your private AI interface is securely accessible from any of your devices — phone, laptop, or otherwise — without exposing it to the public internet.

## Configuration Overview

In this setup, the `tailscale-open-webui` service runs Tailscale, which manages secure networking for Open WebUI. The `app-open-webui` service joins the sidecar's network namespace via Docker's `network_mode: service:` setting, which keeps the app Tailnet-only unless you deliberately publish ports.
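
The namespace sharing can be confirmed once the stack is running. A quick check (container names as defined in `compose.yaml`; the exact output format may vary by Docker version):

```shell
# The app container has no network stack of its own: its NetworkMode
# points at the tailscale-open-webui container's namespace.
docker inspect -f '{{.HostConfig.NetworkMode}}' app-open-webui
# Expect a value like "container:<id>", where <id> is the sidecar container
```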

## Prerequisites

- A Tailscale account with an auth key ([generate one here](https://login.tailscale.com/admin/settings/keys))
- MagicDNS and HTTPS enabled in your [Tailscale admin console](https://login.tailscale.com/admin/dns)
- Docker and Docker Compose installed
- An AI backend — Ollama running locally, on another machine, or an OpenAI-compatible API

## Setup

1. Copy `.env.example` to `.env` and fill in your values
2. Set `OLLAMA_BASE_URL` to point at your Ollama instance (see `.env.example` for examples), or leave it blank and configure a different API provider in the Open WebUI settings after first launch
3. Copy `serve.json` into `ts/config/serve.json` — it is mounted into the Tailscale container
4. Pre-create the data directory to avoid Docker creating it as root-owned: `mkdir -p ./data`
5. Run `docker compose config` to validate before deploying
6. Start the stack: `docker compose up -d`
7. On first launch, navigate to `https://<TS_HOSTNAME>.<tailnet>.ts.net` and create your admin account — the server is open until the first user registers
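
The numbered steps can be condensed into a shell session (a sketch; assumes you are in `services/open-webui/` and that `serve.json` ships alongside the compose file):

```shell
cp .env.example .env
"${EDITOR:-vi}" .env              # set TS_AUTHKEY, OLLAMA_BASE_URL, WEBUI_SECRET_KEY

mkdir -p ts/config ts/state data  # pre-create so Docker doesn't create them root-owned
cp serve.json ts/config/serve.json

docker compose config --quiet     # validate interpolation before deploying
docker compose up -d
```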

## Gotchas

- **First-run security**: Create your admin account immediately after deployment
- **WebSocket support**: Open WebUI requires WebSocket connections — ensure nothing in your network path blocks them
- **Ollama on the same host**: Use `http://host.docker.internal:11434` as the `OLLAMA_BASE_URL` to reach Ollama running on the Docker host. On Linux, `host.docker.internal` only resolves if you map it via `extra_hosts: ["host.docker.internal:host-gateway"]` on the Tailscale sidecar service
- **Ollama over Tailnet**: If Ollama runs on a different machine, use its Tailscale IP (e.g. `http://100.x.x.x:11434`)
- **No Ollama**: Leave `OLLAMA_BASE_URL` blank and configure OpenAI or another provider in the UI after first launch
- **Health check**: The compose uses `tailscale status` for the health check. The `41234/healthz` endpoint is not available in userspace mode (`TS_USERSPACE=true`)
- **MagicDNS**: `TS_CERT_DOMAIN` in `serve.json` is populated automatically by Tailscale at runtime — you do not set it manually
- **LAN access**: Ports are commented out by default. Uncomment the `ports` mapping in `compose.yaml` and set `SERVICEPORT` if you also want LAN access alongside Tailnet access. Because the app shares the sidecar's network namespace, the port must be published by the Tailscale container, not the app container
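
Before relying on the UI, it can help to confirm that the address in `OLLAMA_BASE_URL` is actually reachable; Ollama's `/api/version` endpoint works as a lightweight liveness probe (substitute your own host and port):

```shell
# Replace with the host/port you set in OLLAMA_BASE_URL
curl -fsS http://127.0.0.1:11434/api/version
# A small JSON body with a "version" field means Ollama is answering
```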

## Resources

- [Open WebUI Documentation](https://docs.openwebui.com/)
- [Open WebUI GitHub](https://github.com/open-webui/open-webui)
- [Tailscale Serve docs](https://tailscale.com/kb/1242/tailscale-serve)
- [Tailscale Docker guide](https://tailscale.com/blog/docker-tailscale-guide)
42 changes: 42 additions & 0 deletions services/open-webui/compose.yaml
@@ -0,0 +1,42 @@
services:
  tailscale-open-webui:
    image: tailscale/tailscale:latest
    container_name: tailscale-open-webui
    hostname: ${TS_HOSTNAME}
    restart: unless-stopped
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_SERVE_CONFIG=/config/serve.json
      - TS_USERSPACE=true
      - TS_EXTRA_ARGS=--advertise-tags=tag:container
    volumes:
      - ./ts/state:/var/lib/tailscale
      - ./ts/config:/config
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    # extra_hosts:
    #   - host.docker.internal:host-gateway # uncomment on Linux if OLLAMA_BASE_URL uses host.docker.internal
    healthcheck:
      test: ["CMD", "tailscale", "status"]
      interval: 1m
      timeout: 10s
      retries: 3
      start_period: 10s
    # ports:
    #   - ${SERVICEPORT}:8080 # uncomment for LAN access in addition to Tailnet; must live here, not on the app container, because the app shares this network namespace

  app-open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: app-open-webui
    restart: unless-stopped
    depends_on:
      tailscale-open-webui:
        condition: service_healthy
    network_mode: service:tailscale-open-webui
    environment:
      - OLLAMA_BASE_URL=${OLLAMA_BASE_URL}
      - WEBUI_SECRET_KEY=${WEBUI_SECRET_KEY}
      - TZ=${TZ}
    volumes:
      - ./data:/app/backend/data
19 changes: 19 additions & 0 deletions services/open-webui/ts/config/serve.json
@@ -0,0 +1,19 @@
{
  "TCP": {
    "443": {
      "HTTPS": true
    }
  },
  "Web": {
    "${TS_CERT_DOMAIN}:443": {
      "Handlers": {
        "/": {
          "Proxy": "http://127.0.0.1:8080"
        }
      }
    }
  },
  "AllowFunnel": {
    "${TS_CERT_DOMAIN}:443": false
  }
}