Quick Start
Using Pre-built Image
Pull and run the latest ChartDB image from GitHub Container Registry. Once the container is running, open http://localhost:8080 in your browser.
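A typical pull-and-run sketch; the image path ghcr.io/chartdb/chartdb is assumed, so verify it against the project README:

```shell
# Pull the latest published image (image path assumed)
docker pull ghcr.io/chartdb/chartdb:latest

# Run it, mapping container port 80 to host port 8080
docker run -d --name chartdb -p 8080:80 ghcr.io/chartdb/chartdb:latest
```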
With AI Capabilities
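A sketch of such a run; the image path and the key value are placeholders:

```shell
# OPENAI_API_KEY enables the AI features; substitute your real key
docker run -d --name chartdb \
  -e OPENAI_API_KEY=sk-your-api-key-here \
  -p 8080:80 ghcr.io/chartdb/chartdb:latest
```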
To enable AI features, provide your OpenAI API key. Replace sk-your-api-key-here with your actual OpenAI API key. AI features include DDL script generation for database migrations and other intelligent operations.

Building Custom Images
Basic Build
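A minimal sketch, assuming the ChartDB repository (with its Dockerfile at the root) is cloned locally:

```shell
git clone https://github.com/chartdb/chartdb.git
cd chartdb

# Build a local image from the repository's Dockerfile
docker build -t chartdb .
```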
Build ChartDB from source.

Build with AI Support
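A sketch using the VITE_-prefixed build argument described in the Environment Variables section; the key shown is a placeholder:

```shell
docker build -t chartdb \
  --build-arg VITE_OPENAI_API_KEY=sk-your-api-key-here .
```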
Build with your OpenAI API key baked into the image.

Build with Custom Inference Server
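A sketch using the endpoint and model-name build arguments; the endpoint and model values are examples:

```shell
docker build -t chartdb \
  --build-arg VITE_OPENAI_API_ENDPOINT=http://localhost:8000/v1 \
  --build-arg VITE_LLM_MODEL_NAME=Qwen/Qwen2.5-32B-Instruct-AWQ .
```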
ChartDB supports custom LLM inference servers compatible with the OpenAI API format (such as vLLM, LocalAI, or Ollama). You must configure either Option 1 (an OpenAI API key) or Option 2 (a custom endpoint and model name) for AI capabilities to work. Do not mix the two options.
Example: Local vLLM Server
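One way this can look, assuming vLLM is installed on the host and serves an OpenAI-compatible API on port 8000:

```shell
# 1. Start a local vLLM server exposing an OpenAI-compatible API
vllm serve Qwen/Qwen2.5-32B-Instruct-AWQ --port 8000

# 2. Build ChartDB pointed at that server via the host gateway alias
docker build -t chartdb \
  --build-arg VITE_OPENAI_API_ENDPOINT=http://host.docker.internal:8000/v1 \
  --build-arg VITE_LLM_MODEL_NAME=Qwen/Qwen2.5-32B-Instruct-AWQ .

# 3. Run; --add-host lets the container reach the host's vLLM on Linux
docker run -d --add-host=host.docker.internal:host-gateway -p 8080:80 chartdb
```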
Here’s a complete example using a local vLLM server.

Advanced Configuration
Custom Port Mapping
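For example, to serve ChartDB on host port 3000 (image name assumed from a local build):

```shell
# -p HOST:CONTAINER, so the container side stays 80
docker run -d -p 3000:80 chartdb
```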
By default, ChartDB listens on port 80 inside the container. Map it to any port on your host.

Multiple Build Arguments
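A sketch combining several build arguments; all values are placeholders:

```shell
docker build -t chartdb \
  --build-arg VITE_OPENAI_API_KEY=sk-your-api-key-here \
  --build-arg VITE_HIDE_CHARTDB_CLOUD=true \
  --build-arg VITE_DISABLE_ANALYTICS=true .
```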
Combine multiple configuration options.

Runtime Environment Variables
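Equivalently, everything can be supplied at runtime with -e and no VITE_ prefix; a sketch with placeholder values:

```shell
docker run -d -p 8080:80 \
  -e OPENAI_API_KEY=sk-your-api-key-here \
  -e HIDE_CHARTDB_CLOUD=true \
  -e DISABLE_ANALYTICS=true \
  ghcr.io/chartdb/chartdb:latest
```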
Pass all environment variables at runtime.

Docker Architecture
ChartDB uses a multi-stage Docker build.

Build Stage
Uses node:24-alpine to build the Vite application with all dependencies and environment variables.

- Small image size: Only production files are included
- Fast startup: Nginx serves pre-built static assets
- Dynamic configuration: Environment variables can be changed without rebuilding
- Production-ready: Optimized for performance and security
Environment Variables
The following environment variables can be set at both build time (with --build-arg) and runtime (with -e):
| Variable | Description | Example |
|---|---|---|
| OPENAI_API_KEY | OpenAI API key for AI features | sk-proj-... |
| OPENAI_API_ENDPOINT | Custom LLM endpoint URL | http://localhost:8000/v1 |
| LLM_MODEL_NAME | Custom LLM model name | Qwen/Qwen2.5-32B-Instruct-AWQ |
| HIDE_CHARTDB_CLOUD | Hide ChartDB Cloud references | true |
| DISABLE_ANALYTICS | Disable Fathom Analytics | true |
Build-time variables are prefixed with VITE_ (e.g., VITE_OPENAI_API_KEY), while runtime variables use the same name without the prefix.

Docker Compose
For easier management, use Docker Compose with a docker-compose.yml.
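A minimal docker-compose.yml sketch; the image path is assumed, and ports and variables should be adjusted as needed:

```yaml
services:
  chartdb:
    image: ghcr.io/chartdb/chartdb:latest
    container_name: chartdb
    ports:
      - "8080:80"
    environment:
      - OPENAI_API_KEY=sk-your-api-key-here
    restart: unless-stopped
```

Start it with docker compose up -d.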
Updating
Pull Latest Image
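A sketch of the usual update cycle with plain Docker (container and image names assumed from the earlier run):

```shell
docker pull ghcr.io/chartdb/chartdb:latest

# Recreate the container on the new image
docker stop chartdb && docker rm chartdb
docker run -d --name chartdb -p 8080:80 ghcr.io/chartdb/chartdb:latest
```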
With Docker Compose
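With Compose, pulling and recreating is two commands, assuming a compose file that defines the chartdb service:

```shell
docker compose pull
docker compose up -d
```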
Troubleshooting
Container starts but page doesn't load
Check if port 8080 is already in use:
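One way to check on Linux or macOS (lsof may require elevated privileges):

```shell
# Show any process bound to port 8080
lsof -i :8080
# or, on Linux
ss -ltnp | grep 8080
```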
AI features not working
Verify your environment variables, and ensure you’ve configured either OPENAI_API_KEY or both OPENAI_API_ENDPOINT and LLM_MODEL_NAME.
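A quick check inside the running container (the container name chartdb is an assumption):

```shell
docker exec chartdb env | grep -E 'OPENAI|LLM'
```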
Cannot connect to custom inference server
If your inference server is running on localhost, use Docker networking so the container can reach the host.
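For example, pointing the container at the host via host.docker.internal (on Linux the mapping must be added explicitly; endpoint and model values are examples):

```shell
docker run -d -p 8080:80 \
  --add-host=host.docker.internal:host-gateway \
  -e OPENAI_API_ENDPOINT=http://host.docker.internal:8000/v1 \
  -e LLM_MODEL_NAME=Qwen/Qwen2.5-32B-Instruct-AWQ \
  chartdb
```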
Permission denied when building
Ensure the entrypoint script has execute permissions:
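For example (the script path is illustrative; use the entrypoint script's actual location in the repo):

```shell
chmod +x ./entrypoint.sh
# Persist the executable bit in version control
git update-index --chmod=+x ./entrypoint.sh
```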
Next Steps
- Configuration: Learn about all available configuration options
- AI Setup: Configure AI features and custom LLM providers
