mirror of
https://github.com/altstackHQ/altstack-data.git
synced 2026-04-19 11:53:24 +02:00
Initialize public data and docs repository
This commit is contained in:
33
docs/app/concepts/_meta.ts
Normal file
@@ -0,0 +1,33 @@
import type { MetaRecord } from 'nextra'

const meta: MetaRecord = {
  'docker-basics': {
    title: 'Docker in 10 Minutes',
  },
  networking: {
    title: 'Networking for Self-Hosters',
  },
  'reverse-proxies': {
    title: 'Reverse Proxies Explained',
  },
  'ssl-tls': {
    title: 'SSL/TLS for Self-Hosters',
  },
  'env-secrets': {
    title: 'Environment Variables & Secrets',
  },
  monitoring: {
    title: 'Monitoring & Observability',
  },
  updates: {
    title: 'Updating & Maintaining Containers',
  },
  backups: {
    title: 'Backups That Actually Work',
  },
  hardware: {
    title: 'Hardware & VPS Sizing',
  },
}

export default meta
103
docs/app/concepts/backups/page.mdx
Normal file
@@ -0,0 +1,103 @@
---
title: Backups That Actually Work
description: "How to back up your self-hosted tools. Docker volumes, database dumps, and automated backup scripts that run while you sleep."
---

# Backups That Actually Work

Self-hosting means *you're* responsible for your data. There's no "Contact Support to restore from backup." **You are the support.**

The good news: backing up Docker-based tools is simple once you set up a system.

## What to Back Up

| Component | Where It Lives | How to Back Up |
|---|---|---|
| **Docker volumes** | `/var/lib/docker/volumes/` | Volume export or rsync |
| **Databases (Postgres)** | Inside a Docker container | `pg_dump` |
| **Config files** | Your `docker-compose.yml` and `.env` | Git or file copy |

> ⚠️ **Heads Up:** `docker-compose.yml` files are easy to recreate. Database data is not. Prioritize database backups above everything else.

## Method 1: Database Dumps (Essential)

Most self-hosted tools use PostgreSQL. Here's how to dump it:

```bash
# Dump a Postgres database running in a container
docker exec your-db-container \
  pg_dump -U postgres your_database > backup_$(date +%Y%m%d).sql
```

To restore:

```bash
cat backup_20260218.sql | docker exec -i your-db-container \
  psql -U postgres your_database
```

## Method 2: Volume Backup

For tools that store data in Docker volumes:

```bash
# Find your volumes
docker volume ls

# Back up a volume to a tar file
docker run --rm \
  -v my_volume:/data \
  -v $(pwd)/backups:/backup \
  alpine tar czf /backup/my_volume_backup.tar.gz /data
```

## Method 3: Automated Script

Create a backup script that runs daily via cron:

```bash
#!/bin/bash
# /opt/backup.sh

BACKUP_DIR="/opt/backups"
DATE=$(date +%Y%m%d_%H%M)
mkdir -p "$BACKUP_DIR"

# Dump Postgres databases
docker exec supabase-db pg_dump -U postgres postgres > "$BACKUP_DIR/supabase_$DATE.sql"
docker exec plausible_db pg_dump -U postgres plausible_db > "$BACKUP_DIR/plausible_$DATE.sql"

# Clean backups older than 7 days
find "$BACKUP_DIR" -name "*.sql" -mtime +7 -delete

echo "Backup complete: $DATE"
```

Add it to cron:

```bash
# Run at 3 AM every day
crontab -e
# Add this line:
0 3 * * * /opt/backup.sh >> /var/log/backup.log 2>&1
```

## The 3-2-1 Rule

For serious setups, follow the **3-2-1 backup rule**:

- **3** copies of your data
- **2** different storage types (local + remote)
- **1** offsite copy (rsync to another server, or upload to B2/S3)

```bash
# Sync backups to a remote server
rsync -avz /opt/backups/ user@backup-server:/backups/
```
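A backup you've never tried to read is just hope. Here's a minimal sanity check, sketched as a shell function. The `.sql` naming matches the backup script above, but the `check_backup` helper and its 1 KB floor are illustrative, not standard tooling (the demo runs against a throwaway temp directory; in practice, point it at `/opt/backups`):

```shell
# Succeed only if the newest .sql dump in $1 exists and isn't trivially small
check_backup() {
  latest=$(ls -t "$1"/*.sql 2>/dev/null | head -1)
  if [ -z "$latest" ]; then
    echo "FAIL: no dumps found in $1"
    return 1
  fi
  size=$(wc -c < "$latest")
  if [ "$size" -lt 1024 ]; then
    echo "FAIL: $latest is only ${size} bytes"
    return 1
  fi
  echo "OK: $latest (${size} bytes)"
}

# Demo against a throwaway directory containing a fake 2 KB dump
demo_dir=$(mktemp -d)
head -c 2048 /dev/urandom > "$demo_dir/app_20260218.sql"
check_backup "$demo_dir"
```

For real confidence, also restore a dump into a throwaway Postgres container every so often and run a query against it; a dump that can't be restored is no backup at all.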

## Next Steps

You now have the four foundational concepts: Docker, reverse proxies, SSL, and backups. Time to build:

→ [Deploy Guides](/deploy) — 65+ tools ready to deploy
→ [The Bootstrapper Stack](/stacks/bootstrapper) — A complete SaaS toolkit
127
docs/app/concepts/docker-basics/page.mdx
Normal file
@@ -0,0 +1,127 @@
---
title: Understanding Docker in 10 Minutes
description: "Docker explained for self-hosters. No CS degree required. Containers, images, volumes, and Docker Compose — the only concepts you actually need."
---

# Understanding Docker in 10 Minutes

Docker is the reason self-hosting went from "sysadmin hobby" to "anyone can do it." It packages software into neat, isolated containers that run the same everywhere.

You don't need to become a Docker expert. You need to understand **four concepts**.

## Concept 1: Images

An **image** is a snapshot of software — pre-built, pre-configured, ready to run. Think of it like an `.iso` file, but for apps.

```bash
# Download the Plausible Analytics image
docker pull plausible/analytics:latest
```

Images live on [Docker Hub](https://hub.docker.com) — a public registry of 100,000+ images. When our deploy guides say `image: plausible/analytics:latest`, they're pulling from here.

## Concept 2: Containers

A **container** is a running instance of an image. Image = blueprint. Container = the actual building.

```bash
# Start a container from an image
docker run -d --name my-plausible plausible/analytics:latest

# See running containers
docker ps

# Stop a container
docker stop my-plausible

# Remove a container (data in volumes is safe)
docker rm my-plausible
```

> 💡 **Why?** Containers are isolated from each other and from your host system. Breaking one container doesn't break anything else.

## Concept 3: Volumes

**Volumes** store your data *outside* the container. This is critical because containers are disposable — when you update an image, you destroy the old container and create a new one. Volumes survive this process.

```bash
# Mount a volume called "plausible-data"
docker run -v plausible-data:/var/lib/clickhouse plausible/analytics
```

Without volumes, your data dies when the container dies. **Always use volumes.**

```bash
# List all volumes
docker volume ls

# Back up a volume (copy to a local tar archive)
docker run --rm -v plausible-data:/data -v $(pwd):/backup alpine \
  tar czf /backup/plausible-backup.tar.gz /data
```

## Concept 4: Docker Compose

This is the big one. **Docker Compose** lets you define multi-container setups in a single YAML file. Most real-world tools need multiple containers (app + database + cache), and Docker Compose handles that.

```yaml
# docker-compose.yml
services:
  app:
    image: plausible/analytics:latest
    ports:
      - "8000:8000"
    depends_on:
      - db

  db:
    image: postgres:14-alpine
    volumes:
      - db_data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: supersecret

volumes:
  db_data:
```

(The top-level `version:` key is obsolete — Compose v2 ignores it, so it's omitted here.)

Then run it:

```bash
# Start everything
docker compose up -d

# See logs
docker compose logs -f

# Stop everything
docker compose down

# Update to the latest images
docker compose pull && docker compose up -d
```

That's the pattern for **every single deploy guide** in these docs:

1. Copy the `docker-compose.yml`
2. Tweak the environment variables
3. Run `docker compose up -d`
4. Done.

## The 5 Commands You'll Actually Use

| Command | What it does |
|---|---|
| `docker compose up -d` | Start all services in the background |
| `docker compose down` | Stop all services |
| `docker compose logs -f` | Watch live logs (Ctrl+C to exit) |
| `docker compose pull` | Download the latest images |
| `docker ps` | List running containers |

That's it. That's Docker for self-hosters.

## Next Steps

→ [Reverse Proxies Explained](/concepts/reverse-proxies) — How to access your tools via `app.yourdomain.com`
→ [Your First Deployment](/quick-start/first-deployment) — Put this knowledge to use
153
docs/app/concepts/env-secrets/page.mdx
Normal file
@@ -0,0 +1,153 @@
---
title: "Environment Variables & Secrets"
description: "How to manage .env files, Docker secrets, and sensitive configuration for self-hosted tools. Stop hardcoding passwords."
---

# Environment Variables & Secrets

Every self-hosted tool needs configuration: database passwords, API keys, admin emails. The **wrong** way is hardcoding them in `docker-compose.yml`. The **right** way is environment variables.

## The Basics: `.env` Files

Docker Compose automatically reads a `.env` file in the same directory as your `docker-compose.yml`:

```bash
# .env
POSTGRES_PASSWORD=super_secret_password_123
ADMIN_EMAIL=you@yourdomain.com
SECRET_KEY=a1b2c3d4e5f6g7h8i9j0
```

```yaml
# docker-compose.yml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
```

Docker Compose substitutes `${POSTGRES_PASSWORD}` with the value from `.env`. Your secrets stay out of your Compose file.

> ⚠️ **Critical:** Add `.env` to your `.gitignore` immediately. Never commit secrets to Git.

```bash
echo ".env" >> .gitignore
```

## Generating Strong Passwords

Don't use `password123`. Generate proper secrets:

```bash
# Generate 32 bytes of randomness, base64-encoded
openssl rand -base64 32

# Generate a hex string (great for SECRET_KEY)
openssl rand -hex 32

# Generate a URL-safe string
python3 -c "import secrets; print(secrets.token_urlsafe(32))"
```

### Template for Common Tools

Most self-hosted tools need similar variables. Here's a reusable `.env` template:

```bash
# .env template — generate all values before first run

# Database
POSTGRES_USER=app
POSTGRES_PASSWORD=        # openssl rand -base64 32
POSTGRES_DB=app_db

# App
SECRET_KEY=               # openssl rand -hex 32
ADMIN_EMAIL=you@yourdomain.com
ADMIN_PASSWORD=           # openssl rand -base64 24
BASE_URL=https://app.yourdomain.com

# SMTP (for email notifications)
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=you@gmail.com
SMTP_PASSWORD=            # Use an app-specific password
```

## Default Values (Fallbacks)

Use the `:-` syntax for non-sensitive defaults:

```yaml
environment:
  NODE_ENV: ${NODE_ENV:-production}        # Defaults to "production"
  LOG_LEVEL: ${LOG_LEVEL:-info}            # Defaults to "info"
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}  # No default — MUST be set
```
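Compose borrows this syntax from POSIX shell parameter expansion, so you can preview the behavior in any shell before trusting it in a Compose file:

```shell
# ${VAR:-default} falls back when VAR is unset OR empty;
# ${VAR-default} (no colon) falls back only when VAR is unset.
unset LOG_LEVEL
echo "${LOG_LEVEL:-info}"    # info (unset, fallback used)

LOG_LEVEL=""
echo "${LOG_LEVEL:-info}"    # info (empty still triggers the fallback)
echo "${LOG_LEVEL-info}"     # empty line (set-but-empty wins without the colon)

LOG_LEVEL=debug
echo "${LOG_LEVEL:-info}"    # debug (real value wins)
```

Compose supports both forms, which is why `:-` is the safer default: an accidentally blank line in `.env` still gets the fallback.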

## Docker Secrets (Advanced)

For production setups, Docker secrets are more secure than environment variables: they're mounted as files inside the container rather than exposed in the process environment and `docker inspect` output (and in Swarm mode they're also encrypted at rest):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt
```

```bash
# Create the secret file
mkdir -p secrets
openssl rand -base64 32 > secrets/db_password.txt
chmod 600 secrets/db_password.txt
```

> 💡 Not all images support `_FILE`-suffixed variables. Check the image's documentation on Docker Hub.

## Multiple Environments

Keep separate `.env` files for different environments:

```bash
.env          # Production (default)
.env.local    # Local development
.env.staging  # Staging server
```

Use them explicitly:

```bash
# Use a specific env file
docker compose --env-file .env.staging up -d
```

## Security Checklist

- [ ] `.env` is in `.gitignore`
- [ ] No secrets are hardcoded in `docker-compose.yml`
- [ ] All passwords are randomly generated (32+ characters)
- [ ] Database ports are NOT exposed to the internet
- [ ] Secret files have `chmod 600` permissions
- [ ] Default passwords from docs have been changed

## Common Mistakes

**"Variable is empty in the container"** → Check for typos. Variable names are case-sensitive: `POSTGRES_password` ≠ `POSTGRES_PASSWORD`. Run `docker compose config` to see the fully resolved file with every substitution applied.

**"Changes to .env aren't applying"** → You need to recreate the container: `docker compose up -d --force-recreate`.

**"I committed my .env to Git"** → Even after removing the file, it lives on in Git history. Rotate ALL secrets immediately, then scrub history with `git filter-repo` or BFG Repo-Cleaner.

## Next Steps

→ [Monitoring & Observability](/concepts/monitoring) — Know when things break
→ [Docker in 10 Minutes](/concepts/docker-basics) — Review the fundamentals
145
docs/app/concepts/hardware/page.mdx
Normal file
@@ -0,0 +1,145 @@
---
title: "Hardware & VPS Sizing"
description: "How much RAM, CPU, and disk you actually need for self-hosting. VPS provider comparison and scaling strategies."
---

# Hardware & VPS Sizing

The #1 question new self-hosters ask: **"What server do I need?"**

Short answer: less than you think to start, more than you think once you're hooked.

## Quick Sizing Guide

### How Much RAM Do I Need?

| Setup | RAM | What You Can Run |
|---|---|---|
| **Starter** | 2 GB | 1–2 lightweight tools (Uptime Kuma, Plausible) |
| **Hobbyist** | 4 GB | 3–5 tools + a database + reverse proxy |
| **Power User** | 8 GB | 8–12 tools + multiple databases |
| **Homelab** | 16 GB | Everything + small AI models |
| **AI Workloads** | 32+ GB | LLMs, image generation, video AI |

> 💡 **Start with 4 GB.** You can always upgrade. Most VPS providers let you resize with minimal downtime.

### CPU Guidelines

| Workload | vCPUs Needed |
|---|---|
| Lightweight tools (Uptime Kuma, PocketBase) | 1 vCPU |
| Web apps (Plausible, Outline, n8n) | 2 vCPUs |
| Heavy apps (PostHog, Supabase, Metabase) | 4 vCPUs |
| AI inference (Ollama, Stable Diffusion) | 4+ vCPUs + GPU |

### Disk Space

| Component | Typical Usage |
|---|---|
| Base OS + Docker | 5–8 GB |
| Each Docker image | 100 MB – 2 GB |
| PostgreSQL database (small app) | 500 MB – 5 GB |
| Log files (unmanaged) | 1–10 GB |
| AI models (per model) | 4–70 GB |

**Minimum recommended:** 50 GB SSD.
**Comfortable:** 80–160 GB SSD.
**AI workloads:** 200+ GB NVMe.

## VPS Provider Comparison

| Provider | Starting At | Pros | Best For |
|---|---|---|---|
| [**DigitalOcean**](https://m.do.co/c/2ed27757a361) | $6/mo (1 GB) | Simple UI, great docs, predictable pricing | Beginners |
| **Hetzner** | €3.79/mo (2 GB) | Best price-to-performance in the EU | Power users, EU hosting |
| **Contabo** | €5.99/mo (4 GB) | Cheapest for RAM-heavy setups | Budget homelab |
| **Linode (Akamai)** | $5/mo (1 GB) | Reliable, good network | Small projects |
| **Vultr** | $5/mo (1 GB) | Global locations, hourly billing | Testing and experimentation |
| **Oracle Cloud** | Free (4 vCPUs, 24 GB ARM) | Unbeatable free tier | Zero-budget hosting |
| **Home Server** | One-time cost | Full control, unlimited bandwidth | Privacy maximalists |

> 🏆 **Our Pick:** [DigitalOcean](https://m.do.co/c/2ed27757a361) for beginners (simple, reliable, [$200 free credit](https://m.do.co/c/2ed27757a361)). **Hetzner** for best value. **Oracle Cloud free tier** if you want to pay nothing.

## Real-World Stack Sizing

Here's what typical AltStack setups need:

### The Bootstrapper Stack (4 GB RAM)
- Coolify (deployment platform)
- Plausible (analytics)
- Uptime Kuma (monitoring)
- Listmonk (newsletters)
- Caddy (reverse proxy)

### The Privacy Stack (4 GB RAM)
- Vaultwarden (passwords)
- Jitsi Meet (video calls)
- Mattermost (messaging)
- Caddy (reverse proxy)

### The AI Stack (16–32 GB RAM)
- Ollama (LLM inference)
- Stable Diffusion (image generation)
- TabbyML (code completion)
- Continue.dev (AI coding)

## Scaling Strategies

### Vertical Scaling (Bigger Server)

The simplest approach. Just resize your VPS:

- **DigitalOcean:** Resize the droplet (takes ~1 minute)
- **Hetzner:** Rescale the server (may require a reboot)
- **Home server:** Add RAM sticks

### Horizontal Scaling (More Servers)

When one server isn't enough:

```
Server 1: Databases (Postgres, Redis)
Server 2: Application containers
Server 3: AI workloads (GPU)
```

Connect them with a private network (most VPS providers offer this for free) or a VPN like WireGuard.

### The "Start Small" Strategy

1. **Month 1:** $6/mo droplet (1 GB) — deploy 1–2 tools
2. **Month 3:** Resize to $12/mo (2 GB) — add more tools
3. **Month 6:** Resize to $24/mo (4 GB) — run your full stack
4. **Month 12+:** Add a second server, or move to Hetzner for better value

## Monitoring Your Resources

Always know how much headroom you have:

```bash
# Quick resource check
free -h    # RAM usage
df -h      # Disk usage
nproc      # CPU cores
uptime     # Load average

# Docker resource usage
docker stats      # Live container metrics
docker system df  # Docker disk usage
```

## Red Flags

🚩 **RAM constantly above 90%** → Resize, or move a service to another server.

🚩 **Disk above 80%** → Clean up Docker images (`docker system prune -f`) or resize the disk.

🚩 **CPU at 100% for extended periods** → Find the culprit container with `docker stats`.

🚩 **Swap usage above 1 GB** → You need more RAM. Swap is a band-aid, not a solution.
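These thresholds can be folded into one quick headroom report. A sketch, assuming a Linux host with `free` and `df` available (the script itself and its exact cutoffs are illustrative, not part of any tool):

```shell
# Headroom report against the red-flag thresholds above.
disk=$(df / | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
mem=$(free | awk '/^Mem:/ { printf "%d", $3 / $2 * 100 }')
swap_kb=$(free | awk '/^Swap:/ { print $3 }')

# Each check prints a flag or an all-clear line
[ "$disk" -gt 80 ] && echo "FLAG: disk at ${disk}%" || echo "disk OK (${disk}%)"
[ "$mem" -gt 90 ] && echo "FLAG: RAM at ${mem}%" || echo "RAM OK (${mem}%)"
[ "$swap_kb" -gt 1048576 ] && echo "FLAG: swap above 1 GB" || echo "swap OK"
```

Run it before and after adding a new tool to see what the deployment actually cost you.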

## Next Steps

→ [Quick Start](/quick-start) — Deploy your first tool
→ [Deploy Guides](/deploy) — Browse 65+ tools
→ [Docker in 10 Minutes](/concepts/docker-basics) — Foundation knowledge
163
docs/app/concepts/monitoring/page.mdx
Normal file
@@ -0,0 +1,163 @@
---
title: "Monitoring & Observability"
description: "Know when things break before your users do. Uptime monitoring, disk alerts, log aggregation, and observability for self-hosters."
---

# Monitoring & Observability

You deployed five tools. They're running great. You go to bed. At 3 AM, the disk fills up, Postgres crashes, and everything dies. You find out at 9 AM when a user emails you.

**Monitoring prevents this.**

## The Three Layers

| Layer | What It Watches | Tool |
|---|---|---|
| **Uptime** | "Is the service responding?" | Uptime Kuma |
| **System** | CPU, RAM, disk, network | Node Exporter + Grafana |
| **Logs** | What's actually happening inside | Docker logs, Dozzle, SigNoz |

You need **at least** the first layer. The other two are for when you get serious.

## Layer 1: Uptime Monitoring (Essential)

[Uptime Kuma](/deploy/uptime-kuma) is the single best starting point for self-hosters. Deploy it first, always.

```yaml
# docker-compose.yml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime_data:/app/data

volumes:
  uptime_data:
```

### What to Monitor

Add a monitor for **every** service you run:

| Type | Target | Check Interval |
|---|---|---|
| HTTP(s) | `https://plausible.yourdomain.com` | 60s |
| HTTP(s) | `https://uptime.yourdomain.com` | 60s |
| TCP Port | `localhost:5432` (Postgres) | 120s |
| Docker Container | Container name | 60s |
| DNS | `yourdomain.com` | 300s |

### Notifications

Uptime Kuma supports 90+ notification channels. Set up **at least two**:

- **Email** — for non-urgent alerts
- **Telegram/Discord/Slack** — for instant mobile alerts

> 🔥 **Pro Tip:** Monitor your monitoring. Point a free external ping service (like [UptimeRobot](https://uptimerobot.com)) at your Uptime Kuma instance.

## Layer 2: System Metrics

### Quick Disk Alert Script

The #1 cause of self-hosting outages is **running out of disk space**. This script sends an alert when disk usage exceeds 80%:

```bash
#!/bin/bash
# /opt/scripts/disk-alert.sh

THRESHOLD=80
USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
  echo "⚠️ Disk usage is at ${USAGE}% on $(hostname)" | \
    mail -s "Disk Alert: ${USAGE}%" you@yourdomain.com
fi
```

Add it to cron:

```bash
# Check every hour
0 * * * * /opt/scripts/disk-alert.sh
```

### What to Watch

| Metric | Warning Threshold | Critical Threshold |
|---|---|---|
| Disk usage | 70% | 85% |
| RAM usage | 80% | 95% |
| CPU sustained | 80% for 5 min | 95% for 5 min |
| Container restarts | 3 in 1 hour | 10 in 1 hour |
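The disk-alert pattern extends naturally to the other rows of this table. A RAM version might look like the sketch below, assuming a Linux host where `free` is available; the script path is hypothetical, and the `echo` would be swapped for `mail` or a webhook call in a real alert:

```shell
#!/bin/bash
# /opt/scripts/ram-alert.sh (hypothetical path, mirroring disk-alert.sh)

THRESHOLD=95
# Used-memory percentage from `free`: column 3 (used) over column 2 (total)
USAGE=$(free | awk '/^Mem:/ { printf "%d", $3 / $2 * 100 }')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
  echo "RAM usage is at ${USAGE}% on $(hostname)"
else
  echo "OK: RAM at ${USAGE}%"
fi
```

Schedule it from cron exactly like the disk alert above.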

### Docker Resource Monitoring

Quick commands to check what's eating your resources:

```bash
# Live resource usage per container
docker stats

# Show container sizes (disk)
docker system df -v

# Find large volumes
du -sh /var/lib/docker/volumes/*/
```

## Layer 3: Log Aggregation

Docker captures all stdout/stderr from your containers. Use it:

```bash
# Live logs for a service
docker compose logs -f plausible

# Last 100 lines
docker compose logs --tail=100 plausible

# Logs from the last two hours
docker compose logs --since=2h plausible
```

### Dozzle (Docker Log Viewer)

For a clean web-based log viewer:

```yaml
services:
  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
```

### For Serious Setups: SigNoz

If you need traces, metrics, **and** logs in one place, deploy [SigNoz](/deploy/signoz). It's an open-source Datadog alternative built on OpenTelemetry.

## Maintenance Routine

Set a weekly calendar reminder:

```
☐ Check Uptime Kuma — all green?
☐ Run `docker stats` — anything hogging resources?
☐ Run `df -h` — disk space OK?
☐ Run `docker system prune -f` — clean unused images
☐ Check logs for errors — `docker compose logs --since=168h | grep -i error`
```

## Next Steps

→ [Updating & Maintaining Containers](/concepts/updates) — Keep your tools up to date safely
→ [Backups That Actually Work](/concepts/backups) — Protect your data
→ [Deploy Uptime Kuma](/deploy/uptime-kuma) — Set up monitoring now
160
docs/app/concepts/networking/page.mdx
Normal file
@@ -0,0 +1,160 @@
---
title: "Networking for Self-Hosters"
description: "Ports, DNS, firewalls, and private networks — the networking basics every self-hoster needs to know."
---

# Networking for Self-Hosters

You deployed a tool. It works on `localhost:3000`. You try to access it from your phone. Nothing. Welcome to networking.

This guide covers the **four things** standing between your server and the outside world.

## 1. Ports

Every network service listens on a **port** — a numbered door on your server. Some well-known ones:

| Port | Service |
|---|---|
| `22` | SSH |
| `80` | HTTP |
| `443` | HTTPS |
| `5432` | PostgreSQL |
| `3000–9000` | Where most self-hosted tools live |

When Docker maps `-p 8080:3000`, it's saying: "When traffic hits port 8080 on the host, send it to port 3000 inside the container."

```yaml
# In docker-compose.yml
ports:
  - "8080:3000"  # host:container
```

> ⚠️ **Never expose database ports** (5432, 3306, 27017) to the internet. Keep them internal to Docker networks.

## 2. DNS (Domain Name System)

DNS translates human-readable names to IP addresses:

```
plausible.yourdomain.com → 203.0.113.42
```

### Setting Up DNS Records

In your DNS provider's dashboard (Cloudflare, Namecheap, etc.):

| Type | Name | Value | What it does |
|---|---|---|---|
| **A** | `@` | `203.0.113.42` | Points the root domain to your server |
| **A** | `plausible` | `203.0.113.42` | Points a subdomain to your server |
| **CNAME** | `www` | `yourdomain.com` | Aliases `www` to the root |
| **A** | `*` | `203.0.113.42` | Wildcard — catch-all for any subdomain |

> 💡 **Pro Tip:** A wildcard `*` A record + a Caddy reverse proxy = unlimited subdomains with zero further DNS management. Just add entries to your Caddyfile.

### DNS Propagation

After changing DNS records, it can take **5 minutes to 48 hours** for the change to propagate globally. Use [dnschecker.org](https://dnschecker.org) to verify.

## 3. Firewalls (UFW)

A firewall controls which ports are open to the internet. On Ubuntu/Debian, use **UFW** (Uncomplicated Firewall):

```bash
# Check current status
ufw status

# Allow essential ports BEFORE enabling the firewall
ufw allow 22/tcp   # SSH — DON'T lock yourself out
ufw allow 80/tcp   # HTTP
ufw allow 443/tcp  # HTTPS

# Deny everything else by default
ufw default deny incoming
ufw default allow outgoing

# Enable the firewall
ufw enable
```

### The Golden Rule

Only open three ports to the internet: **22** (SSH), **80** (HTTP), **443** (HTTPS).

Your reverse proxy (Caddy/Nginx) listens on ports 80/443 and routes traffic internally to your containers. Individual tool ports (3000, 8080, etc.) should **never** be exposed publicly.

```
Internet → Port 443 → Caddy → Internal Docker Network → Your Tools
```

### Common Mistakes

**"I can't SSH into my server"** → You blocked port 22 before enabling UFW. Contact your hosting provider for console access.

**"My tool works locally but not remotely"** → Port 80/443 isn't open. Run `ufw allow 80/tcp && ufw allow 443/tcp`.

**"I opened port 8080 and got hacked"** → Never expose app ports directly. Put them behind a reverse proxy instead.

## 4. Docker Networks

Docker creates isolated **networks** for your containers. By default, containers in the same `docker-compose.yml` share a network and can talk to each other by service name:

```yaml
services:
  app:
    image: myapp:latest
    depends_on:
      - db  # Can reach the database at "db:5432"

  db:
    image: postgres:16
    # No "ports:" = not accessible from outside Docker
```

### When to Create Custom Networks

If you need containers from **different** Compose files to communicate (e.g., a shared Caddy reverse proxy):

```yaml
# In your Caddy docker-compose.yml
networks:
  proxy:
    external: true

# In your app's docker-compose.yml
networks:
  default:
    name: proxy
    external: true
```

Create the shared network first:

```bash
docker network create proxy
```

Now all containers on the `proxy` network can reach each other by service name — across different Compose files.

## Quick Reference

```bash
# See what's listening on which port
ss -tlnp

# Test whether a port is open from outside
nc -zv your-server-ip 443

# See Docker networks
docker network ls

# Check DNS resolution
dig plausible.yourdomain.com
nslookup plausible.yourdomain.com
```

## Next Steps

→ [Reverse Proxies Explained](/concepts/reverse-proxies) — Route traffic from domains to containers
→ [SSL/TLS for Self-Hosters](/concepts/ssl-tls) — Encrypt your traffic
→ [Environment Variables & Secrets](/concepts/env-secrets) — Secure your configuration
56
docs/app/concepts/page.mdx
Normal file
@@ -0,0 +1,56 @@
|
||||
---
|
||||
title: "Concepts"
|
||||
description: "The foundational knowledge for self-hosting. Docker, networking, security, backups — explained like you're a human, not a sysadmin."
|
||||
---
|
||||
|
||||
# Concepts
|
||||
|
||||
Before you deploy anything, understand the building blocks. These guides cover the **why** and **how** behind self-hosting infrastructure — no fluff, no PhD required.
|
||||
|
||||
> 📖 **Reading order matters.** Start from the top and work down. Each article builds on the one before it.
|
||||
|
||||
---

## The Foundations

These four are non-negotiable. Read them before your first deploy.

| # | Guide | What You'll Learn |
|---|---|---|
| 1 | [Docker in 10 Minutes](/concepts/docker-basics) | Images, containers, volumes, Docker Compose — the only 4 concepts you need |
| 2 | [Networking for Self-Hosters](/concepts/networking) | Ports, DNS, firewalls, and why your tool isn't accessible from the internet |
| 3 | [Reverse Proxies Explained](/concepts/reverse-proxies) | Map `app.yourdomain.com` to your containers with Caddy |
| 4 | [SSL/TLS for Self-Hosters](/concepts/ssl-tls) | HTTPS, Let's Encrypt, and why it matters |

---

## Running in Production

Once your tools are deployed, keep them alive and healthy.

| # | Guide | What You'll Learn |
|---|---|---|
| 5 | [Environment Variables & Secrets](/concepts/env-secrets) | `.env` files, Docker secrets, and never hardcoding passwords again |
| 6 | [Monitoring & Observability](/concepts/monitoring) | Know when things break before your users do |
| 7 | [Updating & Maintaining Containers](/concepts/updates) | Safe update workflows, rollbacks, and automating the boring parts |
| 8 | [Backups That Actually Work](/concepts/backups) | Database dumps, volume backups, and the 3-2-1 rule |

---

## Planning & Scaling

Before you buy a server (or a bigger one).

| # | Guide | What You'll Learn |
|---|---|---|
| 9 | [Hardware & VPS Sizing](/concepts/hardware) | How much RAM/CPU you actually need, and which providers are worth it |

---

## Ready to Deploy?

You've got the knowledge. Now put it to work:

→ [Deploy Guides](/deploy) — 65+ tools with Docker Compose configs
→ [Quick Start](/quick-start) — Your first deployment in 5 minutes
→ [Curated Stacks](/stacks) — Pre-built tool bundles for specific use cases
113
docs/app/concepts/reverse-proxies/page.mdx
Normal file
@@ -0,0 +1,113 @@
---
title: Reverse Proxies Explained
description: "What a reverse proxy does and why you need one. Set up Caddy or Nginx to serve your self-hosted tools on proper domains with automatic HTTPS."
---

# Reverse Proxies Explained

Right now your tools run on ports like `:3001`, `:8000`, `:8080`. That's fine for testing, but you don't want users visiting `http://your-ip:8000`.

A **reverse proxy** maps clean domains to those ugly ports:

```
plausible.yourdomain.com → localhost:8000
uptime.yourdomain.com    → localhost:3001
supabase.yourdomain.com  → localhost:8443
```

It also handles **HTTPS** (SSL certificates) automatically.

## Which One to Use?

| Proxy | Our Take |
|---|---|
| **Caddy** ✅ | **Use this.** Automatic HTTPS, zero-config SSL, human-readable config. Built for self-hosters. |
| **Nginx Proxy Manager** | GUI-first option. Great if you hate config files. Slightly more resource-heavy. |
| **Traefik** | Powerful but complex. Shines in Kubernetes and large dynamic setups. Overkill for most self-hosting. |
| **Nginx (raw)** | The classic. Fine but verbose. No auto-SSL without certbot scripts. |

> 🏆 **The Verdict:** Start with Caddy. Seriously. The config file is 6 lines.

## Setting Up Caddy (Recommended)

### Step 1: Deploy Caddy

```yaml
# docker-compose.yml
services:
  caddy:
    image: caddy:2-alpine
    container_name: caddy
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
```

### Step 2: Configure Your Domains

Create a `Caddyfile` in the same directory:

```
plausible.yourdomain.com {
    reverse_proxy localhost:8000
}

uptime.yourdomain.com {
    reverse_proxy localhost:3001
}

git.yourdomain.com {
    reverse_proxy localhost:3000
}
```

That's the entire config. Caddy automatically obtains and renews Let's Encrypt SSL certificates for every domain listed.

> ⚠️ **Containerized gotcha:** inside the Caddy container, `localhost` means the Caddy container itself. If your apps publish ports on the host, proxy to the host via `host.docker.internal` (on Linux, add `extra_hosts: ["host.docker.internal:host-gateway"]`); if they share a Docker network with Caddy, proxy to the service name instead (e.g. `reverse_proxy plausible:8000`).
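
If Caddy and your apps share a Docker network (see [Networking for Self-Hosters](/concepts/networking)), a sketch of the same Caddyfile using Compose service names instead of `localhost` (the names `plausible`, `uptime-kuma`, and `gitea` are illustrative, and the ports are the containers' internal ports):

```
plausible.yourdomain.com {
    reverse_proxy plausible:8000
}

uptime.yourdomain.com {
    reverse_proxy uptime-kuma:3001
}

git.yourdomain.com {
    reverse_proxy gitea:3000
}
```

Service names resolve through Docker's internal DNS, so this keeps working even when container IPs change.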

### Step 3: Point DNS

In your domain registrar (Cloudflare, Namecheap, etc.), add A records:

| Type | Name | Value |
|---|---|---|
| A | `plausible` | `your-server-ip` |
| A | `uptime` | `your-server-ip` |
| A | `git` | `your-server-ip` |

### Step 4: Start

```bash
docker compose up -d
```

Within a minute or so, Caddy obtains SSL certificates and your tools are live on proper HTTPS domains.

## How It Works (Simplified)

```
User visits plausible.yourdomain.com
        ↓
DNS resolves to your server IP
        ↓
Caddy receives the request on port 443
        ↓
Caddy reads Caddyfile: "plausible.yourdomain.com → localhost:8000"
        ↓
Caddy forwards the request to your Plausible container
        ↓
User sees Plausible dashboard over HTTPS 🔒
```

→ [Setting Up a Reverse Proxy (Practical Guide)](/quick-start/reverse-proxy) — Get Nginx, Caddy, or Traefik running now
→ [SSL/TLS for Self-Hosters](/concepts/ssl-tls) — Deep dive into certificates and security
→ [Deploy Guides](/deploy) — All our guides include reverse proxy config
56
docs/app/concepts/ssl-tls/page.mdx
Normal file
@@ -0,0 +1,56 @@
---
title: "SSL/TLS for Self-Hosters"
description: "HTTPS for your self-hosted tools. How SSL works, why you need it, and how to set it up with Caddy or Let's Encrypt."
---

# SSL/TLS for Self-Hosters

**SSL/TLS** is what makes the padlock appear in your browser. It encrypts traffic between your users and your server so nobody can snoop on it.

Every self-hosted tool accessible from the internet **must** have HTTPS. No exceptions.

## The Easy Way: Caddy (Automatic)

If you followed our [reverse proxy guide](/concepts/reverse-proxies) and are using Caddy, **you already have SSL**. Caddy obtains and renews Let's Encrypt certificates automatically for every domain in your Caddyfile.

No config needed. No cron jobs. No certbot. It just works.

> 🔥 **Pro Tip:** This is the #1 reason we recommend Caddy over Nginx.

## The Manual Way: Let's Encrypt + Certbot

If you're using raw Nginx, you'll need certbot:

```bash
# Install certbot
apt install certbot python3-certbot-nginx -y

# Obtain a certificate
certbot --nginx -d plausible.yourdomain.com

# Verify auto-renewal
certbot renew --dry-run
```

Certbot will modify your Nginx config automatically and set up a systemd timer (or cron job) for renewal.
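
The checklist below expects `http://` to redirect to `https://`. Certbot's `--nginx` installer adds that redirect for you; a hand-written version looks roughly like this (a sketch; certbot's generated config differs in the details):

```
server {
    listen 80;
    server_name plausible.yourdomain.com;

    # Send all plain-HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}
```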

## SSL Checklist

After setting up SSL, verify:

- [ ] Site loads on `https://` (padlock visible)
- [ ] `http://` redirects to `https://` automatically
- [ ] Certificate is from Let's Encrypt (click padlock → "Certificate")
- [ ] No mixed-content warnings in browser console

## Common Gotchas

**"Certificate not found"** → Your DNS hasn't propagated yet. Wait 5–10 minutes and try again.

**"Too many requests"** → Let's Encrypt rate-limits to 50 certificates per registered domain per week. If you're testing, use the `--staging` flag first.

**"Connection refused on port 443"** → Port 443 isn't open in your firewall. Run: `ufw allow 443/tcp`

## Next Steps

→ [Backups That Actually Work](/concepts/backups) — Protect the data you're securing with SSL
153
docs/app/concepts/updates/page.mdx
Normal file
@@ -0,0 +1,153 @@
---
title: "Updating & Maintaining Containers"
description: "How to safely update self-hosted tools running in Docker. Update workflows, rollbacks, and optional automation with Watchtower."
---

# Updating & Maintaining Containers

Your tools need updates — security patches, bug fixes, new features. But updating a self-hosted tool isn't like clicking "Update" in an app store. You need a process.

## The Safe Update Workflow

Follow this **every time** you update a tool:

```bash
# 1. Backup first (ALWAYS)
docker exec my-db pg_dump -U postgres mydb > backup_$(date +%Y%m%d).sql

# 2. Pull the new image
docker compose pull

# 3. Recreate containers with the new image
docker compose up -d

# 4. Check logs for errors
docker compose logs -f --tail=50

# 5. Verify the tool works
curl -I https://app.yourdomain.com
```

> ⚠️ **Golden Rule:** Never update without a backup. If something breaks, you can roll back in 60 seconds.

## Rolling Back

Something went wrong? Here's how to revert to the previous version:

### Option 1: Pin to Previous Version

```yaml
# docker-compose.yml — change the tag
services:
  app:
    image: plausible/analytics:v2.0.0  # Was :v2.1.0
```

```bash
docker compose up -d
```
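
If you prefer to script the pin, a quick `sed` sketch, demonstrated here on a scratch file so nothing real is touched (the path and tags are illustrative):

```shell
# Create a scratch compose file to practice on (illustrative)
cat > /tmp/demo-compose.yml <<'EOF'
services:
  app:
    image: plausible/analytics:v2.1.0
EOF

# Pin the image back to the previous version
sed -i 's|plausible/analytics:v2.1.0|plausible/analytics:v2.0.0|' /tmp/demo-compose.yml

# Confirm the change
grep 'image:' /tmp/demo-compose.yml
```

Run the same substitution against your real `docker-compose.yml` once you've checked it on the scratch copy.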

### Option 2: Restore From Backup

```bash
# Stop the broken service
docker compose down

# Restore the database backup
cat backup_20260218.sql | docker exec -i my-db psql -U postgres mydb

# Start with the old image
docker compose up -d
```

## Image Tags: `latest` vs Pinned Versions

| Approach | Pros | Cons |
|---|---|---|
| `image: app:latest` | Always gets the newest release | Can break unexpectedly |
| `image: app:v2.1.0` | Predictable, reproducible | Manual updates required |
| `image: app:2` | Gets patches within a major version | Some risk of breaking changes |

> 🏆 **Our Recommendation:** Use **major version tags** (`image: postgres:16`) for databases and **pinned versions** (`image: plausible/analytics:v2.1.0`) for applications. Avoid `latest` in production.
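
Applied to a typical two-service stack, that recommendation looks like this (service and image names illustrative):

```yaml
services:
  app:
    image: plausible/analytics:v2.1.0   # pinned: update deliberately, with a backup
  db:
    image: postgres:16                  # major tag: patch releases only, no surprise v17
```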

## Automated Updates with Watchtower

If you want hands-off updates (with some risk), **Watchtower** watches your containers and auto-updates them:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      WATCHTOWER_CLEANUP: "true"
      WATCHTOWER_SCHEDULE: "0 0 4 * * *"  # 4 AM daily (6-field cron, seconds first)
      WATCHTOWER_NOTIFICATIONS: "email"   # also needs WATCHTOWER_NOTIFICATION_EMAIL_* vars
    command: --include-restarting
```

### Watchtower Caveats

- It updates **all** containers by default. Use labels to control which ones (and set `WATCHTOWER_LABEL_ENABLE: "true"` on the Watchtower container if you want strictly opt-in updates):

```yaml
services:
  plausible:
    image: plausible/analytics:latest
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

  database:
    image: postgres:16
    labels:
      - "com.centurylinklabs.watchtower.enable=false"  # NEVER auto-update databases
```

- It doesn't run migrations. Some tools need a `docker exec app migrate` step after updates.
- It can't roll back automatically.

> ⚠️ **Never auto-update databases.** Postgres, MySQL, and Redis major version upgrades require manual migration steps. Always pin database images.

## Cleanup: Reclaiming Disk Space

Old images pile up. Docker doesn't clean them up automatically:

```bash
# See how much space Docker is using
docker system df

# Remove unused images (safe)
docker image prune -f

# Nuclear option: remove ALL unused data
docker system prune -a -f --volumes
# ⚠️ This deletes stopped containers, unused images, AND orphaned volumes
```

### Automate Cleanup

Add to your crontab:

```bash
# Weekly cleanup at 3 AM Sunday
0 3 * * 0 docker image prune -f >> /var/log/docker-cleanup.log 2>&1
```

## Update Checklist

Before updating any tool:

- [ ] Database backed up
- [ ] Current version noted (in case of rollback)
- [ ] Changelog reviewed for breaking changes
- [ ] `.env` file backed up
- [ ] Update applied and logs checked
- [ ] Service verified working

## Next Steps

→ [Backups That Actually Work](/concepts/backups) — Make sure you can actually roll back
→ [Monitoring & Observability](/concepts/monitoring) — Catch failed updates automatically