How to Run OpenClaw 24/7 with Docker, Health Checks, and Backups
Complete Docker deployment guide for running OpenClaw around the clock with compose files, health checks, backups, monitoring, and multi-channel setup.
If OpenClaw only runs when your laptop is open, you lose most of the value. A stable 24/7 deployment turns it into infrastructure: always available, observable, and recoverable after failures.
This guide shows a practical Docker pattern for production-ish usage. Every section opens with a short answer capsule so you can scan the key points quickly, then dives into the details with copy-paste-ready examples.
What Reliable Means in Practice
A reliable OpenClaw deployment is one that restarts itself after crashes, detects unhealthy states within seconds, persists all data to durable volumes, and can be fully restored from backup in under ten minutes. If any of those properties are missing, you do not have a production-grade setup yet.
For an always-on assistant, reliability means:
- Automatic restart after crash or host reboot
- Fast health detection for broken states (stuck processes, expired tokens)
- Persistent data volumes that survive container recreation
- Clear logs with a retention policy that does not fill your disk
- Repeatable backup and restore process that has been tested at least once
The rest of this guide covers each point with concrete configuration.
Recommended Deployment Layout
The recommended Docker layout for OpenClaw uses a single docker-compose.yml file with the OpenClaw service, an optional Redis cache for session state, mounted volumes for persistent data, and resource limits to prevent runaway memory usage. This layout works on any Linux host, a mini PC, a cloud VM, or even a Raspberry Pi.
Infrastructure overview
- One host (mini PC, VM, or cloud instance with at least 1 GB RAM)
- One `docker-compose.yml` that defines all services
- Mounted volume for OpenClaw state at `./openclaw-data`
- Separate directory for backup snapshots at `./backups`
- A `.env` file for secrets (never committed to version control)
Complete docker-compose.yml
```yaml
version: "3.8"

services:
  openclaw:
    image: openclaw/openclaw:latest
    container_name: openclaw
    restart: unless-stopped
    command: gateway
    env_file:
      - .env
    volumes:
      - ./openclaw-data:/root/.openclaw
      - ./logs:/var/log/openclaw
    ports:
      - "127.0.0.1:3100:3100"
    healthcheck:
      test: ["CMD", "openclaw", "status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
    deploy:
      resources:
        limits:
          memory: 1G
          cpus: "1.0"
        reservations:
          memory: 256M
          cpus: "0.25"
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "5"
    networks:
      - openclaw-net

  redis:
    image: redis:7-alpine
    container_name: openclaw-redis
    restart: unless-stopped
    volumes:
      - redis-data:/data
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 15s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: "0.5"
    networks:
      - openclaw-net

networks:
  openclaw-net:
    driver: bridge

volumes:
  redis-data:
```
Key decisions in this file:
- `restart: unless-stopped` brings the container back after crashes and reboots, but respects manual stops.
- Port binding to `127.0.0.1` prevents external access. Use a reverse proxy (Nginx, Caddy) if you need public endpoints.
- Resource limits prevent a runaway process from taking down the host. Adjust the memory limit upward if you run local LLM inference.
- Redis provides optional session caching. OpenClaw works without it, but response times improve when conversation context is cached.
Secrets and Environment
Store all API keys and channel tokens in a .env file with restricted permissions, never inside the Docker image or the compose file itself. Rotate keys on a fixed schedule (every 90 days minimum) and use Docker secrets for production deployments that require stricter isolation.
Complete .env template
Create a .env file in the same directory as your docker-compose.yml:
```bash
# .env — OpenClaw Docker configuration
# Permissions: chmod 600 .env

# === AI Provider Keys ===
OPENAI_API_KEY=sk-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key-here

# === Channel Credentials ===
TELEGRAM_BOT_TOKEN=123456:ABC-your-telegram-token
WHATSAPP_SESSION_ID=openclaw-wa-session
DISCORD_BOT_TOKEN=your-discord-bot-token

# === OpenClaw Settings ===
OPENCLAW_DATA_DIR=/root/.openclaw
OPENCLAW_LOG_LEVEL=info
OPENCLAW_HEALTH_PORT=3100
OPENCLAW_REDIS_URL=redis://redis:6379

# === Optional: Backup ===
BACKUP_S3_BUCKET=my-openclaw-backups
AWS_ACCESS_KEY_ID=your-aws-key
AWS_SECRET_ACCESS_KEY=your-aws-secret
```
Lock down the file immediately after creation:
chmod 600 .env
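A deploy or backup script can enforce this instead of trusting memory. A minimal sketch (`check_env_perms` is an illustrative helper, not part of OpenClaw; assumes GNU `stat` on Linux, use `stat -f '%Lp'` on macOS):

```shell
#!/usr/bin/env bash
# Sketch: refuse to proceed if the secrets file is readable by anyone but the
# owner. check_env_perms is a hypothetical helper for illustration.
check_env_perms() {
  local file="$1" mode
  mode=$(stat -c '%a' "$file") || return 1
  if [ "$mode" != "600" ]; then
    echo "insecure permissions on $file: $mode (expected 600)" >&2
    return 1
  fi
  echo "OK: $file is mode $mode"
}
```

Call it early in your scripts, for example `check_env_perms .env || exit 1`.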
Using Docker secrets for stricter isolation
For deployments where the .env approach is not sufficient (shared hosts, compliance requirements), use Docker secrets:
```yaml
# Add to docker-compose.yml
services:
  openclaw:
    secrets:
      - openai_key
      - anthropic_key

secrets:
  openai_key:
    file: ./secrets/openai_key.txt
  anthropic_key:
    file: ./secrets/anthropic_key.txt
```
Inside the container, secrets are available at /run/secrets/<secret_name>.
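To confirm the mounts without ever echoing secret values, a hedged check like the following works (`secret_present` is an illustrative helper; container and secret names are the ones used above):

```shell
# Sketch: verify a Docker secret is mounted and non-empty without printing it.
secret_present() {
  docker compose exec openclaw sh -c "test -s /run/secrets/$1"
}

# Example:
#   secret_present openai_key && echo "openai_key mounted" || echo "openai_key MISSING"
```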
Key rotation procedure
- Generate the new key from your provider dashboard.
- Update the `.env` file (or the secret file).
- Restart only the OpenClaw container: `docker compose restart openclaw`.
- Verify the new key works: `docker compose exec openclaw openclaw status`.
- Revoke the old key from the provider dashboard.
Never revoke the old key before confirming the new one works. If something goes wrong during rotation, consult the incident response playbook for structured troubleshooting steps.
Logs and Monitoring
Docker’s json-file logging driver with size rotation is the minimum viable monitoring setup for OpenClaw. Configure a max log size of 10 MB with 5 rotated files to prevent disk exhaustion, and add a health check endpoint so orchestration tools can detect stuck processes within 30 seconds.
Logging driver configuration
The docker-compose.yml above already includes log rotation. Here is what each option does:
```yaml
logging:
  driver: json-file
  options:
    max-size: "10m"  # Rotate after 10 MB
    max-file: "5"    # Keep 5 rotated files (50 MB total max)
```
To view live logs:
docker compose logs -f openclaw --tail 100
Health check details
The health check defined in the compose file calls `openclaw status` every 30 seconds and marks the container unhealthy after 3 consecutive failures. Note that `restart: unless-stopped` only restarts a container whose main process exits; plain Docker (outside Swarm) does not restart a container just because it is unhealthy. To act on the unhealthy state automatically, run a watcher such as the willfarrell/autoheal sidecar, or poll the status from cron.
Check health status manually:
docker inspect --format='{{.State.Health.Status}}' openclaw
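Because plain Docker only flags the unhealthy state without acting on it, a tiny cron-driven watchdog can close the gap. A sketch, assuming the container name `openclaw` from the compose file:

```shell
#!/usr/bin/env bash
# Sketch: restart the container when Docker reports it unhealthy.
# Plain Docker does not do this on its own; run this script from cron.
CONTAINER="${CONTAINER:-openclaw}"

health_status() {
  docker inspect --format='{{.State.Health.Status}}' "$1" 2>/dev/null
}

if [ "$(health_status "$CONTAINER")" = "unhealthy" ]; then
  echo "restarting unhealthy container: $CONTAINER"
  docker restart "$CONTAINER"
fi
```

A `* * * * *` crontab entry gives roughly the same reaction time as the 30-second health check interval.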
Optional: Prometheus and Grafana stack
For teams that want dashboards and alerting, add a monitoring sidecar:
```yaml
# Append to the services: section of docker-compose.yml
prometheus:
  image: prom/prometheus:latest
  container_name: openclaw-prometheus
  restart: unless-stopped
  volumes:
    - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    - prometheus-data:/prometheus
  ports:
    - "127.0.0.1:9090:9090"
  networks:
    - openclaw-net

grafana:
  image: grafana/grafana:latest
  container_name: openclaw-grafana
  restart: unless-stopped
  volumes:
    - grafana-data:/var/lib/grafana
  ports:
    - "127.0.0.1:3000:3000"
  networks:
    - openclaw-net
```
Add prometheus-data and grafana-data to the volumes: section at the bottom of the file. Track these metrics at minimum: container restart count, memory usage, response latency, and health check failure rate.
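The Prometheus service mounts `./monitoring/prometheus.yml`, which you must supply. A minimal starting point is sketched below; whether OpenClaw actually serves Prometheus metrics on its health port is an assumption, so verify what your build exposes before relying on it:

```yaml
# monitoring/prometheus.yml — minimal sketch.
# Assumes a metrics endpoint on the OpenClaw health port; adjust the target
# and metrics_path to whatever your version actually serves.
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: openclaw
    static_configs:
      - targets: ["openclaw:3100"]
```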
Simple rule: if you cannot answer “what failed and when,” observability is not ready.
Backups and Recovery Drill
Backup only matters if restore is proven. The recommended approach is a daily volume snapshot with 7-day rolling retention, automated via cron, and tested monthly by restoring to a separate container. Skip any of these steps and you will discover the gap during an actual incident.
Backup script
Save this as backup-openclaw.sh in your project root:
```bash
#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR="./backups"
DATA_DIR="./openclaw-data"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
BACKUP_FILE="${BACKUP_DIR}/openclaw-backup-${TIMESTAMP}.tar.gz"
RETENTION_DAYS=7

mkdir -p "${BACKUP_DIR}"

# Stop the container briefly for a consistent snapshot
docker compose stop openclaw

# Create compressed archive
tar -czf "${BACKUP_FILE}" -C "$(dirname "${DATA_DIR}")" "$(basename "${DATA_DIR}")"

# Restart immediately
docker compose start openclaw

# Remove backups older than retention period
find "${BACKUP_DIR}" -name "openclaw-backup-*.tar.gz" -mtime +${RETENTION_DAYS} -delete

echo "Backup complete: ${BACKUP_FILE}"
echo "Size: $(du -h "${BACKUP_FILE}" | cut -f1)"
```
Make it executable:
chmod +x backup-openclaw.sh
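Before trusting a snapshot, it is worth checking that the archive is actually readable. A small sketch (`verify_backup` is an illustrative helper, not part of OpenClaw):

```shell
#!/usr/bin/env bash
# Sketch: sanity-check a backup archive. A corrupt gzip stream or a
# truncated tar fails the listing step.
verify_backup() {
  local file="$1"
  [ -s "$file" ] || { echo "missing or empty: $file" >&2; return 1; }
  tar -tzf "$file" > /dev/null || { echo "corrupt archive: $file" >&2; return 1; }
  echo "OK: $file ($(tar -tzf "$file" | wc -l) entries)"
}

# Example: check the newest snapshot
#   verify_backup "$(ls -t ./backups/openclaw-backup-*.tar.gz | head -n 1)"
```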
Automate with cron
Run the backup daily at 3 AM:
```bash
crontab -e
# Add this line:
0 3 * * * cd /opt/openclaw && ./backup-openclaw.sh >> /var/log/openclaw-backup.log 2>&1
```
Optional: push backups to S3
Add this to the end of the backup script to copy to remote storage:
```bash
if [ -n "${BACKUP_S3_BUCKET:-}" ]; then
  aws s3 cp "${BACKUP_FILE}" "s3://${BACKUP_S3_BUCKET}/backups/"
  echo "Uploaded to s3://${BACKUP_S3_BUCKET}/backups/"
fi
```
Step-by-step restore procedure
- Stop the running container: `docker compose stop openclaw`
- Move the current data directory aside: `mv ./openclaw-data ./openclaw-data.old`
- Extract the backup: `tar -xzf ./backups/openclaw-backup-YYYYMMDD-HHMMSS.tar.gz`
- Start the container: `docker compose start openclaw`
- Verify health: `docker inspect --format='{{.State.Health.Status}}' openclaw`
- Test by sending a message through any connected channel.
- Once confirmed, remove the old data: `rm -rf ./openclaw-data.old`
Run a monthly recovery drill. Most teams discover missing files only during the first drill.
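The drill can be scripted so it never touches live data: restore into a scratch directory and point a throwaway container at it. A sketch using a hypothetical `restore_snapshot` helper; the image name and paths match the compose file above:

```shell
#!/usr/bin/env bash
# Sketch of a monthly restore drill. restore_snapshot is an illustrative
# helper, not part of OpenClaw.
restore_snapshot() {
  # Extract a backup archive into a scratch directory, leaving live data alone.
  local archive="$1" target="$2"
  mkdir -p "$target"
  tar -xzf "$archive" -C "$target"
}

# Drill steps (run manually):
#   restore_snapshot "$(ls -t ./backups/openclaw-backup-*.tar.gz | head -n 1)" ./restore-drill
#   docker run --rm -v "$PWD/restore-drill/openclaw-data:/root/.openclaw" \
#     openclaw/openclaw:latest status
#   rm -rf ./restore-drill
```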
Multi-Platform Channel Configuration in Docker
OpenClaw connects to WhatsApp, Telegram, and Discord simultaneously from a single Docker container. Each channel is enabled through environment variables in the .env file, and channel-specific state is persisted in the mounted data volume so sessions survive container restarts.
Enabling channels
Set the relevant tokens in your .env file (see the template above), then configure channels via the OpenClaw CLI inside the container:
```bash
# Telegram
docker compose exec openclaw openclaw channel add telegram --token "${TELEGRAM_BOT_TOKEN}"

# Discord
docker compose exec openclaw openclaw channel add discord --token "${DISCORD_BOT_TOKEN}"

# WhatsApp (QR-based auth)
docker compose exec openclaw openclaw channel add whatsapp --session "${WHATSAPP_SESSION_ID}"
```
For a detailed Discord setup including permissions, channel-specific behaviors, and monitoring, see the Discord AI bot setup guide. WhatsApp-specific configuration is covered in the WhatsApp AI bot guide.
Channel isolation
Each channel runs in its own thread within the OpenClaw gateway process. If one channel fails (for example, a revoked Discord token), the other channels continue operating. You will see the failure in the health check output and logs:
docker compose logs openclaw | grep -i "channel"
Restart a single channel without restarting the entire container:
docker compose exec openclaw openclaw channel restart discord
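To turn the log grep above into a number you can alert on, a small filter helps. A sketch; the log format is an assumption, so adjust the patterns to what your version actually emits:

```shell
# Sketch: count channel-related error lines from stdin.
# The "channel ... error" format is assumed, not documented.
count_channel_errors() {
  grep -i "channel" | grep -ic "error"
}

# Example:
#   docker compose logs --since 1h openclaw | count_channel_errors
```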
Updating and Rolling Deployments
Pull the latest OpenClaw image, recreate the container, and verify health before removing the old image. For zero-downtime updates, pin your current working version as a fallback tag so you can roll back in under 60 seconds if the new version has issues.
Standard update procedure
```bash
# 1. Pull the new image
docker compose pull openclaw

# 2. Recreate only the openclaw service
docker compose up -d --no-deps openclaw

# 3. Watch health checks pass
docker compose logs -f openclaw --tail 50

# 4. Verify
docker inspect --format='{{.State.Health.Status}}' openclaw
```
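Steps 3 and 4 can be scripted so an update pipeline fails loudly instead of hanging. A sketch, assuming the container name and health check from the compose file (`wait_healthy` is an illustrative helper):

```shell
#!/usr/bin/env bash
# Sketch: block until the updated container reports healthy, with a bounded
# number of checks, so a script can decide whether to roll back.
wait_healthy() {
  local name="$1" tries="${2:-10}" i status
  for i in $(seq "$tries"); do
    status=$(docker inspect --format='{{.State.Health.Status}}' "$name" 2>/dev/null)
    [ "$status" = "healthy" ] && return 0
    sleep 3
  done
  echo "gave up: $name reports '$status' after $tries checks" >&2
  return 1
}

# Example:
#   docker compose up -d --no-deps openclaw && wait_healthy openclaw || echo "consider rolling back"
```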
Version pinning strategy
Do not use the `latest` tag in production once your setup is stable. Pin to a specific version:

```yaml
image: openclaw/openclaw:1.4.2
```
Before upgrading, tag your current working image locally:
docker tag openclaw/openclaw:1.4.2 openclaw/openclaw:known-good
Rollback procedure
If the new version causes problems:
```bash
# Stop the broken container
docker compose stop openclaw

# Edit docker-compose.yml to revert the image tag:
#   image: openclaw/openclaw:known-good

# Start with the previous version
docker compose up -d openclaw
```
The entire rollback takes under 60 seconds because the old image is already cached locally. For structured incident handling when things go wrong, follow the incident response playbook.
Resource Sizing Guide
OpenClaw running in Docker requires at least 512 MB of RAM and 1 CPU core for a single-channel deployment with a cloud LLM provider. Multi-channel setups need 1 GB RAM, and local LLM inference requires 4 GB or more depending on the model size.
| Use Case | RAM | CPU | Disk | Notes |
|---|---|---|---|---|
| Single channel, cloud LLM | 512 MB | 1 core | 1 GB | Telegram or Discord only |
| Multi-channel, cloud LLM | 1 GB | 1-2 cores | 2 GB | All channels active |
| Multi-channel + Redis | 1.5 GB | 2 cores | 3 GB | Faster response caching |
| Local LLM (7B params) | 8 GB | 4 cores | 15 GB | Requires GPU passthrough for acceptable speed |
| Raspberry Pi 4/5 | 2-4 GB | 4 cores (ARM) | 8 GB | Cloud LLM only, see Pi playbook |
Adjust the deploy.resources.limits section in your docker-compose.yml to match your use case. Setting limits too low causes OOM kills; setting them too high wastes resources on shared hosts.
Security Hardening Basics
- Restrict exposed ports to required services only (bind to `127.0.0.1`)
- Run host updates on a schedule (`apt upgrade` weekly via cron)
- Use separate credentials per channel integration
- Remove unused skills and integrations
- Never run the container as root in production if your setup supports rootless Docker
Security posture should improve before traffic grows, not after.
Recommended Rollout Path
- Start with one channel and one workflow
- Verify a full week of stable uptime
- Add additional channels gradually
- Revisit prompt guardrails and cost controls
For installation references, use the installation guide and channel-specific setup guides.
FAQ
What are the minimum system requirements for running OpenClaw in Docker?
You need at least 512 MB of RAM, 1 CPU core, and 1 GB of disk space for a single-channel deployment using a cloud LLM provider like OpenAI or Anthropic. Docker Engine 20.10 or later is required. Any Linux distribution, macOS, or Windows with WSL2 will work. For ARM-based systems like a Raspberry Pi 4, the same minimums apply but you are limited to cloud LLM providers since local model inference is too slow on ARM. See the Raspberry Pi playbook for Pi-specific instructions.
How do I update OpenClaw without downtime?
Run docker compose pull openclaw followed by docker compose up -d --no-deps openclaw. Docker will pull the new image, stop the old container, and start a new one. The total interruption is typically under 5 seconds. Pin your current working version with docker tag before upgrading so you can roll back instantly if the new version has issues. For a zero-interruption approach on critical deployments, run two instances behind a load balancer and update them one at a time.
Can I run OpenClaw Docker on a Raspberry Pi?
Yes. OpenClaw publishes ARM64 images that run on Raspberry Pi 4 and Pi 5 with 2 GB or more of RAM. Use cloud LLM providers (OpenAI, Anthropic) instead of local models, since even a Pi 5 lacks the compute power for local inference at acceptable latency. Apply the memory limits in the compose file to leave headroom for the OS. The full setup is covered in the Raspberry Pi AI assistant playbook.
A 24/7 OpenClaw deployment is not about complexity. It is about disciplined defaults, predictable recovery, and incremental rollout. Start with the compose file above, lock down your secrets, prove your backup works, and expand from there.