# Docker Setup
IntegraMon is built as a multi-process container. One image contains:

- Nginx
- Gunicorn with the Django ASGI app
- Redis when `REDIS_LOCAL=true`
- PostgreSQL when `DB_LOCAL=true` and `DB_BACKEND=postgresql`
- multiple Celery workers
- the custom `cpi/worker.py` controller loop
- the PDF export toolchain with Chromium and Mermaid CLI
Supervisor starts and coordinates those processes.
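As a rough sketch, the Supervisor side of this could look like the fragment below. The program names, commands, and the `uvicorn` worker class are illustrative assumptions, not taken from the actual image:

```ini
[supervisord]
nodaemon=true

[program:postgres]
; started only when DB_LOCAL=true
command=postgres -D /var/lib/postgresql/data
autorestart=true

[program:redis]
; matches the documented behavior: no RDB or AOF persistence
command=redis-server --save "" --appendonly no
autorestart=true

[program:gunicorn]
; "config.asgi:application" is a hypothetical module path
command=gunicorn config.asgi:application -k uvicorn.workers.UvicornWorker
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"
autorestart=true
```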
## Runtime boot sequence
At container startup the sequence is:
1. `deploy/scripts/boot.py` turns JSON config into `/app/backend/src/.env`
2. `deploy/start.sh` sources that `.env`
3. `mount.py` optionally mounts Koofr, SMB, or SSHFS targets
4. `DATA_DIR` and `LOG_DIR` are resolved
5. Nginx config is rendered from templates unless `NGINX_CONF_OVERRIDE` is used
6. Supervisor starts PostgreSQL, Redis, migration, web, Celery, controller, and Nginx processes
This is why Docker configuration is not just a set of `docker run -e ...` variables: the boot layer rewrites and normalizes the effective runtime environment first.
## How configuration reaches the container
There are two supported bootstrap channels:
- the `CONFIG_JSON_B64` environment variable
- a mounted file at `/run/secrets/app-config.json`
Both are expected to contain JSON with upper-case keys such as:

```json
{
  "DB_BACKEND": "postgresql",
  "DB_HOST": "127.0.0.1",
  "DB_PORT": 5432,
  "DB_NAME": "monitorx",
  "DB_USER": "postgres",
  "DB_PASSWORD": "secret",
  "CELERY_BROKER_URL": "redis://localhost:6379/0",
  "REDIS_CACHE_URL": "redis://localhost:6379/2",
  "DATA_DIR": "/app/data"
}
```
## Why CONFIG_JSON_B64 exists
Base64 JSON is useful because:

- it keeps one structured payload instead of many `-e KEY=value` flags
- it works well in platforms that inject only environment variables
- it avoids quoting problems for JSON strings inside shell commands
- it can carry the same keys that later become `.env` entries
Inside the container, `boot.py`:

- strips whitespace
- base64-decodes the payload
- parses it as JSON
- writes normalized `KEY=value` lines into `/app/backend/src/.env`
If `CONFIG_JSON_B64` is invalid, boot stops immediately.
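For local testing, a valid payload can be produced and round-tripped with standard tools. The file name and JSON content below are placeholders:

```shell
# Encode a JSON config file into CONFIG_JSON_B64 (GNU coreutils base64;
# -w0 disables line wrapping so the payload stays on one line).
printf '{"DB_BACKEND": "sqlite", "DATA_DIR": "/app/data"}' > /tmp/app-config.json
CONFIG_JSON_B64=$(base64 -w0 < /tmp/app-config.json)

# Round-trip check: decode it back, the same way boot.py does before parsing.
echo "$CONFIG_JSON_B64" | base64 -d
```

The resulting value can then be passed to the container with `-e CONFIG_JSON_B64="$CONFIG_JSON_B64"`.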
## Actual precedence inside Docker
The effective order in the running container is:
1. `CONFIG_JSON_B64`
2. `/run/secrets/app-config.json`
3. pre-existing `/app/backend/src/.env`
4. `boot.py` fallback defaults
5. Django setting-level defaults
6. runtime data in the database such as worker tuning, metric intervals, tenant config, templates, and cleanup settings
This matters because some settings are split across layers:
- infrastructure settings such as DB or Redis come from env
- application defaults may still apply if a key is missing
- tenant behavior often comes from `cConfigExt`
- platform tuning often comes from `cMetricSettings` and `cWorkerTuningSettings`
## Example: local all-in-one container
```shell
docker run -d \
  --name integramon-local \
  -p 80:80 \
  -v "$(pwd)/data:/app/data" \
  -v "$(pwd)/deploy/env/postgreslocal.docker.json:/run/secrets/app-config.json:ro" \
  --restart unless-stopped \
  integramon:latest
```
This pattern uses:
- internal PostgreSQL
- internal Redis
- persisted application data under `/app/data`
- generated Nginx config from the HTTP template
## Example: external PostgreSQL and Redis
```shell
docker run -d \
  --name integramon-prod \
  -p 80:80 \
  -e APP_BASE_PATH="/integramon" \
  -e FRONTEND_BASE_URL="https://example.com/integramon" \
  -e ENABLE_SSL="false" \
  -e CONFIG_JSON_B64="$CONFIG_JSON_B64" \
  -v "$(pwd)/data:/app/data" \
  --restart unless-stopped \
  integramon:latest
```
In this model the JSON payload should set:
- `DB_LOCAL=false`
- `REDIS_LOCAL=false`
- external `DB_HOST`, `DB_USER`, `DB_PASSWORD`
- external `CELERY_BROKER_URL`, `CELERY_RESULT_BACKEND`, `REDIS_CACHE_URL`, and ideally `CHANNELS_REDIS_URL`
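Putting those keys together, an external-services payload might look like the following. Host names, credentials, and the Redis database indices for the result backend and channels layer are placeholders, not documented values:

```json
{
  "DB_LOCAL": false,
  "REDIS_LOCAL": false,
  "DB_BACKEND": "postgresql",
  "DB_HOST": "db.internal.example.com",
  "DB_PORT": 5432,
  "DB_NAME": "monitorx",
  "DB_USER": "integramon",
  "DB_PASSWORD": "change-me",
  "CELERY_BROKER_URL": "redis://redis.internal.example.com:6379/0",
  "CELERY_RESULT_BACKEND": "redis://redis.internal.example.com:6379/1",
  "REDIS_CACHE_URL": "redis://redis.internal.example.com:6379/2",
  "CHANNELS_REDIS_URL": "redis://redis.internal.example.com:6379/3"
}
```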
## Example: compose pattern
The repository `docker-compose.yml` currently shows a development-style split with:

- postgres
- redis
- app
For production, a more realistic compose setup should also mount persistent data and pass explicit runtime config:
```yaml
services:
  app:
    image: integramon:latest
    restart: unless-stopped
    ports:
      - "80:80"
    environment:
      APP_BASE_PATH: /integramon
      FRONTEND_BASE_URL: https://example.com/integramon
      CONFIG_JSON_B64: ${CONFIG_JSON_B64}
    volumes:
      - ./data:/app/data
```
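Compose substitutes `${CONFIG_JSON_B64}` from the calling environment or from a `.env` file next to `docker-compose.yml`, so one way to wire it up is a sketch like this. The JSON payload here is a placeholder, not a full production config:

```shell
# Build the base64 payload and store it where docker compose picks it up.
printf '{"DB_LOCAL": false, "REDIS_LOCAL": false}' > prod.json
printf 'CONFIG_JSON_B64=%s\n' "$(base64 -w0 < prod.json)" > .env
```

With the `.env` file in place, `docker compose up -d` starts the stack with the payload injected.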
## Volumes and persistence
Recommended persistent mounts:
/app/data- optionally custom certificate paths referenced by
SSL_CERTandSSL_KEY - optionally a custom Nginx config referenced by
NGINX_CONF_OVERRIDE - optionally
/run/secrets/app-config.json
What is stored in `/app/data` depends on the selected mode:

- SQLite database file when `DB_BACKEND=sqlite`
- tenant archives
- tenant job logs
- runtime storage directories resolved from `cConfigExt`
What is not persisted automatically unless you mount it deliberately:
- `/var/log/nginx`
- internal PostgreSQL cluster files
- internal Redis memory state, because internal Redis runs without RDB or AOF persistence
For production, that means:
- do not rely on internal Redis for durable queue recovery
- prefer external PostgreSQL or mount PostgreSQL data if you keep the in-container database
## Internal versus external services
`DB_LOCAL` and `REDIS_LOCAL` only decide whether Supervisor starts the internal daemons.
They do not automatically rewrite Django connection settings.
Examples:
- `REDIS_LOCAL=false` with no external Redis URLs still leaves Django pointing at default local Redis URLs, and it will fail
- `DB_LOCAL=false` with `DB_BACKEND=postgresql` still requires valid external Postgres credentials
## Reverse proxy and subpath handling
`APP_BASE_PATH` is used in three places:

- Nginx location blocks
- the runtime-generated `/var/www/html/app-config.js`
- the rewritten `<base href>` inside `index.html`
This is why the same frontend build can run both at / and under a subpath such as /integramon.
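As a rough illustration, the rendered subpath handling could resemble the Nginx fragment below. The location paths, upstream port, and directives are assumptions, not the image's actual template:

```nginx
# Serve the SPA under the subpath; fall back to index.html for client routes.
location /integramon/ {
    alias /var/www/html/;
    try_files $uri $uri/ /integramon/index.html;
}

# Proxy API traffic to the Gunicorn/ASGI upstream (port is an assumption).
location /integramon/api/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
}
```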
## HTTPS behavior
If `ENABLE_SSL=true`, `start.sh` renders the HTTPS Nginx template and expects:

- `SERVER_NAME`
- `SSL_CERT`
- `SSL_KEY`
If these paths are missing or wrong, Nginx startup will fail.
For deployments behind an upstream reverse proxy or load balancer, it is usually simpler to keep ENABLE_SSL=false and terminate TLS upstream.
## Logging and restart behavior
Supervisor writes process logs into `LOG_DIR`, typically:

- `postgres.log`
- `redis.log`
- `migrate.log`
- `gunicorn.log`
- `celery-*.log`
- `worker.log`
- `nginx.log`
The container CMD is `bash /start.sh`, so container restarts rerun the whole bootstrap sequence.
Recommended Docker restart policy: `unless-stopped` for long-lived environments.
## PDF generation in Docker
The image already installs everything required by `manage.py export_docs_pdfs`:
- Chromium
- Mermaid CLI
- Cairo and Pango libraries
- the documentation tree itself
The Docker build even runs:

```shell
python /app/backend/src/manage.py export_docs_pdfs --output-dir /app/generated-docs
```
That means new Markdown pages added under `docs/integramon/docs` and `docs/integramon/sysdocs` must stay compatible with the same export pipeline. Plain Markdown, standard tables, and Mermaid blocks are safe choices.