# Database Runtime

IntegraMon effectively supports two operating modes:
- SQLite
- PostgreSQL, including Neon-style hosted PostgreSQL variants

## Supported backends

The backend switch is driven by the `DB_BACKEND` environment variable. Accepted values:

- `sqlite`
- `postgresql`
- `postgres`
- `postgres_neon`
- `postgresql_neon`
- `neon`

## What is the real default?

There is an important implementation nuance:

- Django falls back to `sqlite` when no `DB_BACKEND` is provided
- `deploy/scripts/boot.py` uses `postgresql` in its own fallback JSON defaults
So the practical default depends on how the container was started:
- no bootstrap JSON and no existing `.env`: Django behaves like SQLite
- bootstrap JSON generated from the Docker samples: usually PostgreSQL
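
A minimal way to sidestep this ambiguity is to pin the backend explicitly at startup. `DB_BACKEND` and its values come from this document; everything else is illustrative:

```bash
# Pin the backend rather than relying on Django's sqlite fallback
# or boot.py's postgresql fallback.
export DB_BACKEND=postgresql   # or: sqlite
```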

## SQLite versus PostgreSQL

SQLite is acceptable when:
- one small team uses the system
- write concurrency is low
- the message volume is modest
- you mostly need a simple all-in-one deployment
- local file-based backup is acceptable

PostgreSQL is recommended when:
- multiple users work in parallel
- message ingestion is continuous
- multiple Celery workers write at the same time
- archives, alerts, and metrics run regularly
- you need predictable operational recovery

## Concurrency impact

SQLite is a single-file database. It handles light loads well, but it allows only one writer at a time, so concurrent writes queue behind a database-level lock.
In this codebase that matters because several processes can write concurrently:
- Django web requests
- multiple Celery workers
- the custom controller loop
- archive cleanup and metric snapshot jobs

`SQLITE_TIMEOUT` defaults to 20 seconds, which reduces immediate lock failures, but it does not change the fundamental single-writer behavior.
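
If SQLite must absorb occasional write bursts, the timeout can be raised. The variable name is from this document; the value is illustrative:

```bash
# Let writers wait longer for the database lock before failing.
export SQLITE_TIMEOUT=60
```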
PostgreSQL is therefore the safer production choice once background work becomes meaningful.

## Internal PostgreSQL mode

When all of the following are true:

- `DB_BACKEND=postgresql`
- `DB_LOCAL=true`
- Supervisor is running

the container starts an internal PostgreSQL 17 cluster through:

- `sv_postgres.sh`
- `sv_pg_init.sh`
- `pg_init.sh`
The helper scripts:
- create the cluster if needed
- listen on port `5432`
- set or reset the DB user password
- create the target database if missing
- then run Django migrations

This is useful for compact environments, but it still means the database lives inside the application container's lifecycle unless you mount persistent storage, as sketched below.
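
A sketch of mounting persistent storage; the host path and image name are placeholders, and `/app/data` is the application data directory referenced later in this document. Verify where the internal cluster actually keeps its data files before relying on this layout:

```bash
# Keep application data outside the container lifecycle.
# Host path and image name are illustrative, not documented values.
docker run -d \
  -v /srv/integramon/data:/app/data \
  integramon:latest
```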

## External PostgreSQL mode

When `DB_LOCAL=false`, the internal database daemon is skipped. Django still uses PostgreSQL if `DB_BACKEND` says so, which means you must also provide:

- `DB_HOST`
- `DB_PORT`
- `DB_NAME`
- `DB_USER`
- `DB_PASSWORD`
For hosted PostgreSQL or Neon-style setups you typically also set:
- `DB_SSLMODE=require`
- optionally `DB_CHANNEL_BINDING=require`
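
A sketch of a complete external configuration; the variable names are the ones listed above, and every value is a placeholder:

```bash
export DB_BACKEND=postgresql
export DB_LOCAL=false

# Connection details for the external server (placeholder values):
export DB_HOST=ep-example-123456.eu-central-1.aws.neon.tech
export DB_PORT=5432
export DB_NAME=integramon
export DB_USER=app_user
export DB_PASSWORD='change-me'

# Typical for hosted / Neon-style setups:
export DB_SSLMODE=require
export DB_CHANNEL_BINDING=require   # optional
```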

## Migrations
Migrations are executed automatically during startup:
- SQLite path: `sv_migrate.sh` runs `manage.py migrate`
- PostgreSQL local path: `app_migrate.sh`
- PostgreSQL remote path: `db_wait_and_migrate.sh`

The runtime waits for `/tmp/migrate.done` before Gunicorn, Celery workers, the controller loop, and Nginx are considered ready.

Operational consequences:
- schema drift is corrected on each restart
- failed migrations can still leave the container partially started, so migration logs matter
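
When a container seems stuck at startup, the marker file gives a quick check. `/tmp/migrate.done` is from this document; the rest is an illustrative sketch:

```bash
# Inside the container: has the migration gate been passed?
if [ -f /tmp/migrate.done ]; then
  echo "migrations finished; services may start"
else
  echo "migrations still running or failed; inspect the migration logs"
fi
```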

## Connection pooling

There is no built-in Django connection pool configuration such as `CONN_MAX_AGE` or `psycopg_pool` in the current settings.
What exists today:
- direct Django connections
- `PGCONNECT_TIMEOUT` via `DB_CONNECT_TIMEOUT` for readiness checks
If you need pooling, introduce it at the infrastructure layer:
- a managed pooler such as Neon's pooler
- PgBouncer
- separate connection management on the database side
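
Because connection details already flow through environment variables, a pooler can often be inserted without code changes. A sketch assuming a PgBouncer instance in front of the database; the host name is illustrative, and 6432 is PgBouncer's conventional listen port:

```bash
# Point the app at the pooler instead of the database itself.
export DB_HOST=pgbouncer.internal
export DB_PORT=6432
```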

## Storage growth behavior

The largest tables in practice are expected to be:

- `cpiMessageLog`
- `cpiMessageLogRuns`
- `cpiCustomHeaderProperties`
- `cpiPayload`
- `cpiMessageAttachment`
- archive tracking and metric snapshot tables over time
Growth drivers are:
- message polling frequency
- payload downloads
- deep artifact sync
- archive retention choices
- metric snapshot retention strategy
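
To see which tables actually dominate in a given installation, a size query is enough (PostgreSQL only). A sketch that assumes `psql` can reach the database; connection options are omitted for brevity:

```bash
# Top tables by total size, including indexes and TOAST.
psql "$DB_NAME" -c "
  SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"
```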

## Cleanup and vacuum behavior
Archive jobs include explicit backend-specific cleanup:
- SQLite: `wal_checkpoint(TRUNCATE)` plus `VACUUM`
- PostgreSQL: `VACUUM FULL ANALYZE`
This improves reclaimed space after archive deletion, but it also means:
- SQLite archive runs can block heavy write workloads
- PostgreSQL `VACUUM FULL` rewrites tables under an exclusive lock, so it is operationally heavier than a plain `VACUUM`
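
For ad-hoc cleanup outside the archive jobs, the same operations can be run manually. A sketch; the SQLite file name is an assumption, and `VACUUM FULL` belongs in a quiet window because of the locking noted above:

```bash
# SQLite: flush the WAL and rebuild the file (file name illustrative).
sqlite3 "$DATA_DIR/db.sqlite3" "PRAGMA wal_checkpoint(TRUNCATE); VACUUM;"

# PostgreSQL: reclaim space and refresh planner statistics.
psql "$DB_NAME" -c "VACUUM FULL ANALYZE;"
```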

## Backup guidance

### SQLite

Recommended approach:

- stop the app, or at least avoid concurrent write peaks
- copy the SQLite file under `DATA_DIR`
- include `/app/data` archives and logs if they matter to your support model
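
A sketch of the copy step; `DATA_DIR` and `/app/data` are from this document, while the database file name and `BACKUP_DIR` are placeholders:

```bash
# Consistent copy via SQLite's built-in backup command.
sqlite3 "$DATA_DIR/db.sqlite3" ".backup '$BACKUP_DIR/db-$(date +%F).sqlite3'"

# Archives and logs, if they are part of your support model:
tar -czf "$BACKUP_DIR/appdata-$(date +%F).tar.gz" /app/data
```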

### PostgreSQL

Recommended approach:

- use normal PostgreSQL backups such as `pg_dump` or storage-level snapshots
- if running external PostgreSQL, use the provider's backup tooling
- include application-level archive directories if they are part of recovery expectations
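
A `pg_dump` sketch that reuses the connection variables defined earlier; `BACKUP_DIR` is a placeholder:

```bash
# Logical backup in pg_dump's custom format (restore with pg_restore).
PGPASSWORD="$DB_PASSWORD" PGSSLMODE="${DB_SSLMODE:-prefer}" pg_dump \
  --host="$DB_HOST" --port="$DB_PORT" --username="$DB_USER" \
  --format=custom \
  --file="$BACKUP_DIR/integramon-$(date +%F).dump" \
  "$DB_NAME"
```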

## Production recommendation
Use PostgreSQL for:
- shared environments
- meaningful background-job throughput
- anything customer-facing
Use SQLite only for:
- local demos
- single-admin pilots
- light test environments