# Alerting Troubleshooting
## Why is my alert missing?
Start with these checks in order:
- Confirm the tenant itself is active.
- Confirm `jobs.alerts.config.enabled` is `true`.
- Confirm the specific alert definition inside `jobs.alerts.config.alerts[]` has `enabled: true`.
- Confirm the upstream runtime source exists and is fresh.
- Confirm the alert loop is no longer waiting on packages.
### Typical root causes
| Symptom | Likely cause | Where to look |
|---|---|---|
| no alerts at all | `jobs.alerts` disabled | tenant config and Worker Runtime |
| message alerts missing | message sync stale or filtered | Message Popups and worker runtime |
| iFlow alerts missing | package or artifact sync stale | Artifacts and worker runtime |
| keystore alerts missing | keystore sync stale | Keystore Entries |
| daily no-message alert missing | weekday or window mismatch | alert definition in `daily_check` |
### Concrete checks
#### Check 1: alert config exists
Expected structure:
- `jobs.alerts.config.enabled`
- `jobs.alerts.config.repeat_interval`
- `jobs.alerts.config.alerts[]`
Typical example values:
- `repeat_interval = 60`
- `type = alert_messages`
- `type = alert_iflows`
- `type = alert_keystore`
- `type = alert_iflow_no_messages_daily`
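Combining the fields above, a minimal validator for the expected structure could look like the following. The field names and alert types come from this page; everything else (function name, error wording) is an illustrative assumption.

```python
# Alert types named on this page; anything else is flagged as unknown.
KNOWN_TYPES = {
    "alert_messages", "alert_iflows", "alert_keystore",
    "alert_iflow_no_messages_daily",
}

def validate_alert_config(config: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means the
    config matches the expected jobs.alerts.config structure."""
    problems = []
    if config.get("enabled") is not True:
        problems.append("config.enabled is not true")
    if not isinstance(config.get("repeat_interval"), int):
        problems.append("config.repeat_interval missing or not an integer")
    alerts = config.get("alerts", [])
    if not alerts:
        problems.append("config.alerts[] is empty")
    for i, alert in enumerate(alerts):
        if alert.get("type") not in KNOWN_TYPES:
            problems.append(f"alerts[{i}]: unknown type {alert.get('type')!r}")
        if alert.get("enabled") is not True:
            problems.append(f"alerts[{i}]: enabled is not true")
    return problems
```

Running this against a tenant's stored config quickly separates "alert never configured" from "alert configured but disabled".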
#### Check 2: upstream data is available
Alerting evaluates stored runtime data, not live CPI responses.
Check these related areas:
- message alerts: Message Popups
- iFlow alerts: Artifacts
- keystore alerts: Keystore Entries
#### Check 3: worker dependency is satisfied
The alert loop explicitly waits for package completion first. If package sync is not yet considered done, alerts can be delayed even when the alert config looks correct.
See Worker Runtime Troubleshooting.
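As a sketch of that ordering (the function names, the polling approach, and both callables are assumptions, not the real worker API):

```python
import time

def alert_loop(is_package_sync_done, evaluate_alerts, poll_seconds: float = 5.0):
    """Evaluate alerts only after package sync has completed.

    Both callables are placeholders for the real worker internals.
    """
    while not is_package_sync_done():
        # Alerts are intentionally delayed here, even with a correct alert config.
        time.sleep(poll_seconds)
    return evaluate_alerts()
```

The point of the sketch is the dependency direction: no amount of alert-config tuning makes alerts appear while the package gate is still closed.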
## Why do I only see acknowledged or outdated alerts?
Observed runtime states include:
- `alerted`
- `acknowledged`
- `outdated`
Important meaning:
- `alerted` means the alert is currently open in the alert lifecycle
- `acknowledged` means a user changed the alert state
- `outdated` means an earlier alert row was superseded by a newer row for the same trigger chain
Some overview queries explicitly exclude `outdated`, so a historical alert can exist in storage but not appear in the main operational overview.
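The effect of that exclusion can be illustrated as follows (the row shape and function name are assumptions):

```python
def overview_rows(rows: list[dict]) -> list[dict]:
    """Operational overview view: hide superseded rows.

    Excluded rows still exist in storage; they are only filtered
    out of the main overview.
    """
    return [r for r in rows if r.get("state") != "outdated"]
```

So when an alert "disappears" from the overview but you can still find it in storage, check its state before assuming data loss.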
## Why is an alert counted as open for more than 48 hours?
Open-too-long filtering uses:
- `origin_time_alerted` if present
- otherwise `time_alerted`
That means a renewed alert chain can still be treated as long-open if it inherits the original open time.
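The fallback described above can be sketched like this. The field names come from this page; the 48-hour threshold matches the question, and the row shape is an assumption.

```python
from datetime import datetime, timedelta

OPEN_TOO_LONG = timedelta(hours=48)

def is_open_too_long(row: dict, now: datetime) -> bool:
    """Use origin_time_alerted if present, otherwise time_alerted.

    A renewed alert chain that inherits origin_time_alerted therefore
    still counts as long-open, even if its own time_alerted is recent.
    """
    opened = row.get("origin_time_alerted") or row["time_alerted"]
    return now - opened > OPEN_TOO_LONG
```

This is why a freshly re-raised alert can immediately show up as "open for more than 48 hours".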
## Why did acknowledgement not solve the problem?
Acknowledgement changes alert state in persistence, but it does not clear the technical condition itself.
If the underlying problem persists:
- the alert may remain logically relevant
- later evaluation cycles may continue the same alert chain rather than closing it
- the next place to inspect is the source runtime data, not the alert row itself
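A minimal sketch of why acknowledgement alone is not a fix (the function names are assumptions, and `condition_still_true` stands in for the real runtime evaluation):

```python
def acknowledge(alert: dict) -> None:
    """Acknowledgement only changes persisted alert state."""
    alert["state"] = "acknowledged"

def next_evaluation(alert: dict, condition_still_true) -> dict:
    """If the technical condition persists, the next evaluation keeps the
    chain alive, inheriting the original open time."""
    if condition_still_true():
        return {"state": "alerted",
                "origin_time_alerted": alert.get("origin_time_alerted")}
    return alert
```

In other words, acknowledging edits the alert row; only fixing the source runtime condition stops new `alerted` rows from appearing.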
## Fast diagnosis paths
- Missing message alert: check Message Popups Troubleshooting
- Missing keystore alert: check Keystore Entries
- Stale alert overview: check Worker Runtime Troubleshooting