# AI and LLM Operating Notes
This page connects AI- and LLM-related administration with the surrounding security and data-scope model.
## Relevant platform entry points

| Route | Intended role | Operational meaning |
|---|---|---|
| `/admin/ai/llm` | company admin | company-level model and AI configuration |
| `/superadmin/ai/llm` | superadmin | platform-wide model and AI operating context |
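
The role split in this table can be made concrete as a guard check. A minimal sketch in TypeScript, assuming hypothetical `Role` and `Session` shapes; whether superadmins also pass company-level checks is an assumption, not documented platform behavior:

```ts
// Illustrative types; not the platform's actual API.
type Role = "company_admin" | "superadmin";

interface Session {
  userId: string;
  companyId: string;
  role: Role;
}

// Route-to-role map mirroring the table above.
const routeRoles: Record<string, Role | undefined> = {
  "/admin/ai/llm": "company_admin",
  "/superadmin/ai/llm": "superadmin",
};

function canAccess(session: Session, route: string): boolean {
  const required = routeRoles[route];
  if (required === undefined) return false; // unknown route: deny by default
  if (required === "company_admin") {
    // Assumption: superadmins may also enter the company-level area.
    return session.role === "company_admin" || session.role === "superadmin";
  }
  return session.role === "superadmin"; // superadmin area is superadmin-only
}
```
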
## Why AI settings are not isolated
AI and LLM configuration depends on more than model connectivity. It is also shaped by the following factors (combined in the sketch after this list):
- company scope
- role and route guards
- restricted resources such as `payloads` and `custom_headers`
- tenant-level sensitivity choices in AI Scenarios
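
Taken together, these factors decide what a model-backed flow may read. A minimal sketch, assuming hypothetical `SensitivitySettings` and `AiContext` shapes and illustrative field names:

```ts
// Hypothetical shapes; field names are illustrative.
interface SensitivitySettings {
  payloadsVisible: boolean;      // restricted resource: payloads
  customHeadersVisible: boolean; // restricted resource: custom_headers
}

interface AiContext {
  companyId: string;     // company scope: never mix tenants
  platformWide: boolean; // reached via /superadmin/ai/llm rather than /admin/ai/llm
  sensitivity: SensitivitySettings;
}

// Which message fields may be handed to a model-backed flow. The base
// fields ("subject", "sender", "timestamp") are assumed, not documented.
function visibleMessageFields(ctx: AiContext): string[] {
  const fields = ["subject", "sender", "timestamp"];
  if (ctx.sensitivity.payloadsVisible) fields.push("payload");
  if (ctx.sensitivity.customHeadersVisible) fields.push("custom_headers");
  return fields;
}
```
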
## Sensitivity and data-exposure matrix
| Topic | Why it matters for AI or LLM use |
|---|---|
| payload visibility | prompts or summaries should not assume readable body content when payloads are restricted |
| custom header masking | masked headers can change the context available to AI-supported flows |
| focused or tag scope | the visible tenant subset may be intentionally narrow |
| company isolation | model-backed workflows must not blur company boundaries |
| admin versus superadmin area | company-level configuration is scoped to one company; superadmin configuration can affect the entire platform |
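
The masking rows above imply that a prompt builder must keep "masked by policy" distinct from "not present". A minimal sketch, using a hypothetical `MASKED` sentinel rather than any real platform constant:

```ts
// MASKED is a hypothetical sentinel for intentionally hidden values.
const MASKED = Symbol("masked");

type HeaderValue = string | typeof MASKED | undefined;

// Render a header line for prompt context without conflating hidden and absent.
function describeHeader(name: string, value: HeaderValue): string {
  if (value === MASKED) {
    return `${name}: [hidden by masking policy]`; // intentionally hidden
  }
  if (value === undefined) {
    return `${name}: (not present)`; // genuinely absent
  }
  return `${name}: ${value}`;
}
```
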
## Safe operating questions
- Is the feature running in company scope or platform scope?
- Does the user actually have access to the underlying message content?
- Are masked values being interpreted as absent data instead of intentionally hidden data?
- Do tenant-level sensitivity settings align with the intended AI use case?
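
These questions can be encoded as a preflight check that runs before an AI-backed flow starts. A sketch with hypothetical flags, not the platform's actual API:

```ts
// Answers to the four questions above; all names are illustrative.
interface AiPreflight {
  platformScope: boolean;        // platform scope rather than company scope?
  userCanReadContent: boolean;   // access to the underlying message content?
  masksTreatedAsHidden: boolean; // masked values kept distinct from absent data?
  sensitivityAligned: boolean;   // tenant sensitivity settings fit the use case?
}

// Returns the reasons an AI-backed flow should not proceed as configured.
function aiPreflightProblems(p: AiPreflight): string[] {
  const problems: string[] = [];
  if (p.platformScope) {
    // Not automatically wrong, but worth an explicit confirmation.
    problems.push("running in platform scope: confirm this is intended");
  }
  if (!p.userCanReadContent) {
    problems.push("user lacks access to the underlying message content");
  }
  if (!p.masksTreatedAsHidden) {
    problems.push("masked values would be misread as absent data");
  }
  if (!p.sensitivityAligned) {
    problems.push("tenant sensitivity settings do not match the AI use case");
  }
  return problems;
}
```

An empty result means the flow may proceed; anything else is a reason to stop and re-check scope or settings.
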
## Relationship to message analysis
AI-backed help around messages and payloads is only as reliable as the visible source context. If payloads or headers are hidden, the limitation is a security feature, not necessarily a data-quality problem.
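
In practice, that means AI output should surface the restriction rather than silently summarize less. A short sketch with illustrative field names and wording:

```ts
// Caveats to attach to an AI-generated message summary when parts of the
// source context are hidden by policy; wording is illustrative.
interface AnalysisInput {
  payloadVisible: boolean;
  headersMasked: boolean;
}

function analysisCaveats(input: AnalysisInput): string[] {
  const caveats: string[] = [];
  if (!input.payloadVisible) {
    caveats.push("Payload content is restricted for this tenant; the summary uses metadata only.");
  }
  if (input.headersMasked) {
    caveats.push("Some headers are masked by policy and were not available to the analysis.");
  }
  return caveats;
}
```
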