Alert Inbox and Triage
The Alert Inbox collects every fired alert across the fleet in one place and gives operators the tools to triage, annotate, escalate to investigations, and trigger AI-assisted analysis without leaving the view.
Alert status lifecycle
Alerts move through the following statuses:
| Status | Meaning |
|---|---|
| new | Alert has fired and has not been reviewed |
| acknowledged | Operator has seen the alert |
| in_progress | Actively being investigated |
| resolved_true_positive | Confirmed real incident, resolved |
| resolved_false_positive | Confirmed false positive, resolved |
| resolved | Resolved without a specific classification |
| false_positive | Marked as a false positive without full resolution flow |
| suppressed | Suppressed by a suppression rule |
Status updates require analyst role or higher.
Reading the alert list
Filters available:
| Parameter | Description |
|---|---|
| status | One of the statuses above |
| severity | critical, high, medium, or low |
| sensor_id | Filter to a specific sensor |
| source | Alert source (rule name or type) |
| page | Page number (1-indexed) |
| page_size | Results per page (1 to 200, default 50) |
In the Console, navigate to Alerts and use the Status and Severity filters in the filter bar to narrow the list. The badge in the navigation menu shows the current count of new-status alerts.
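The filter parameters above can be sketched as a small client-side validation step. This is an illustrative example only, assuming the documented bounds; the function name and query-dict shape are assumptions, not part of the Console API.

```python
# Hypothetical helper that normalizes alert-list filters per the table
# above. Bounds (page 1-indexed, page_size 1..200, default 50) come from
# the documentation; everything else here is illustrative.

VALID_STATUSES = {
    "new", "acknowledged", "in_progress", "resolved_true_positive",
    "resolved_false_positive", "resolved", "false_positive", "suppressed",
}
VALID_SEVERITIES = {"critical", "high", "medium", "low"}

def build_alert_filters(status=None, severity=None, sensor_id=None,
                        source=None, page=1, page_size=50):
    """Return a clean filter dict, enforcing the documented bounds."""
    if status is not None and status not in VALID_STATUSES:
        raise ValueError(f"unknown status: {status}")
    if severity is not None and severity not in VALID_SEVERITIES:
        raise ValueError(f"unknown severity: {severity}")
    if page < 1:
        raise ValueError("page is 1-indexed")
    if not 1 <= page_size <= 200:
        raise ValueError("page_size must be between 1 and 200")
    params = {"page": page, "page_size": page_size}
    for key, value in (("status", status), ("severity", severity),
                       ("sensor_id", sensor_id), ("source", source)):
        if value is not None:
            params[key] = value
    return params
```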
Updating alert status
Requires analyst role.
In the Console, open the alert in Alerts and click Update Status. Select the new status from the dropdown, optionally set an assignee, and choose a resolution classification if closing the alert.
resolution values: true_positive, false_positive, benign
The assignee field records who is responsible for the alert. The resolution field is optional; use it when closing the alert with a specific classification.
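The relationship between status, assignee, and resolution can be sketched as follows. This is a minimal illustration of the rules stated above (resolution is optional and only meaningful when closing); the function and data shapes are assumptions, not the Console's actual implementation.

```python
# Illustrative status-update logic: a resolution classification is only
# accepted when the new status closes the alert. Names are assumptions.

STATUSES = {
    "new", "acknowledged", "in_progress", "resolved_true_positive",
    "resolved_false_positive", "resolved", "false_positive", "suppressed",
}
RESOLUTIONS = {"true_positive", "false_positive", "benign"}
CLOSING = {"resolved", "resolved_true_positive",
           "resolved_false_positive", "false_positive"}

def update_alert(alert, status, assignee=None, resolution=None):
    if status not in STATUSES:
        raise ValueError(f"unknown status: {status}")
    if resolution is not None:
        if resolution not in RESOLUTIONS:
            raise ValueError(f"unknown resolution: {resolution}")
        if status not in CLOSING:
            raise ValueError("resolution only applies when closing the alert")
    alert["status"] = status
    if assignee is not None:
        alert["assignee"] = assignee
    if resolution is not None:
        alert["resolution"] = resolution
    return alert
```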
Adding triage notes
Notes are timestamped and attributed to the operator who added them. Maximum 2,000 characters per note.
Requires analyst role.
In the Console, open the alert in Alerts. The Notes panel is at the bottom of the detail view. Click Add Note, type the note text, and submit. All notes for the alert are listed in the same panel in chronological order.
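The note behavior described above (timestamped, attributed, 2,000-character cap, chronological order) can be sketched like this. Field names are illustrative assumptions.

```python
# Minimal sketch of triage-note handling: each note records the operator
# and a UTC timestamp, and text is capped at 2,000 characters.
from datetime import datetime, timezone

MAX_NOTE_LEN = 2000

def add_note(alert_notes, operator, text):
    if not text:
        raise ValueError("note text is required")
    if len(text) > MAX_NOTE_LEN:
        raise ValueError("note exceeds 2,000 characters")
    note = {
        "operator": operator,                                # attribution
        "text": text,
        "created_at": datetime.now(timezone.utc).isoformat() # timestamp
    }
    alert_notes.append(note)  # appended, so the list stays chronological
    return note
```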
AI triage
When the LLM provider is configured, the Console automatically runs Level 1 (L1) and Level 2 (L2) triage on new alerts. L1 produces a fast severity verdict, a false-positive likelihood flag, and a flag indicating whether L2 analysis is recommended. L2 produces a narrative, recommended actions, and MITRE ATT&CK technique mapping.
To re-run AI triage on an existing alert, open the alert in Alerts and click Re-analyze (requires operator role). The button is disabled if no LLM provider is configured. See AI Assistant for LLM configuration.
Export
In the Console, navigate to Alerts and apply the desired status and severity filters. Click Export CSV to download up to 10,000 matching alerts as a CSV file, suitable for SIEM import or offline analysis.
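The export cap can be illustrated with a short sketch: filtered alerts are serialized to CSV and truncated at 10,000 rows. The column names here are assumptions for illustration, not the Console's actual export schema.

```python
# Illustrative CSV export with the documented 10,000-row cap.
import csv
import io

EXPORT_CAP = 10_000

def export_alerts_csv(alerts):
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["id", "status", "severity", "source"])
    writer.writeheader()
    for alert in alerts[:EXPORT_CAP]:  # at most 10,000 matching alerts
        writer.writerow(alert)
    return buf.getvalue()
```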
Alert rules
Alert rules define when a fired alert is created. Each rule targets a specific event kind and fires when matching events arrive.
Rule types
| Type | Behavior |
|---|---|
| standard | Fires once per matching event, subject to the rule's suppression window |
| rate | Fires when the count of matching events exceeds match_count_threshold within rate_window_secs |
Rate rule constraints
- `match_count_threshold`: minimum 1
- `rate_window_secs`: 30 to 7200 seconds
- `rate_group_by`: `sensor_id` (count per sensor) or `process_executable` (count per binary)
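The rate-rule semantics can be sketched as a window count per group. This is an illustrative model of the behavior described above, assuming epoch-second timestamps; the real evaluation happens server-side and its implementation is not documented here.

```python
# Sketch of rate-rule evaluation: count matching events inside the
# sliding window, grouped by the configured key, and fire for any group
# whose count exceeds match_count_threshold.

def rate_rule_fires(events, now, threshold, window_secs,
                    group_by="sensor_id"):
    """events: dicts with 'ts' (epoch secs) and the group_by key.
    Returns the set of group values whose count exceeds the threshold."""
    assert threshold >= 1                 # documented minimum
    assert 30 <= window_secs <= 7200      # documented window bounds
    counts = {}
    for ev in events:
        if now - ev["ts"] <= window_secs:  # inside the window
            key = ev[group_by]
            counts[key] = counts.get(key, 0) + 1
    return {key for key, n in counts.items() if n > threshold}
```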
Suppression window
Each rule has a suppression_window_secs field (default 300 seconds) that prevents the same rule from re-firing on the same sensor more frequently than the window allows.
This is separate from the global notification suppression window (default 14,400 seconds, configurable in Console Settings), which controls how often the Console sends email/webhook notifications for the same alert condition.
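The per-rule suppression check can be sketched as follows. The in-memory dict here stands in for whatever state the Console actually keeps; only the windowing behavior comes from the documentation.

```python
# Sketch of the per-rule suppression window: the same rule may not
# re-fire on the same sensor until suppression_window_secs have elapsed.

def should_fire(last_fired, rule_id, sensor_id, now, window_secs=300):
    """last_fired maps (rule_id, sensor_id) -> epoch secs of last fire."""
    key = (rule_id, sensor_id)
    prev = last_fired.get(key)
    if prev is not None and now - prev < window_secs:
        return False  # still inside the suppression window
    last_fired[key] = now  # record this fire
    return True
```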
Auto-correlation
When auto_correlate: true is set on a rule, the Console automatically links new fired alerts to an open investigation for the same rule on the same sensor within correlation_window_secs (60 to 86,400 seconds, default 600). If no open investigation exists within the window, a new one is created.
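The correlation behavior described above can be sketched like this: link to an open investigation for the same rule and sensor inside the window, otherwise open a new one. The data shapes are assumptions for illustration.

```python
# Illustrative auto-correlation: attach a new fired alert to a matching
# open investigation within correlation_window_secs, else create one.

def correlate(investigations, alert, now, window_secs=600):
    assert 60 <= window_secs <= 86_400  # documented bounds
    for inv in investigations:
        if (inv["status"] == "open"
                and inv["rule_id"] == alert["rule_id"]
                and inv["sensor_id"] == alert["sensor_id"]
                and now - inv["opened_at"] <= window_secs):
            inv["alert_ids"].append(alert["id"])  # link to existing case
            return inv
    inv = {"status": "open", "rule_id": alert["rule_id"],
           "sensor_id": alert["sensor_id"], "opened_at": now,
           "alert_ids": [alert["id"]]}  # no match: open a new case
    investigations.append(inv)
    return inv
```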
Creating an alert rule
Creating an alert rule requires operator role and at least one configured webhook destination.
In the Console, navigate to Policies > Alert Rules and click New Rule. Fill in the rule name, event kind, severity, rule type, scope (sensor, group, namespace, cluster, or workload), MITRE technique and tactic, suppression window, auto-correlation settings, and webhook destinations, then save.
Scope options (only one applies at a time):
| Field | Scope |
|---|---|
| sensor_id | Specific sensor |
| group_id | Sensor group |
| k8s_namespace | All sensors in a Kubernetes namespace |
| k8s_cluster_name | All sensors in a cluster |
| workload_name + workload_type | Specific workload |
| (none) | All sensors |
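The scope table can be read as a match function: exactly one scope field applies per rule, and an unscoped rule matches every sensor. This sketch is an assumption about how matching might work, using the field names from the table; sensor attribute names are illustrative.

```python
# Illustrative scope resolution for an alert rule against one sensor.

SCOPE_FIELDS = ("sensor_id", "group_id", "k8s_namespace",
                "k8s_cluster_name")

def rule_matches_sensor(rule, sensor):
    # Single-field scopes: the first one set decides the match.
    for field in SCOPE_FIELDS:
        if rule.get(field) is not None:
            return rule[field] == sensor.get(field)
    # Workload scope needs both name and type to agree.
    if rule.get("workload_name") is not None:
        return (rule["workload_name"] == sensor.get("workload_name")
                and rule.get("workload_type") == sensor.get("workload_type"))
    return True  # no scope field set: rule applies to all sensors
```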
Alert rule preview
Before saving a new rule, click Preview in the rule editor. The Console shows how many events in the last 24 hours would have triggered the rule, along with a calibration note if the threshold was not exceeded in that window.
Enabling and disabling rules
In the Console, navigate to Policies > Alert Rules. Each rule row has an Enabled toggle. Click it to enable or disable the rule without deleting it.
Escalating to an investigation
From the alert detail view, operators can create or link an investigation. Investigations allow multi-alert case management with notes, evidence, and ownership.
See Investigations for the full workflow.
Suppression
To silence a specific alert pattern without deleting the rule, create a suppression rule from the alert detail or from Policies > Suppression.
See Suppression Rules for details.
Recommended triage workflow
- Filter the inbox to `status=new` and `severity=critical`. Address these first.
- Review the event kind, sensor, process, and whether the alert belongs to a broader pattern (use the Attack Chains view).
- Set status to `in_progress` and assign to the responsible operator.
- If the activity is expected, mark `resolved_false_positive` and add a note explaining the context. Consider adding a suppression rule if the pattern will recur.
- If the activity is confirmed, mark `resolved_true_positive`, link to an investigation, and initiate containment if needed.
::: note
Marking an alert as false_positive or creating a suppression does not fix the underlying behavior. Review the policy pack, custom rules, or anomaly baseline that generated the alert if the same pattern recurs.
:::