
Alert Inbox and Triage

The Alert Inbox collects every fired alert across the fleet in one place and gives operators the tools to triage, annotate, escalate to investigations, and trigger AI-assisted analysis without leaving the view.


Alert status lifecycle

Alerts move through the following statuses:

| Status | Meaning |
| --- | --- |
| new | Alert has fired and has not been reviewed |
| acknowledged | Operator has seen the alert |
| in_progress | Actively being investigated |
| resolved_true_positive | Confirmed real incident, resolved |
| resolved_false_positive | Confirmed false positive, resolved |
| resolved | Resolved without a specific classification |
| false_positive | Marked as a false positive without the full resolution flow |
| suppressed | Suppressed by a suppression rule |

Status updates require analyst role or higher.
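
The status values above can be checked with a small helper. This is an illustrative sketch — the status names come from the table, but the helper itself and the "closed" grouping are assumptions, not product code:

```python
# Alert statuses, as listed in the lifecycle table above.
ALERT_STATUSES = {
    "new", "acknowledged", "in_progress",
    "resolved_true_positive", "resolved_false_positive",
    "resolved", "false_positive", "suppressed",
}

# Assumption for illustration: statuses that end active triage.
CLOSED_STATUSES = {
    "resolved_true_positive", "resolved_false_positive",
    "resolved", "false_positive", "suppressed",
}

def is_valid_status(status: str) -> bool:
    """Return True if the status is one of the documented values."""
    return status in ALERT_STATUSES

def is_closed(status: str) -> bool:
    """Return True if the status ends active triage (illustrative grouping)."""
    return status in CLOSED_STATUSES
```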


Reading the alert list

Filters available:

| Parameter | Description |
| --- | --- |
| status | One of the statuses above |
| severity | critical, high, medium, or low |
| sensor_id | Filter to a specific sensor |
| source | Alert source (rule name or type) |
| page | Page number (1-indexed) |
| page_size | Results per page (1 to 200, default 50) |

In the Console, navigate to Alerts and use the Status and Severity filters in the filter bar to narrow the list. The badge in the navigation menu shows the current count of new-status alerts.
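
When driving the same filters from a script, the documented limits can be enforced up front. A sketch — the function and dict shape are hypothetical; only the parameter names and ranges come from the filter table:

```python
SEVERITIES = {"critical", "high", "medium", "low"}

def alert_list_params(status=None, severity=None, sensor_id=None,
                      source=None, page=1, page_size=50):
    """Build a filter-parameter dict for the alert list (illustrative).

    Ranges mirror the documented limits: page is 1-indexed,
    page_size is 1 to 200 with a default of 50.
    """
    if page < 1:
        raise ValueError("page is 1-indexed")
    if not 1 <= page_size <= 200:
        raise ValueError("page_size must be between 1 and 200")
    if severity is not None and severity not in SEVERITIES:
        raise ValueError(f"unknown severity: {severity}")
    params = {"page": page, "page_size": page_size}
    for key, value in (("status", status), ("severity", severity),
                       ("sensor_id", sensor_id), ("source", source)):
        if value is not None:
            params[key] = value
    return params
```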


Updating alert status

Requires analyst role or higher.

In the Console, open the alert in Alerts and click Update Status. Select the new status from the dropdown, optionally set an assignee, and choose a resolution classification if closing the alert.

resolution values: true_positive, false_positive, benign

The assignee field records who is responsible for the alert. The resolution field is optional; set it when closing the alert with a specific classification.
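
The rules above can be sketched as a payload builder. This helper is hypothetical; only the resolution values come from the docs:

```python
RESOLUTIONS = {"true_positive", "false_positive", "benign"}

def status_update(status, assignee=None, resolution=None):
    """Build a status-update payload (illustrative shape).

    assignee and resolution are optional, matching the docs above;
    resolution must be one of the documented classifications.
    """
    if resolution is not None and resolution not in RESOLUTIONS:
        raise ValueError(f"resolution must be one of {sorted(RESOLUTIONS)}")
    payload = {"status": status}
    if assignee is not None:
        payload["assignee"] = assignee
    if resolution is not None:
        payload["resolution"] = resolution
    return payload
```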


Adding triage notes

Notes are timestamped and attributed to the operator who added them. Maximum 2,000 characters per note.

Requires analyst role or higher.

In the Console, open the alert in Alerts. The Notes panel is at the bottom of the detail view. Click Add Note, type the note text, and submit. All notes for the alert are listed in the same panel in chronological order.
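
The note constraints can be sketched as a constructor. The 2,000-character cap comes from the docs; the function name and dict shape are assumptions:

```python
from datetime import datetime, timezone

MAX_NOTE_CHARS = 2000  # documented per-note limit

def make_note(author, text):
    """Attach a timestamp and attribution to a triage note (illustrative)."""
    if not text.strip():
        raise ValueError("note text is empty")
    if len(text) > MAX_NOTE_CHARS:
        raise ValueError(f"note exceeds {MAX_NOTE_CHARS} characters")
    return {
        "author": author,
        "text": text,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
```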


AI triage

When the LLM provider is configured, the Console automatically runs L1 and L2 triage on new alerts. L1 produces a fast severity verdict, a false-positive likelihood flag, and a flag indicating whether L2 analysis is recommended. L2 produces a narrative, recommended actions, and MITRE ATT&CK technique mapping.

To re-run AI triage on an existing alert, open the alert in Alerts and click Re-analyze (requires operator role). The button is disabled if no LLM provider is configured. See AI Assistant for LLM configuration.
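
The two-tier flow can be sketched as follows. The `l1_analyze` and `l2_analyze` callables stand in for the LLM calls and are hypothetical; only the L1/L2 outputs and the provider gate come from the description above:

```python
def run_ai_triage(alert, l1_analyze, l2_analyze, provider_configured=True):
    """Sketch of the two-tier triage flow: L1 always runs, L2 only
    when L1 recommends it. Returns None when no LLM provider is
    configured (mirroring the disabled Re-analyze button)."""
    if not provider_configured:
        return None
    # L1: fast severity verdict, false-positive likelihood, L2 recommendation
    l1 = l1_analyze(alert)
    result = {"l1": l1, "l2": None}
    if l1.get("recommend_l2"):
        # L2: narrative, recommended actions, MITRE ATT&CK mapping
        result["l2"] = l2_analyze(alert)
    return result
```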


Export

In the Console, navigate to Alerts and apply the desired status and severity filters. Click Export CSV to download up to 10,000 matching alerts as a CSV file, suitable for SIEM import or offline analysis.
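
The export cap can be sketched with the standard csv module. The 10,000-row limit is documented; the field names are placeholders:

```python
import csv
import io

EXPORT_CAP = 10_000  # documented maximum rows per export

def export_alerts_csv(alerts):
    """Serialize up to 10,000 alerts to CSV text (field names assumed)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "status", "severity", "source"])
    writer.writeheader()
    for alert in alerts[:EXPORT_CAP]:
        writer.writerow(alert)
    return buf.getvalue()
```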


Alert rules

Alert rules define when a fired alert is created. Each rule targets a specific event kind and fires when matching events arrive.

Rule types

| Type | Behavior |
| --- | --- |
| standard | Fires once per matching event, subject to the rule's suppression window |
| rate | Fires when the count of matching events exceeds match_count_threshold within rate_window_secs |

Rate rule constraints

  • match_count_threshold: minimum 1
  • rate_window_secs: 30 to 7200 seconds
  • rate_group_by: sensor_id (count per sensor) or process_executable (count per binary)
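
The constraints above can be checked in one place. A sketch; the function is hypothetical, the ranges and field names come from the list:

```python
def validate_rate_rule(match_count_threshold, rate_window_secs, rate_group_by):
    """Validate the documented rate-rule constraints (illustrative)."""
    if match_count_threshold < 1:
        raise ValueError("match_count_threshold must be at least 1")
    if not 30 <= rate_window_secs <= 7200:
        raise ValueError("rate_window_secs must be between 30 and 7200")
    if rate_group_by not in {"sensor_id", "process_executable"}:
        raise ValueError("rate_group_by must be sensor_id or process_executable")
```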

Suppression window

Each rule has a suppression_window_secs field (default 300 seconds) that prevents the same rule from re-firing on the same sensor more frequently than the window allows.

This is separate from the global notification suppression window (default 14,400 seconds, configurable in Console Settings), which controls how often the Console sends email/webhook notifications for the same alert condition.
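
The per-rule window amounts to a simple elapsed-time check per sensor. A sketch of the described behavior (timestamps here are plain epoch seconds; the function name is hypothetical):

```python
def should_fire(last_fired_at, now, suppression_window_secs=300):
    """Per-rule, per-sensor suppression check (illustrative).

    Returns True when the rule may fire again: it has never fired on
    this sensor, or the suppression window has fully elapsed.
    """
    if last_fired_at is None:
        return True
    return (now - last_fired_at) >= suppression_window_secs
```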

Auto-correlation

When auto_correlate: true is set on a rule, the Console automatically links new fired alerts to an open investigation for the same rule on the same sensor within correlation_window_secs (60 to 86,400 seconds, default 600). If no open investigation exists within the window, a new one is created.
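
The link-or-create decision can be sketched like this. The investigation dict shape and function name are assumptions; the window semantics come from the paragraph above:

```python
def correlate(open_investigations, rule_id, sensor_id, now,
              correlation_window_secs=600):
    """Find an open investigation for the same rule and sensor within
    the correlation window (illustrative). Returns the investigation to
    link to, or None when the caller should create a new one."""
    if not 60 <= correlation_window_secs <= 86_400:
        raise ValueError("correlation_window_secs must be between 60 and 86400")
    for inv in open_investigations:
        if (inv["rule_id"] == rule_id
                and inv["sensor_id"] == sensor_id
                and now - inv["opened_at"] <= correlation_window_secs):
            return inv
    return None
```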

Creating an alert rule

Creating an alert rule requires operator role and at least one configured webhook destination.

In the Console, navigate to Policies > Alert Rules and click New Rule. Fill in the rule name, event kind, severity, rule type, scope (sensor, group, namespace, cluster, or workload), MITRE technique and tactic, suppression window, auto-correlation settings, and webhook destinations, then save.

Scope options (only one applies at a time):

| Field | Scope |
| --- | --- |
| sensor_id | Specific sensor |
| group_id | Sensor group |
| k8s_namespace | All sensors in a Kubernetes namespace |
| k8s_cluster_name | All sensors in a cluster |
| workload_name + workload_type | Specific workload |
| (none) | All sensors |
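
The one-scope-at-a-time rule can be sketched as a resolver. A hypothetical helper; the field names and pairing rule come from the table:

```python
SCOPE_FIELDS = ("sensor_id", "group_id", "k8s_namespace",
                "k8s_cluster_name", "workload_name")

def rule_scope(rule):
    """Determine which single scope applies to a rule (illustrative).

    Only one scope field may be set; workload_name must pair with
    workload_type; no scope field means the rule covers all sensors.
    """
    set_fields = [f for f in SCOPE_FIELDS if rule.get(f)]
    if len(set_fields) > 1:
        raise ValueError("only one scope may be set at a time")
    if not set_fields:
        return "all sensors"
    field = set_fields[0]
    if field == "workload_name" and not rule.get("workload_type"):
        raise ValueError("workload_name requires workload_type")
    return field
```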

Alert rule preview

Before saving a new rule, click Preview in the rule editor. The Console shows how many events in the last 24 hours would have triggered the rule, along with a calibration note if the threshold was not exceeded in that window.

Enabling and disabling rules

In the Console, navigate to Policies > Alert Rules. Each rule row has an Enabled toggle. Click it to enable or disable the rule without deleting it.


Escalating to an investigation

From the alert detail view, operators can create or link an investigation. Investigations allow multi-alert case management with notes, evidence, and ownership.

See Investigations for the full workflow.


Suppression

To silence a specific alert pattern without deleting the rule, create a suppression rule from the alert detail or from Policies > Suppression.

See Suppression Rules for details.


Recommended triage workflow

  1. Filter the inbox to status=new and severity=critical. Address these first.
  2. Review the event kind, sensor, process, and whether the alert belongs to a broader pattern (use Attack Chains view).
  3. Set status to in_progress and assign to the responsible operator.
  4. If the activity is expected, mark resolved_false_positive and add a note explaining the context. Consider adding a suppression rule if the pattern will recur.
  5. If the activity is confirmed, mark resolved_true_positive, link to an investigation, and initiate containment if needed.

::: note
Marking an alert as false_positive or creating a suppression does not fix the underlying behavior. Review the policy pack, custom rules, or anomaly baseline that generated the alert if the same pattern recurs.
:::


Further reading

Released under the Telovix Commercial License.