Behavioral Analytics and Anomalies
The Telovix Console builds per-binary process baselines from 14 days of event history and scores each incoming event against its binary's baseline. A score above the threshold produces an anomaly record that operators can review, suppress, or escalate to an investigation.
This layer is most useful for detecting novel behavior from established processes: a known binary contacting a new destination, an NF spawning an unexpected child, or a process that never runs at night suddenly running at 2am.
How baselines are built
The Telovix Console queries the last 14 days of events from ClickHouse and builds a profile per binary path per sensor covering:
| Profile dimension | What it tracks |
|---|---|
| `spawn_profile` | Which child processes this binary has spawned (parent → child pair counts) |
| `net_profile` | Which network destinations this binary has connected to (host:port counts) |
| `file_profile` | Which file path prefixes this binary has accessed (prefix counts) |
| `args_profile` | First 80 characters of command-line arguments (argument pattern counts) |
| `inbound_profile` | Which local ports this binary has accepted inbound connections on |
| `write_profile` | File write path prefixes |
| `time_profile` | 24-bucket UTC hourly histogram of when this binary runs |
Baselines are rebuilt automatically every 2 hours. To trigger a manual rebuild, navigate to Behavioral Analytics in the Console and click Rebuild Baselines Now (requires operator role).
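The profile structure above can be sketched in Python. This is illustrative only: the event field names (`child`, `dest`, `path`, `args`, `hour`) are assumptions, not the Console's actual event schema.

```python
from collections import Counter

def build_baseline(events):
    """Aggregate a list of event dicts into per-dimension profiles.

    Sketch of the documented profile shape; field names are assumed,
    not taken from the Console's real schema.
    """
    profile = {
        "spawn_profile": Counter(),   # child binary -> count
        "net_profile": Counter(),     # "host:port" -> count
        "file_profile": Counter(),    # path prefix -> count
        "args_profile": Counter(),    # first 80 chars of args -> count
        "time_profile": [0] * 24,     # UTC hourly histogram
    }
    for ev in events:
        if "child" in ev:
            profile["spawn_profile"][ev["child"]] += 1
        if "dest" in ev:
            profile["net_profile"][ev["dest"]] += 1
        if "path" in ev:
            # First two path components, e.g. "/var/log/syslog" -> "/var/log"
            profile["file_profile"]["/".join(ev["path"].split("/")[:3])] += 1
        if "args" in ev:
            profile["args_profile"][ev["args"][:80]] += 1
        profile["time_profile"][ev["hour"]] += 1
    return profile
```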
Learning mode
A binary enters learning mode when its baseline has fewer than 14 days of data and fewer than 100 events. During learning mode, anomaly scores are computed and stored but alerts are suppressed so operators are not flooded with noise from newly enrolled sensors.
Learning mode exits when either condition is met:
- 14 days of event history have accumulated for the binary on that sensor
- The event count reaches 100 (early graduation for high-traffic binaries)
The minimum event count to produce any score is 10. Binaries with fewer than 10 events return a score of 0 regardless of what they do.
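The two gates above reduce to a pair of simple predicates. A minimal sketch of the documented rules:

```python
def in_learning_mode(days_of_history: float, event_count: int) -> bool:
    # A binary stays in learning mode only while BOTH conditions hold;
    # it graduates at 14 days of history OR 100 events, whichever first.
    return days_of_history < 14 and event_count < 100

def can_score(event_count: int) -> bool:
    # Binaries with fewer than 10 baseline events always score 0.
    return event_count >= 10
```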
Scoring formula
Each event is scored across four independent dimensions, then combined:
```
final_score  = max_signal × 0.65 + weighted_avg × 0.35
max_signal   = max(spawn_score, net_score, file_score, args_score)
weighted_avg = spawn × 0.40 + net × 0.25 + file × 0.15 + args × 0.20
```

Scores range from 0 to 100. The scorer requires a minimum baseline of 10 events before producing any non-zero score.
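Expressed in Python, the combination step is a direct transcription of the formula:

```python
def final_score(spawn: float, net: float, file: float, args: float) -> float:
    """Combine the four per-dimension signal scores into one anomaly score."""
    max_signal = max(spawn, net, file, args)
    weighted_avg = spawn * 0.40 + net * 0.25 + file * 0.15 + args * 0.20
    return max_signal * 0.65 + weighted_avg * 0.35
```

Because the max term carries 65% of the weight, a single strong dimension (for example a "never seen" network destination at 85) dominates the result even when the other three dimensions score 0.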
Per-dimension signal scores
Each dimension returns one of four values:
| Signal score | Condition |
|---|---|
| 0 | Normal: pattern is well-represented in the baseline |
| 35 | Rare: pattern appears at < 2% frequency in the baseline (requires ≥ 200 baseline events) |
| 60 | Very rare: pattern appears 2 times or fewer in the baseline (requires ≥ 50 baseline events) |
| 85 | Never seen: pattern has not appeared in the baseline (requires ≥ 10 baseline events) |
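The tiering above can be sketched as a single function. The thresholds come straight from the table; the function itself is an illustration, not the Console's implementation.

```python
def signal_score(pattern_count: int, baseline_total: int) -> int:
    """Map a pattern's baseline frequency to a 0/35/60/85 signal score."""
    if baseline_total < 10:
        return 0                      # baseline too small to score at all
    if pattern_count == 0:
        return 85                     # never seen (requires >= 10 events)
    if baseline_total >= 50 and pattern_count <= 2:
        return 60                     # very rare (requires >= 50 events)
    if baseline_total >= 200 and pattern_count / baseline_total < 0.02:
        return 35                     # rare, < 2% (requires >= 200 events)
    return 0                          # normal
```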
Spawn scoring
Compares the (parent, child) binary pair against the parent's spawn_profile. Returns the signal score for how often the parent has previously spawned this child.
Network scoring
Compares the (binary, host:port) pair against the binary's net_profile. The following connections always return score 0 regardless of baseline:
- Loopback addresses (`127.x`, `::1`, `localhost`)
- Connections from known Kubernetes infrastructure binaries (`etcd`, `kubelet`, `containerd`, CoreDNS, Cilium, etc.) to private RFC 1918 IP addresses
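A sketch of the exemption check. The binary names in `K8S_BINARIES` are illustrative placeholders for the Console's allowlist, and `is_private` is a slight over-approximation of RFC 1918 (it also covers link-local and a few other reserved ranges).

```python
import ipaddress

# Assumed allowlist names; the Console's actual list may differ.
K8S_BINARIES = {"etcd", "kubelet", "containerd", "coredns", "cilium-agent"}

def is_exempt(binary: str, host: str) -> bool:
    """True when a connection is never scored, per the two rules above."""
    if host == "localhost" or host == "::1" or host.startswith("127."):
        return True
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # hostnames other than localhost are scored normally
    return binary in K8S_BINARIES and addr.is_private
```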
File scoring
Compares the accessed path prefix (first two path components) against the binary's file_profile.
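The prefix normalization described above (keep only the first two path components) might look like:

```python
def path_prefix(path: str) -> str:
    # "/var/log/syslog" -> "/var/log"; shorter paths pass through unchanged.
    return "/".join(path.split("/")[:3])
```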
Args scoring
Checks the command-line arguments for patterns associated with reverse shells, download-and-execute, and obfuscation. Returns a tiered score based on confidence:
| Tier | Score | Examples |
|---|---|---|
| Tier 1: Definitive reverse shell | 95 | /dev/tcp/ bash socket redirection, pty.spawn, os.dup2, socat EXEC:, Java ScriptEngineManager |
| Tier 2: Strong indicators | 90 | nc -e /bin/sh, busybox nc, mkfifo /tmp/, Perl IO::Socket, openssl s_client -connect |
| Tier 3: Download-and-execute | 88 | Piping downloaded content into bash/sh/python (a `\|` into an interpreter) |
| Tier 4: Suspicious shell patterns | 80 | Interactive shell flags (-i), Python import socket,subprocess, PHP fsockopen, socat TCP: |
| Tier 5: Obfuscation indicators | 70 | base64 -d in pipeline, Python/Perl one-liners via -c/-e |
| Scripting + raw IPv4 | 65 | Python/PHP/Perl/Ruby/Node/Java invoked with a bare IPv4 address in arguments |
`nc -z` (port scan / health probe mode without `-e`) always returns args score 0.
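The tiered matching can be sketched as an ordered list of regexes checked highest-tier first. The patterns below are a small illustrative subset, not the Console's full signature set, and the `nc -z` check is simplified.

```python
import re

# (pattern, score) pairs in tier order; illustrative subset only.
ARG_TIERS = [
    (re.compile(r"/dev/tcp/|pty\.spawn|os\.dup2"), 95),     # tier 1
    (re.compile(r"\bnc\s+-e\s|mkfifo\s+/tmp/"), 90),        # tier 2
    (re.compile(r"\|\s*(bash|sh|python)\b"), 88),           # tier 3
    (re.compile(r"import\s+socket\s*,\s*subprocess"), 80),  # tier 4
    (re.compile(r"base64\s+-d"), 70),                       # tier 5
]

def args_score(args: str) -> int:
    """Return the score of the highest matching tier, or 0."""
    if re.search(r"\bnc\s+-z\b", args):
        return 0  # nc -z port-scan / health-probe mode is always benign
    for pattern, score in ARG_TIERS:
        if pattern.search(args):
            return score
    return 0
```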
Temporal scoring
The time profile is a 24-bucket UTC hourly histogram. If an event fires in an hour where the baseline shows zero activity (and the binary has been active in at least 3 other hours with ≥ 50 total events), the temporal score fires.
Temporal score is capped at 45 as a standalone signal. It combines with other dimension scores rather than dominating them. The final effect is that unusual timing amplifies existing signals but cannot alone produce a high-risk anomaly record.
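The activity floor and the 45-point cap from the text above reduce to a short function. A sketch, assuming the histogram layout described earlier:

```python
def temporal_score(time_profile: list, event_hour: int) -> int:
    """Capped temporal signal for activity in a never-before-seen hour.

    Requires >= 3 active hours and >= 50 total baseline events,
    per the rules documented above.
    """
    active_hours = sum(1 for count in time_profile if count > 0)
    total_events = sum(time_profile)
    if active_hours < 3 or total_events < 50:
        return 0          # baseline too sparse for a temporal judgement
    if time_profile[event_hour] == 0:
        return 45         # capped standalone temporal signal
    return 0
```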
Baseline drift detection
Every 2 hours the Console computes the Jaccard distance between the current baseline profiles and a rolling comparison snapshot for each binary. If the spawn or network profile has drifted by more than 30%, a baseline_drift event is produced indicating that the binary's behavioral patterns have materially changed.
Drift thresholds:
| Drift level | Drift score assigned |
|---|---|
| 30–49% (spawn or net) | 50 |
| 50–69% | 65 |
| 70%+ | 80 |
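Jaccard distance over the profile key sets, plus the threshold table above, can be sketched as:

```python
def jaccard_drift(old_keys: set, new_keys: set) -> float:
    """Jaccard distance between two profile key sets (0 = identical, 1 = disjoint)."""
    if not old_keys and not new_keys:
        return 0.0
    return 1 - len(old_keys & new_keys) / len(old_keys | new_keys)

def drift_score(drift: float) -> int:
    # Thresholds from the drift table above.
    if drift >= 0.70:
        return 80
    if drift >= 0.50:
        return 65
    if drift >= 0.30:
        return 50
    return 0
```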
MITRE ATT&CK mappings
Anomaly records include MITRE ATT&CK technique IDs derived from the event kind:
| Event kind | MITRE techniques |
|---|---|
| `ptrace` | T1055 |
| `mmap_exec` | T1055, T1620 |
| `privilege_change` | T1548 |
| `cap_change` | T1548.001 |
| `namespace_create`, `chroot` | T1611 |
| `kernel_module_load`, `kmod_load` | T1014, T1547.006 |
| `dns_query` | T1071.004 |
| `network_connect` | T1071 |
| `network_flow` | T1048 |
| `file_open` | T1083 |
| `file_write` | T1565.001 |
| `fim_alert` | T1565.001, T1195.002 |
| `signal` | T1562.001 |
| `process_exec` | T1059 |
Querying anomaly scores
In the Console, navigate to Behavioral Analytics to see anomaly scores. Use the filter bar to set a minimum score threshold, lookback window, and whether to include suppressed records.
Filter parameters:
| Parameter | Default | Range | Description |
|---|---|---|---|
| `min_score` | 30 | 0–100 | Minimum anomaly score to return |
| `since_hours` | 24 | 1–720 | Lookback window in hours (720 = 30 days) |
| `show_suppressed` | false | true / false | Whether to include suppressed records |
| `limit` | (bounded) | — | Maximum number of records to return |
Per-sensor anomaly threshold
Each sensor has an anomaly_min_score setting that controls the minimum score for alerts on that sensor. The default is the Console-wide threshold. You can lower it on high-value sensors or raise it on noisy development nodes.
In the Console, open the sensor's detail page and navigate to the Anomaly Settings tab. The current threshold is shown alongside the Console-wide default. Edit the value (valid range: 0–100) and save. Requires operator role.
Escalating an anomaly to an investigation
From Behavioral Analytics, click Investigate on any score record. The Console creates an investigation with the binary name and event kind as the title, and populates the initial evidence with the anomaly reasons and MITRE mappings.
Baseline maturity view
In the Console, navigate to Behavioral Analytics > Baseline Maturity to see learning status and event count per binary per sensor. Use this view to understand how far along each baseline is before interpreting anomaly scores.
Limitations
- Baselines are per-sensor and per-binary path. Two sensors running the same binary build independent baselines.
- Newly enrolled sensors produce no anomaly alerts until their binaries exit learning mode: up to 14 days, or sooner once a binary reaches 100 events. Plan pilot observation periods accordingly.
- Args scoring looks for pattern strings in the event message and argument fields. It does not execute or parse the arguments.
- Network scoring does not distinguish between destinations that look similar (e.g. two different services on the same IP). The profile key is normalized to `host:port`.
- Temporal scoring requires the binary to have been observed in at least 3 distinct hours. Services that run continuously will not produce temporal alerts.