Sensor Overview
The Telovix Sensor is the node-resident runtime that collects kernel-level activity from a Linux host and streams it to the Console over mTLS. It runs as a single self-contained binary (~89 MB) that embeds the eBPF engine, all BPF object files, and the sensor controller in one package. It requires no kernel patches, no kernel modules, and no external dependencies beyond a supported Linux kernel.
Where to start
| If you are... | Start here |
|---|---|
| Installing the sensor on a VM or bare-metal host for the first time | Sensor: VM / Bare Metal |
| Deploying to a Kubernetes cluster | Sensor: Kubernetes (Helm) |
| Choosing between standard and telecom flavor | Standard vs Telecom Flavor |
| Diagnosing a sensor shown as stale, degraded, or offline | Health States and Heartbeat |
| Assigning a policy pack to a sensor | Policy Packs |
| Enabling enforcement to block or kill at the kernel level | Enforcement Mode |
| Writing a custom detection rule | Custom Detection Rules |
| Understanding what the sensor monitors (kprobes and LSM hooks) | Kprobes Reference and LSM Hooks Reference |
| Isolating a sensor during an incident | Containment and Isolation |
| Checking kernel requirements for BTF and BPF LSM | Requirements |
What the sensor does
The sensor attaches eBPF hooks at enrollment time and runs two concurrent loops in steady state:
- Heartbeat loop (every 15 seconds): reads events from the engine's JSONL export, enriches them, and sends a structured payload to the Console. The payload includes events, resource metrics, trust state, listening services, active connections, Kubernetes inventory, and (on the telecom flavor) telecom protocol reports.
- WebSocket stream loop (every 500 ms): maintains a persistent WebSocket connection to the Console for low-latency event delivery and receipt of operator-initiated policy pushes. The Console treats any sensor with an active WebSocket connection as healthy, regardless of the 90-second heartbeat staleness window.
If the Console is unreachable, the sensor spools events to /var/lib/telovix-sensor/events.jsonl (capped at 100,000 events). On reconnect, it drains up to 100 spooled events per heartbeat.
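The spool-and-drain behavior described above can be sketched as follows. The function names, the rewrite-the-whole-file trimming strategy, and the batch handling are illustrative assumptions, not the sensor's actual implementation:

```python
import json
import os

SPOOL_CAP = 100_000   # documented spool ceiling
DRAIN_BATCH = 100     # documented drain rate per heartbeat

def spool_event(path, event, cap=SPOOL_CAP):
    """Append an event to the JSONL spool, dropping the oldest
    entries once the cap is reached."""
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()
    lines.append(json.dumps(event))
    lines = lines[-cap:]  # keep only the newest `cap` events
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

def drain_spool(path, batch=DRAIN_BATCH):
    """Remove and return up to `batch` events from the front of the spool,
    oldest first, leaving the remainder for the next heartbeat."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        lines = f.read().splitlines()
    head, rest = lines[:batch], lines[batch:]
    with open(path, "w") as f:
        f.write("\n".join(rest) + ("\n" if rest else ""))
    return [json.loads(line) for line in head]
```

Draining in fixed batches per heartbeat keeps the reconnect payload bounded even when the spool is full.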
Sensor flavors
| Flavor | Binary name | What it collects |
|---|---|---|
| standard | telovix-sensor | Process execution, file access, network connections, privilege changes, namespace creation, kernel module loads, ptrace, FIM, SBOM, Kubernetes workload visibility, behavioral anomaly scoring. |
| telecom | telovix-sensor-telecom | Everything in standard, plus: 5G Core NF role detection (24 roles), O-RAN node detection, NGAP/F1AP/E1AP/XnAP/E2AP/PFCP/GTP-U/SCTP/Diameter/RADIUS/SIP/SBI-HTTP2/NAS5G protocol parsing and anomaly detection, NF SLO monitoring, TLS uprobe visibility (OpenSSL, Go TLS, BoringSSL), timing and synchronization monitoring, and telecom-specific attack chain detection. |
The flavor is selected at install time and embedded in the binary. Changing from standard to telecom (or back) requires re-installing the binary. Sensor identity (mTLS certificates, sensor ID, policy packs) is preserved across a flavor change.
eBPF event kinds
The sensor attaches kprobes, BPF LSM hooks, and uprobes. The following event kinds are produced:
| Event kind | Source | Severity | Description |
|---|---|---|---|
| process_exec | sys_execve kprobe | info | Process execution with full argument list and ancestor chain |
| process_exit | sys_exit_group kprobe | info | Process termination with exit code |
| process_fork | sys_clone kprobe | info | Process or thread fork |
| process_exec_burst | aggregation | warning | 5 or more execs from the same parent within 1 second |
| process_fork_burst | aggregation | warning | 3 or more forks from the same parent within 1 second |
| network_connect | tcp_connect kprobe | info | Outbound TCP or UDP connection |
| network_accept | inet_csk_accept kprobe | info | Server-side TCP accept |
| network_listen | inet_listen kprobe | warning | Socket placed in LISTEN state |
| network_flow | tcp_close aggregation | info | Completed TCP flow with duration, state, and bytes sent |
| dns_query | udp_sendmsg kprobe | info | DNS query on port 53, 853, or 5353; non-standard resolvers flagged |
| dns_lookup | getaddrinfo uprobe | info | Hostname resolution at the glibc level |
| dns_resolution | correlation | info | Hostname-to-IP correlation from getaddrinfo and tcp_connect |
| file_open | openat kprobe | info | File read access to sensitive paths (credentials, SSH keys) |
| file_write | sys_write VFS kprobe | critical | Write to sensitive system file |
| fim_alert | FIM baseline comparison | critical | File hash, inode, or size mismatch against startup baseline |
| ptrace | sys_ptrace kprobe | warning | Debugger or tracer attached |
| privilege_change | sys_setuid kprobe | warning | UID or GID change |
| namespace_create | sys_clone with CLONE_NEW* | warning | New Linux namespace created |
| cap_change | cap_capable BPF LSM | warning | Capability mask change |
| signal | sys_kill kprobe | warning | Signal sent between processes |
| module_load | sys_init_module kprobe | high | Kernel module loaded |
| bpf_object_get | sys_bpf kprobe | critical | Access to BPF map (potential sensor tampering) |
| bpf_map_update | sys_bpf kprobe | critical | eBPF map modified (potential sensor disablement) |
Events are enriched with the full process ancestor chain, workload context (container ID, Kubernetes namespace, workload name and type), and network namespace ID. On the telecom flavor, events for detected NF processes include the NF role.
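The enrichment step above amounts to joining a raw event against the process cache by its exec ID. A minimal sketch; the field names and payload shape here are illustrative assumptions, not the sensor's actual schema:

```python
def enrich(event, process_cache, net_ns_id):
    """Attach the ancestor chain and workload context held in the
    process cache to a raw eBPF event (illustrative field names)."""
    ctx = process_cache.get(event.get("exec_id"), {})
    enriched = dict(event)
    enriched["ancestors"] = ctx.get("ancestors", [])
    enriched["workload"] = {
        "container_id": ctx.get("container_id"),
        "k8s_namespace": ctx.get("k8s_namespace"),
        "workload_name": ctx.get("workload_name"),
        "workload_type": ctx.get("workload_type"),
    }
    enriched["net_ns_id"] = net_ns_id
    return enriched
```

Keeping the join in memory is what lets the sensor avoid a /proc lookup per event.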
In-memory caches
The sensor maintains four in-memory caches to enrich events without reading /proc on every event:
| Cache | Key | Purpose | Capacity |
|---|---|---|---|
| ProcessCache | exec_id | Ancestor chain, workload context, cgroup path, network namespace | 8,192 entries |
| FlowTracker | socket cookie | TCP flow duration and final state | Evicts flows older than 1 hour |
| DnsCorrelator | PID | Correlates getaddrinfo hostname lookups with subsequent tcp_connect events | 16 pending lookups per PID, 10-second window |
| K8sCache | pod UID | Pod metadata, workload status, network policies, services, ingresses, pod security postures | Refreshed every 30 seconds |
The telecom flavor adds a TelecomNfInventory that maps running process IDs to detected NF roles, updated on each heartbeat as the sensor observes protocol activity. Detection uses port binding patterns, process name heuristics, binary path patterns, and gRPC service port detection.
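A fixed-capacity cache of the kind listed above can be sketched as follows. The LRU eviction policy is an assumption for illustration; the sensor's actual eviction strategy is not documented here:

```python
from collections import OrderedDict

class BoundedCache:
    """Capacity-bounded cache in the spirit of the sensor's
    8,192-entry ProcessCache, with LRU eviction (assumed)."""

    def __init__(self, capacity=8192):
        self.capacity = capacity
        self._entries = OrderedDict()

    def put(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)  # refresh recency
        self._entries[key] = value
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used

    def get(self, key, default=None):
        if key in self._entries:
            self._entries.move_to_end(key)  # a hit also refreshes recency
            return self._entries[key]
        return default
```

A hard capacity bound keeps enrichment memory predictable on busy hosts regardless of process churn.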
File Integrity Monitoring
At startup, the sensor builds a SHA-256 baseline for files under the following path groups:
- System binaries and libraries: /usr/bin, /bin, /lib, /lib64, /usr/lib, /usr/local/bin, /usr/local/lib
- Credentials: /etc/passwd, /etc/shadow, /etc/sudoers, /etc/sudoers.d, /etc/pam.d, PAM libraries
- Dynamic linker: /etc/ld.so.preload, /etc/ld.so.conf
- SSH configuration: /etc/ssh
- Persistence paths: /etc/cron.d, /etc/crontab, cron directories, /etc/rc.local, /etc/init.d, /etc/systemd/system, /usr/lib/systemd/system, /etc/profile.d, /etc/udev/rules.d
- Authentication logs: /var/log/auth.log, /var/log/secure, /var/log/audit/audit.log
On every file_write event, the sensor recomputes the SHA-256 hash and compares it to the baseline. A mismatch in hash, inode, or file size emits a fim_alert event with severity: critical.
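The comparison reduces to three attributes per file: content hash, inode, and size. A minimal sketch, assuming a simple dict-based baseline; the function names are illustrative, not the sensor's API:

```python
import hashlib
import os

def baseline_entry(path):
    """Record the SHA-256 hash, inode, and size of one file: the three
    attributes compared on every file_write event."""
    st = os.stat(path)
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"sha256": digest, "inode": st.st_ino, "size": st.st_size}

def check_fim(path, baseline):
    """Return True if the file still matches its baseline entry;
    any mismatch would correspond to a critical fim_alert."""
    return baseline_entry(path) == baseline
```

Comparing the inode alongside the hash catches the replace-then-rename trick, where an attacker swaps in a new file rather than editing the original in place.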
Anomaly scoring
The sensor scores each event against per-binary behavioral baselines built over a 14-day learning window. During learning, scores are computed but alerts are suppressed. Full alerting activates automatically after 14 days or 100+ events for that binary.
```
final_score  = max_signal × 0.65 + weighted_avg × 0.35
max_signal   = max(spawn_score, net_score, file_score, args_score)
weighted_avg = spawn×0.40 + net×0.25 + file×0.15 + args×0.20
```

High-signal events bypass the learning window: ptrace, fim_alert, module_load, bpf_map_update, and command-line arguments matching known shell injection patterns (scored 85 or above).
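The documented weights translate directly into a scoring function:

```python
def final_score(spawn, net, file_score, args):
    """Combine the four per-signal anomaly scores using the documented
    weights: the strongest single signal dominates (65%), with a
    weighted average of all four as the remainder (35%)."""
    max_signal = max(spawn, net, file_score, args)
    weighted_avg = spawn * 0.40 + net * 0.25 + file_score * 0.15 + args * 0.20
    return max_signal * 0.65 + weighted_avg * 0.35
```

Weighting the maximum signal heavily means one strongly anomalous dimension (say, an unusual spawn) cannot be averaged away by normal behavior in the other three.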
On-disk state
After enrollment, the sensor stores the following files under its state directory (default /var/lib/telovix-sensor):
| Path | Description |
|---|---|
| client.key.pem | Active mTLS private key (mode 600) |
| client.cert.pem | Active mTLS client certificate |
| client.prev.key.pem | Previous key (kept during certificate renewal overlap window) |
| client.prev.cert.pem | Previous certificate (kept during renewal overlap window) |
| console-ca.cert.pem | Console CA trust anchor |
| policy-signing.pub | Ed25519 public key for policy pack signature verification |
| sensor-state.json | Enrollment state, sensor ID, trust state, certificate expiry |
| assigned-pack.json | Current policy pack ID, version, enforcement state |
| compiled-policies/ | Active TracingPolicy YAML files |
| events.jsonl | On-disk event spool |
| engine/v1.7.0/ | Extracted eBPF engine binaries and BPF object files |
The eBPF engine writes its own event stream to engine/events.jsonl. The sensor tails this file continuously.
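The tail-and-parse pattern can be sketched as a generator; `follow_jsonl` and its parameters are illustrative, not the sensor's implementation:

```python
import json
import time

def follow_jsonl(path, from_start=True, stop_after=None):
    """Yield parsed events from a JSONL file, then keep polling for
    appended lines, in the spirit of `tail -f`."""
    with open(path) as f:
        if not from_start:
            f.seek(0, 2)  # skip existing history, watch for new lines only
        yielded = 0
        while stop_after is None or yielded < stop_after:
            line = f.readline()
            if not line:
                time.sleep(0.1)  # no new data yet; poll again
                continue
            yield json.loads(line)
            yielded += 1
```

With `stop_after=None` the generator runs indefinitely, which matches the "tails this file continuously" behavior; a real implementation would also need to handle partial lines and file rotation.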
Operational model
After enrollment, the sensor is operated entirely from the Console:
- Policy packs, enforcement rules, and custom detection rules are pushed via the heartbeat response
- Certificate renewal is triggered automatically by the Console and completed at the next heartbeat
- Binary upgrades are signaled via the heartbeat response and applied by the sensor itself (VM/bare-metal) or via helm upgrade (Kubernetes)
- Tags and sensor metadata can be updated from Sensors > [sensor] without touching the host
The sensor never opens an inbound port. All communication is outbound from the sensor to the Console.