Roadmap

What's stable now, what's coming next, and what we're exploring. Build on what's here — and know what's ahead.

🟢 In progress 🔵 Planned ⚪ Exploring
Next up — the agent becomes self-aware
IN PROGRESS

Agent Log Read Tool

The agent gains the ability to read its own access logs — what it sensed, inferred, and did. Self-referential access enables agents that learn from their own embodied behavior and self-correct across sessions.

For builders: Adds a logs/read tool returning structured log entries with timestamps, capability type (sensor/inference/tool), and payloads. Filterable by time range and capability. If you're building agents with memory or self-reflection, this is the primitive.
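As a sketch of the shape this primitive might take (the `LogEntry` fields and `read_logs` signature below are illustrative assumptions, not a committed schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    timestamp: datetime
    capability: str        # "sensor", "inference", or "tool"
    name: str              # e.g. "accelerometer", "sleep_likelihood"
    payload: dict

def read_logs(entries, start=None, end=None, capability=None):
    """Return entries matching an optional time range and capability type,
    roughly the filters the roadmap describes for logs/read."""
    return [
        e for e in entries
        if (start is None or e.timestamp >= start)
        and (end is None or e.timestamp <= end)
        and (capability is None or e.capability == capability)
    ]

# Hypothetical log: one sensed value, one inference, one tool call.
log = [
    LogEntry(datetime(2025, 1, 1, 8, 0), "sensor", "accelerometer", {"x": 0.02}),
    LogEntry(datetime(2025, 1, 1, 9, 0), "inference", "sleep_likelihood", {"value": 0.1}),
    LogEntry(datetime(2025, 1, 1, 10, 0), "tool", "notify", {"delivered": True}),
]

inferences = read_logs(log, capability="inference")
```

An agent reviewing its own inferences would issue exactly this kind of filtered read, then compare what it inferred against what actually happened.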


Expanding the body
PLANNED

Bluetooth Sensor Pairing

Extends the agent's sensory surface beyond the phone to any BLE-capable device — heart rate monitors, smart rings, smart glasses, environmental sensors. Each paired device adds new resources to the MCP capability set, discovered automatically through negotiation.

For builders: New resources will appear under vagus://bluetooth/*, dynamically registered as devices pair. You don't need to know the device type in advance — your agent discovers capabilities the same way it discovers phone sensors today. Starting with heart rate and generic sensor profiles, expanding from there.
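A minimal sketch of dynamic registration, assuming the URI layout from the roadmap; device IDs, profile names, and method names here are illustrative:

```python
class ResourceRegistry:
    """Toy registry showing how paired BLE devices might surface as
    resources under vagus://bluetooth/* (layout per the roadmap)."""
    def __init__(self):
        self._resources = {}

    def on_device_paired(self, device_id, profiles):
        """Register one resource per advertised profile, e.g. 'heart_rate'."""
        uris = []
        for profile in profiles:
            uri = f"vagus://bluetooth/{device_id}/{profile}"
            self._resources[uri] = {"device": device_id, "profile": profile}
            uris.append(uri)
        return uris

    def list_resources(self, prefix="vagus://bluetooth/"):
        """Roughly what an agent would see during capability negotiation."""
        return sorted(u for u in self._resources if u.startswith(prefix))

registry = ResourceRegistry()
registry.on_device_paired("polar-h10", ["heart_rate"])
registry.on_device_paired("ring-01", ["heart_rate", "skin_temperature"])
```

The point of the pattern: the agent never hardcodes a device list — it re-enumerates resources after each pairing event and adapts to whatever appeared.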

PLANNED

External Inference Pipelines

The inference layer becomes pluggable. Today all inference is on-device heuristics — fast and private, but limited. External pipelines let the inference layer call cloud ML models, custom classifiers, or user-hosted services. The agent's understanding becomes extensible without changing the app.

For builders: Inference resources keep the same MCP interface — your agent code doesn't change. What changes is the pipeline behind the resource: local heuristic, cloud API, or your own model. A pipeline spec and registration mechanism will be provided. If you're training custom models on physiological or environmental data, this is where they plug in.
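One way to picture the swap (the registration functions and the heuristic's thresholds are assumptions for illustration; the actual pipeline spec is still to come):

```python
from typing import Callable, Dict

Pipeline = Callable[[dict], dict]
_pipelines: Dict[str, Pipeline] = {}

def register_pipeline(resource_uri: str, pipeline: Pipeline) -> None:
    """Swap the implementation behind a resource; the MCP-facing URI
    stays the same, so agent code is unchanged."""
    _pipelines[resource_uri] = pipeline

def read_inference(resource_uri: str, inputs: dict) -> dict:
    return _pipelines[resource_uri](inputs)

# A local heuristic (illustrative thresholds, not the app's real logic):
def local_sleep_heuristic(inputs: dict) -> dict:
    still = inputs["motion_magnitude"] < 0.05
    dark = inputs["ambient_lux"] < 10
    return {"sleep_likelihood": 0.9 if still and dark else 0.2}

register_pipeline("vagus://inference/sleep_likelihood", local_sleep_heuristic)

# Later, a cloud model or user-hosted service could replace it the same way,
# without the agent noticing:
# register_pipeline("vagus://inference/sleep_likelihood", cloud_model_pipeline)

result = read_inference("vagus://inference/sleep_likelihood",
                        {"motion_magnitude": 0.01, "ambient_lux": 2})
```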

Deeper governance
PLANNED

Granular Inference Governance

Individual inference channels already have on/off toggles. This extends the full governance stack — rate limits, time-of-day windows, approval prompts, and per-channel audit logs — to each inference resource. The same depth of control that exists for sensors and I/O tools, applied to the meaning layer.

For builders: Each inference resource (attention_availability, sleep_likelihood, indoor_confidence, notification_timing) gets its own rate limit, time-of-day window, ask-each-time option, and audit stream. You'll be able to control not just which inferences the agent can make, but how often, when, and with full accountability.

Exploring — opening the sensor layer
EXPLORING

Sensor API Export

Bridges phone sensors to callable Web Sensor APIs. External services — your own inference pipelines, logging dashboards, research tools — can subscribe to VAGUS sensor streams over standard web APIs, parallel to the MCP channel.

For builders: Right now sensor data flows exclusively through MCP to the connected agent. This creates a parallel HTTP/WebSocket path for the same streams. If you're running your own inference models, logging sensor data for research, or building multi-consumer architectures, you'll be able to tap phone sensors directly without routing through the agent. The phone becomes a general-purpose sensor hub.
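The core of a multi-consumer architecture is a fan-out: one stream, many subscribers. A minimal sketch, where the two callbacks stand in for the MCP channel and a hypothetical WebSocket export (message shape is illustrative):

```python
import json

class SensorHub:
    """One sensor stream published to every registered consumer."""
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, stream: str, reading: dict):
        message = json.dumps({"stream": stream, "reading": reading})
        for cb in self._subscribers:
            cb(message)

hub = SensorHub()
received = []
hub.subscribe(received.append)   # stand-in for the MCP consumer
hub.subscribe(received.append)   # stand-in for a WebSocket subscriber
hub.publish("accelerometer", {"x": 0.01, "y": 0.0, "z": 9.81})
```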

EXPLORING

External Web Sensor API Ingestion

Connects external web sensor APIs — weather services, air quality feeds, smart home platforms, health APIs — to MCP so agents can access them as standard VAGUS resources. The agent's sensory world expands beyond what's physically on the device.

For builders: Register any REST or WebSocket data source as a VAGUS resource. A weather API becomes vagus://external/weather. A smart home hub becomes vagus://external/home_temperature. Your agent discovers and reads them the same way it reads the accelerometer — through MCP capability negotiation.
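The registration side might look like this — a fetcher mapped onto a `vagus://external/*` URI (the URI scheme matches the roadmap; the registration API and the fake weather fetcher are assumptions standing in for a real REST call):

```python
from typing import Callable, Dict

ExternalSource = Callable[[], dict]
_external: Dict[str, ExternalSource] = {}

def register_external(name: str, fetch: ExternalSource) -> str:
    """Map any fetchable data source onto a vagus://external/* URI."""
    uri = f"vagus://external/{name}"
    _external[uri] = fetch
    return uri

def read_resource(uri: str) -> dict:
    """The agent reads it like any other VAGUS resource."""
    return _external[uri]()

# A stubbed fetcher stands in for a real weather REST endpoint.
weather_uri = register_external("weather", lambda: {"temp_c": 21.5, "aqi": 34})
```

From the agent's side nothing is special about these resources: they show up in capability negotiation next to the accelerometer and are read through the same call.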
