L3PTP REALITY Tunnel Roadmap
Last updated: 2026-05-10
Purpose
Build a point-to-point Layer 3 tunnel for NexusNet using the existing REALITY and QUIC REALITY work. The first target is a practical private L3 link between two NexusNet agents:
host stack
-> TUN interface
-> NexusNet L3PTP data plane
-> QUIC REALITY DATAGRAM
-> peer NexusNet agent
-> peer TUN interface
-> peer host stack
This roadmap is the durable project memory for L3 tunnel work across frontend, backend, and agent. Update it whenever schema, protocol, packet format, performance goals, or validation results change.
Long-term kernel data channel offload planning lives in quic-reality-kernel-dco.md. The first L3PTP implementation remains user-space, but its ABI and session/key boundaries should stay DCO-ready.
Definitions
- L3PTP: NexusNet point-to-point Layer 3 tunnel profile.
- TUN: Layer 3 virtual network interface that reads and writes IP packets.
- Data plane: IP packets moving between TUN and peer agent.
- Control plane: tunnel setup, credentials, route advertisement, MTU, keepalive, status, and capability negotiation.
- Internal mode: NexusNet-native L3PTP framing over QUIC REALITY.
- CONNECT-IP mode: standards-oriented MASQUE/HTTP/3-compatible IP proxy mode.
Key Decision
Use QUIC REALITY DATAGRAM for L3 packet data and use a reliable stream/control channel for setup and state.
Reasoning:
- IP packets are packet-oriented. Preserving packet boundaries matters.
- QUIC DATAGRAM is unreliable and message-oriented, matching IP tunnel semantics.
- STREAM is reliable and ordered. It is suitable for handshake, route updates, capability negotiation, and statistics, but it is the wrong primitive for raw IP packet forwarding because head-of-line blocking would damage UDP and tunnel behavior.
- CONNECT-IP and MASQUE also put proxied IP packet payloads on HTTP Datagrams over QUIC DATAGRAM, while control metadata is carried reliably.
Standards and Ecosystem References
- RFC 9221 defines QUIC DATAGRAM frames as an unreliable datagram extension to QUIC.
- RFC 9297 defines HTTP Datagrams and the Capsule Protocol.
- RFC 9484 defines CONNECT-IP, a protocol for proxying IP packets in HTTP.
`tun-rs` is the preferred Rust TUN/TAP crate for the first implementation. It provides sync and async APIs, Tokio integration, cross-platform support, Linux multi-queue, and Linux offload hooks. `tun` is a reasonable, simpler alternative with async support, but it is less attractive for the long-term high-throughput Linux path. `tokio-tun` is Linux-focused and simple, but it gives us fewer long-term performance and cross-platform options than `tun-rs`.
Current crate decision:
Primary: tun-rs with async feature
Fallback: abstract TunDevice trait so we can swap to tun or tokio-tun if needed
Initial platform: Linux
Future platforms: macOS utun, Windows Wintun, mobile borrowed-fd mode
Non-Negotiable Protocol Goals
- Do not put IP packets on a reliable QUIC STREAM data path.
- Preserve IP packet boundaries from TUN read to peer TUN write.
- Keep WAN MTU safe. The default tunnel MTU must fit QUIC DATAGRAM payloads without depending on IP fragmentation.
- Do not make L3PTP depend on root-only tests for normal build verification.
- Make the route and address model explicit in backend/frontend before exposing a one-click default route UI.
- Keep internal L3PTP and future CONNECT-IP compatibility separated by protocol mode, not hidden conditionals.
Initial Scope
The first production-quality target supports:
- Point-to-point L3 tunnel between two agents.
- IPv4 payloads first.
- Optional IPv6 payload validation and forwarding once IPv4 is stable.
- One local TUN device per configured L3 network per agent.
- Static tunnel addresses, static routes, and explicit peer node selection.
- QUIC REALITY DATAGRAM as the outer transport.
- QUIC REALITY STREAM or existing control config for tunnel setup/control.
- Frontend L3 network creation and deployment.
- Backend DB/API/proto config delivery to agent.
- Agent metrics for packets, bytes, drops, queue depth, and MTU errors.
Defer until after the first stable L3PTP target:
- Full mesh L3 routing.
- Dynamic routing protocols over the tunnel.
- CONNECT-IP wire compatibility.
- Multipath and failover across multiple QUIC REALITY peer links.
- TAP/Layer 2 bridging.
- Mobile VPN integration.
- Kernel eBPF/XDP acceleration.
- Transparent system default-route takeover.
- Kernel QUIC REALITY DCO.
Architecture
Frontend
Add an L3 Networks section separate from L4 gateway chains.
Core fields:
- Name.
- Enabled flag.
- Local node.
- Peer node.
- Interface name, for example `nn-l3-<short-id>`.
- Address family: IPv4 initially, dual-stack later.
- Local tunnel IP.
- Peer tunnel IP.
- Prefix length.
- MTU.
- Advertised routes.
- Allowed peer source prefixes.
- Transport profile: `payload=ip,outer=quic,security=reality`.
- REALITY credential selection.
- Camouflage target host and port inherited from credential or override.
- Safety toggles:
- install routes
- allow default route
- route metric
- kill switch
- DNS override, future only
Display fields:
- Agent apply status.
- TUN interface status.
- Current MTU.
- Peer handshake status.
- RX/TX packets and bytes.
- Drop counters by reason.
- Last route apply error.
Backend
Add persistent L3 tunnel entities without overloading L4 route tables.
Suggested model:
l3_networks
id
name
enabled
mode # point_to_point first
address_family # ipv4, ipv6, dual_stack
mtu
transport_profile
reality_credential_id
created_at
updated_at
l3_endpoints
id
l3_network_id
node_id
role # left, right, hub, spoke later
interface_name
tunnel_ipv4
tunnel_ipv6
install_routes
allow_default_route
route_metric
enabled
l3_routes
id
l3_network_id
endpoint_id
cidr
direction # advertise, install, allowed_source
enabled
Agent config should be delivered as explicit `L3TunnelRule` entries instead of being inferred from legacy TCP/UDP gateway rules.
Agent
Add a dedicated L3 tunnel runner:
src/l3/
mod.rs
config.rs
tun.rs
packet.rs
protocol.rs
control.rs
dataplane.rs
metrics.rs
route.rs
tests.rs
Alternative if we keep all data planes under gateway initially:
src/gateway/l3/
Preferred direction:
- Put L3-specific code under `src/l3`.
- Reuse `gateway::quic` only for QUIC REALITY session and DATAGRAM send/receive primitives.
- Do not make TUN handling a subfeature of the UDP gateway. TUN is an L3 data plane and has different safety, MTU, route, and privilege behavior.
Protocol Design
Control Plane
Use a reliable channel for setup. First implementation can use backend-pushed agent config plus a peer control STREAM after QUIC REALITY handshake.
Control messages:
L3PTP_HELLO
protocol_version
network_id
endpoint_id
node_id
supported_features
max_datagram_payload
requested_mtu
supported_address_families
L3PTP_READY
accepted_mtu
peer_tunnel_addresses
peer_allowed_sources
peer_advertised_routes
datagram_context_id
L3PTP_ROUTE_UPDATE
add_routes
remove_routes
route_epoch
L3PTP_KEEPALIVE
monotonic_time
rx_packets
tx_packets
drop_counters
L3PTP_CLOSE
code
reason
The first version can encode these as compact JSON or protobuf on an internal QUIC STREAM. Once stable, move to a typed protobuf shared by backend and agent.
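As an illustration, the control vocabulary above could map to a Rust enum, with the HELLO/READY MTU negotiation as a small helper. Field types, the enum shape, and `negotiate_mtu` are assumptions for sketching; the roadmap fixes only the message names:

```rust
// Illustrative sketch of the L3PTP control messages. Field selection is
// abbreviated; the encoding (compact JSON or protobuf) is left open.
#[allow(dead_code)]
#[derive(Debug, Clone, PartialEq)]
enum L3ptpControl {
    Hello {
        protocol_version: u16,
        network_id: u64,
        endpoint_id: u64,
        max_datagram_payload: u16,
        requested_mtu: u16,
    },
    Ready {
        accepted_mtu: u16,
        datagram_context_id: u32,
    },
    RouteUpdate { route_epoch: u64 },
    Keepalive { rx_packets: u64, tx_packets: u64 },
    Close { code: u16, reason: String },
}

/// HELLO/READY negotiation sketch: accept the smaller of the requested
/// inner MTU and what the DATAGRAM payload budget can carry after the
/// 16-byte internal L3PTP header; reject below the configurable floor.
fn negotiate_mtu(requested_mtu: u16, max_datagram_payload: u16) -> Option<u16> {
    const L3PTP_HEADER_LEN: u16 = 16;
    let budget = max_datagram_payload.checked_sub(L3PTP_HEADER_LEN)?;
    let accepted = requested_mtu.min(budget);
    if accepted < 576 { None } else { Some(accepted) }
}
```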
Data Plane
Each TUN read produces one IP packet. Each IP packet is carried in exactly one QUIC DATAGRAM when it fits the negotiated budget.
Initial internal DATAGRAM payload:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| type=0x03 | version=0x01 | header_len |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| datagram_context_id |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| packet_sequence |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| flags | ip_version | payload_len |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
| raw IP packet |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Rules:
- `type=0x03` means L3PTP IP packet.
- `datagram_context_id` binds the packet to an L3 network/endpoint.
- `packet_sequence` is for metrics, loss accounting, and debugging only.
- `ip_version` must match the first nibble of the IP packet.
- `payload_len` must equal the IP packet length.
- The receiver validates the IP header length, total length, and source prefix before writing to TUN.
- Do not retransmit lost DATAGRAM payloads in the QUIC layer.
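A minimal encoder/decoder sketch of the 16-byte internal header above, assuming big-endian fields in the layout shown in the diagram (function names are illustrative):

```rust
// Internal L3PTP DATAGRAM payload: 16-byte header + raw IP packet.
const L3PTP_TYPE_IP: u8 = 0x03;
const L3PTP_VERSION: u8 = 0x01;
const HEADER_LEN: usize = 16;

fn encode(context_id: u32, sequence: u32, ip_packet: &[u8]) -> Option<Vec<u8>> {
    let ip_version = ip_packet.first()? >> 4; // first nibble of the IP header
    if ip_packet.len() > u16::MAX as usize {
        return None; // cannot be represented in payload_len
    }
    let mut out = Vec::with_capacity(HEADER_LEN + ip_packet.len());
    out.push(L3PTP_TYPE_IP);
    out.push(L3PTP_VERSION);
    out.extend_from_slice(&(HEADER_LEN as u16).to_be_bytes());
    out.extend_from_slice(&context_id.to_be_bytes());
    out.extend_from_slice(&sequence.to_be_bytes());
    out.push(0); // flags
    out.push(ip_version);
    out.extend_from_slice(&(ip_packet.len() as u16).to_be_bytes());
    out.extend_from_slice(ip_packet);
    Some(out)
}

/// Returns (datagram_context_id, packet_sequence, ip_packet) or None
/// for any malformed header, enforcing the rules listed above.
fn decode(datagram: &[u8]) -> Option<(u32, u32, &[u8])> {
    if datagram.len() < HEADER_LEN {
        return None;
    }
    if datagram[0] != L3PTP_TYPE_IP || datagram[1] != L3PTP_VERSION {
        return None;
    }
    let context_id = u32::from_be_bytes(datagram[4..8].try_into().ok()?);
    let sequence = u32::from_be_bytes(datagram[8..12].try_into().ok()?);
    let ip_version = datagram[13];
    let payload_len = u16::from_be_bytes(datagram[14..16].try_into().ok()?) as usize;
    let payload = datagram.get(HEADER_LEN..HEADER_LEN + payload_len)?;
    // ip_version must match the first nibble of the carried IP packet.
    if payload.first()? >> 4 != ip_version {
        return None;
    }
    Some((context_id, sequence, payload))
}
```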
Future CONNECT-IP mode should not use this internal header on the wire. It should map IP packet payloads to HTTP Datagram contexts according to the standards path.
MTU Policy
Use an IPv6-safe interface default:
Default tunnel MTU: 1280
Configurable range: 576 to 9000
Hard maximum: negotiated QUIC DATAGRAM payload budget
The default is 1280 because IPv6 requires every link to support an MTU of at least 1280. The tunnel must still reject or count oversized packets when the negotiated QUIC DATAGRAM payload budget cannot carry the configured inner MTU.
Later:
- Add path MTU probing.
- Add GSO/GRO for Linux local high-throughput mode.
- Add ICMP Packet Too Big generation for routed payloads when possible.
Route and Safety Model
The first L3PTP release must be explicit and conservative:
- Never install a default route unless `allow_default_route=true`.
- Only accept packets whose source address matches configured local prefixes.
- Only write peer packets to TUN if the source is allowed for that peer.
- Reject route overlap in backend validation unless explicitly overridden.
- Track route epochs so stale config does not race with fresh config.
- On tunnel shutdown, remove only routes installed by NexusNet.
- Use route comments/metadata where the OS allows it.
- Add a kill-switch option only after route cleanup is reliable.
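The source-address rules above reduce to a prefix-allowlist check before any TUN write. A minimal IPv4 sketch, with illustrative type names that are not the agent's real config structs:

```rust
use std::net::Ipv4Addr;

/// One allowed source prefix, e.g. 10.8.0.0/24.
struct AllowedPrefix {
    network: Ipv4Addr,
    prefix_len: u8,
}

/// True if `src` falls inside any allowed prefix for this peer.
/// A packet failing this check is dropped and counted, never forwarded.
fn source_allowed(src: Ipv4Addr, allowed: &[AllowedPrefix]) -> bool {
    allowed.iter().any(|p| {
        let mask = if p.prefix_len == 0 {
            0
        } else {
            u32::MAX << (32 - p.prefix_len as u32)
        };
        (u32::from(src) & mask) == (u32::from(p.network) & mask)
    })
}
```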
Phased Delivery Plan
Phase 0: Finalize L3 Domain Model
Deliverables:
- Add product-level L3 vocabulary:
- L3 network.
- L3 endpoint.
- L3 route.
- L3 tunnel credential binding.
- L3 transport profile.
- Add `payload=ip` to the transport composition model.
- Decide initial UI copy and field validation rules.
- Define an explicit "planned but not supported on this agent" state.
- Document privilege requirements:
- Linux needs `CAP_NET_ADMIN` or root for TUN and route changes.
- Docker tests need `--cap-add NET_ADMIN` or privileged mode.
Acceptance:
- Frontend, backend, and agent all use the same terms.
- No L3 configuration is represented as a fake TCP or UDP route.
Phase 1: TUN Crate Integration and Abstraction
Deliverables:
- Add `tun-rs` behind an agent feature, for example `l3-tun`.
- Create a `TunDevice` trait:
- `name()`
- `mtu()`
- `recv_packet()`
- `send_packet()`
- `set_up()`
- `add_address()`
- `add_route()`
- `remove_route()`
- `close()`
- Implement Linux using `tun-rs`.
- Add a fake in-memory TUN implementation for unit tests.
- Keep OS route operations in a separate `route.rs` module so Windows/macOS can be added without touching the data path.
Acceptance:
- Agent builds without TUN support by default if the feature is disabled.
- Agent builds with `tun-rs` TUN support. Linux network route mutation is the first implemented platform configuration backend.
- Unit tests cover fake TUN read/write and route config validation without root.
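The `TunDevice` trait and the root-free fake could be sketched as follows. Signatures are assumptions, since the roadmap lists only method names, and only the data-path subset is shown:

```rust
use std::collections::VecDeque;
use std::io;

/// Data-path subset of the Phase 1 TunDevice trait (set_up, address,
/// and route methods omitted for brevity).
trait TunDevice {
    fn name(&self) -> &str;
    fn mtu(&self) -> u16;
    fn recv_packet(&mut self) -> io::Result<Option<Vec<u8>>>;
    fn send_packet(&mut self, packet: &[u8]) -> io::Result<()>;
}

/// In-memory TUN double for unit tests: send_packet() queues a packet,
/// recv_packet() dequeues it, and oversized writes fail like MTU drops.
struct FakeTun {
    name: String,
    mtu: u16,
    queue: VecDeque<Vec<u8>>,
}

impl TunDevice for FakeTun {
    fn name(&self) -> &str {
        &self.name
    }
    fn mtu(&self) -> u16 {
        self.mtu
    }
    fn recv_packet(&mut self) -> io::Result<Option<Vec<u8>>> {
        Ok(self.queue.pop_front())
    }
    fn send_packet(&mut self, packet: &[u8]) -> io::Result<()> {
        if packet.len() > self.mtu as usize {
            return Err(io::Error::new(io::ErrorKind::InvalidInput, "exceeds MTU"));
        }
        self.queue.push_back(packet.to_vec());
        Ok(())
    }
}
```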
Phase 2: L3PTP Packet Format and Validation
Deliverables:
- Implement internal L3PTP DATAGRAM encoder/decoder.
- Add zero-copy borrowed parse for incoming packet headers.
- Validate IPv4:
- version.
- header length.
- total length.
- checksum policy.
- source prefix.
- destination prefix.
- Add IPv6 parser gate:
- either reject cleanly with a counter or forward after basic version/length checks.
- Add counters for:
- malformed header.
- unsupported IP version.
- source prefix reject.
- oversized packet.
- TUN write error.
- QUIC DATAGRAM send error.
Acceptance:
- Pure unit tests parse and reject malformed packets.
- No root or live network is needed for these tests.
Phase 3: Agent Single-Link Local POC
Deliverables:
- Start one L3PTP tunnel from static local config.
- Create TUN interface.
- Assign local tunnel IP.
- Install peer route if enabled.
- Connect to peer over QUIC REALITY.
- Open control STREAM and exchange `HELLO`/`READY`.
- TUN read loop sends IP packets as QUIC REALITY DATAGRAM.
- QUIC REALITY DATAGRAM receive loop writes IP packets to TUN.
Threading model:
task A: TUN read -> validation -> bounded tx queue -> QUIC DATAGRAM send
task B: QUIC DATAGRAM recv -> validation -> bounded rx queue -> TUN write
task C: control stream -> keepalive, route epoch, close
task D: metrics flush -> control plane
Acceptance:
- Two local agents can ping over two TUN interfaces.
- Route cleanup runs on normal shutdown.
- Drop counters explain all rejected packets.
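The bounded queues between tasks A/B imply a drop-not-block policy: a full queue increments a counter instead of stalling the TUN read loop. A synchronous stand-in for the async queues, with an illustrative `pump` helper that is not the agent's API:

```rust
use std::sync::mpsc::sync_channel;

/// Push packets into a bounded queue; when the queue is full, drop and
/// count instead of blocking, matching DATAGRAM (no-retransmit) semantics.
/// Returns (delivered packets, queue drops).
fn pump(packets: Vec<Vec<u8>>, capacity: usize) -> (Vec<Vec<u8>>, u64) {
    let (tx, rx) = sync_channel::<Vec<u8>>(capacity);
    let mut queue_drops = 0u64;
    for p in packets {
        if tx.try_send(p).is_err() {
            queue_drops += 1; // counted in metrics, never retried
        }
    }
    drop(tx); // close the queue so the drain below terminates
    (rx.into_iter().collect(), queue_drops)
}
```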
Phase 4: Backend DB/API/Proto
Deliverables:
- Add DB migrations for L3 networks, endpoints, and routes.
- Add REST APIs:
- list/create/update/delete L3 networks.
- attach endpoints.
- add/remove routes.
- enable/disable network.
- fetch tunnel status and metrics.
- Add protobuf messages to agent config:
- `L3TunnelRule`
- `L3EndpointConfig`
- `L3RouteConfig`
- `L3TunConfig`
- `L3RealityTransportConfig`
- Add backend validation:
- no missing peer.
- no duplicate tunnel IP in one network.
- MTU within safe bounds.
- default route requires explicit flag.
- selected credential has REALITY/QUIC REALITY material.
Acceptance:
- Config push can deploy an L3 tunnel rule to selected agents.
- Unsupported agents reject L3 rules with a clear status instead of crashing.
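The backend validation rules above could be sketched as one check over illustrative row types. Field names here are assumptions, not the real `l3_networks`/`l3_endpoints` schema:

```rust
use std::collections::HashSet;
use std::net::Ipv4Addr;

struct L3EndpointRow {
    tunnel_ipv4: Ipv4Addr,
    allow_default_route: bool,
    // Routes to install as (network, prefix_len); 0.0.0.0/0 is a default route.
    install_routes: Vec<(Ipv4Addr, u8)>,
}

struct L3NetworkRow {
    mtu: u16,
    endpoints: Vec<L3EndpointRow>,
}

/// Reject unsafe configs before they are pushed to agents:
/// MTU bounds, duplicate tunnel IPs, and unflagged default routes.
fn validate(net: &L3NetworkRow) -> Result<(), String> {
    if !(576..=9000).contains(&net.mtu) {
        return Err(format!("mtu {} outside safe bounds 576..=9000", net.mtu));
    }
    let mut seen = HashSet::new();
    for ep in &net.endpoints {
        if !seen.insert(ep.tunnel_ipv4) {
            return Err(format!("duplicate tunnel IP {} in one network", ep.tunnel_ipv4));
        }
        for (_, prefix_len) in &ep.install_routes {
            if *prefix_len == 0 && !ep.allow_default_route {
                return Err("default route requires explicit allow_default_route flag".into());
            }
        }
    }
    Ok(())
}
```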
Phase 5: Frontend L3 Network UI
Deliverables:
- Add dashboard navigation item for L3 Networks.
- Add L3 network table:
- name.
- enabled.
- local node.
- peer node.
- tunnel CIDR.
- MTU.
- transport.
- status.
- RX/TX.
- Add create/edit dialog:
- endpoint selector.
- tunnel addresses.
- routes.
- MTU.
- transport profile.
- credential selector.
- default-route safety confirmation.
- Add status drawer:
- per-agent apply result.
- TUN interface state.
- route install state.
- QUIC REALITY handshake state.
- loss/drop counters.
Acceptance:
- User can create a point-to-point L3 network without touching raw JSON.
- TCP and UDP L4 transport choices remain separate from L3 network choices.
- L3 screens do not overload existing L4 chain UI.
Phase 6: Integration Tests
Deliverables:
- Add ignored privileged tests for Linux:
- create two network namespaces.
- run two agents or two in-process tunnel endpoints.
- create two TUN devices.
- ping across tunnel.
- run TCP iperf3 across tunnel.
- run UDP iperf3 across tunnel.
- verify route cleanup.
- Add non-privileged protocol tests:
- fake TUN.
- fake QUIC DATAGRAM channel.
- route validation.
- packet validation.
Acceptance:
- Normal CI can run non-privileged tests.
- A developer can run one privileged command to verify real L3 dataplane.
Phase 7: Performance Baseline
Deliverables:
- Add L3 tunnel benchmark command:
TUN -> QUIC REALITY DATAGRAM -> TUN -> iperf3
- Record:
- TUN RX/TX packets.
- QUIC DATAGRAM packets.
- TUN write drops.
- queue drops.
- MTU drops.
- peer sequence loss.
- CPU usage.
- per-task queue depth.
- Compare against current UDP-over-QUIC REALITY DATAGRAM baseline.
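The peer sequence loss figure above can be derived from gaps in the `packet_sequence` header field. A simplified sketch that ignores reordering and 32-bit wrap-around (type and method names are illustrative):

```rust
/// Counts datagrams received and, from sequence gaps, datagrams lost.
/// A gap between the expected and observed sequence is counted as loss;
/// reordered or wrapped sequences are not handled in this sketch.
struct LossTracker {
    next_expected: Option<u32>,
    lost: u64,
    received: u64,
}

impl LossTracker {
    fn new() -> Self {
        Self { next_expected: None, lost: 0, received: 0 }
    }

    fn observe(&mut self, seq: u32) {
        self.received += 1;
        if let Some(expected) = self.next_expected {
            if seq > expected {
                self.lost += (seq - expected) as u64; // datagrams never seen
            }
        }
        self.next_expected = Some(seq.wrapping_add(1));
    }
}
```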
Initial performance goals:
- L3 ping works with stable latency on loopback.
- TCP iperf over L3PTP reaches at least the current QUIC REALITY STREAM baseline directionally.
- UDP iperf over L3PTP reaches at least the current DATAGRAM baseline directionally.
- 1G/1100B local UDP over L3PTP is stable before chasing 3G.
Optimization backlog:
- Linux TUN multi-queue.
- `tun-rs` `recv_multiple`/`send_multiple`.
- Buffer pool using `bytes::BytesMut`.
- Per-flow queue sharding.
- QUIC DATAGRAM connection striping.
- UDP socket `sendmmsg`/`recvmmsg` revisit after counters.
- Linux GSO/GRO for local high-throughput profile.
- CPU affinity and runtime worker separation for TUN and QUIC packet loops.
Phase 8: CONNECT-IP Compatibility Track
Deliverables:
- Add `l3_mode=internal_l3ptp | connect_ip`.
- Implement HTTP/3 CONNECT-IP control semantics separately from internal L3PTP.
- Map IP payloads to HTTP Datagram contexts.
- Add Capsule Protocol handling for control metadata.
- Keep REALITY handshake/camouflage at QUIC/TLS layer.
Acceptance:
- Internal L3PTP remains fast and simple.
- CONNECT-IP mode can be tested against a standards-oriented peer later.
- No hidden compatibility claim is made until wire-level interop is tested.
Phase 9: Production Hardening
Deliverables:
- Agent restart reconciliation:
- detect existing NexusNet TUN interfaces.
- clean stale routes.
- reuse or recreate interface safely.
- Config rollback:
- if route install fails, tear down the tunnel.
- if QUIC handshake fails, keep route disabled unless fail-open is configured.
- Metrics and alerts:
- tunnel up/down.
- peer RTT.
- last handshake error.
- packet loss from sequence gaps.
- queue drops.
- MTU drops.
- Security:
- anti-spoofing.
- CIDR allowlist.
- credential rotation.
- peer identity binding.
- audit log for route/default-route changes.
Acceptance:
- Repeated enable/disable cycles do not leave stale routes.
- A bad peer cannot inject packets outside its allowed source prefixes.
- Operators can understand tunnel failure from UI and logs.
Immediate Implementation Order
- Add `payload=ip` and L3 tunnel model to shared docs/proto planning.
- Add backend schema and API skeleton for L3 networks.
- Add frontend L3 Networks page with disabled deployment state until agent support lands.
- Add agent `src/l3` module with fake TUN and packet codec tests.
- Add `tun-rs` Linux implementation behind a feature.
- Wire static local L3PTP config to QUIC REALITY DATAGRAM.
- Add ignored privileged namespace ping test.
- Add backend config push to deploy `L3TunnelRule`.
- Enable frontend deployment and status display.
- Add performance tests and begin multi-queue/batch optimization.
Acceptance Milestones
M1: Packet codec
- L3PTP DATAGRAM encoder/decoder unit tests pass.
- IPv4 validation rejects malformed packets.
M2: Local fake tunnel
- Fake TUN endpoints exchange IP-like packets over fake DATAGRAM channel.
- Counters work without root.
M3: Real Linux TUN ping
- Two local TUN devices can ping through QUIC REALITY DATAGRAM.
- MTU and route cleanup verified.
M4: Backend config
- Backend persists L3 network config and pushes it to two agents.
- Agent reports apply status.
M5: Frontend workflow
- User creates L3 point-to-point network in UI.
- UI displays up/down and packet counters.
M6: Performance baseline
- TCP and UDP iperf3 over L3PTP are recorded.
- Bottleneck counters identify whether TUN, QUIC, route, or queue is limiting.
M7: CONNECT-IP track
- Internal L3PTP remains default.
- CONNECT-IP mode has a separate compatibility plan and packet mapping tests.
Open Questions
- Should the first UI require both endpoints to be NexusNet-managed nodes, or allow one unmanaged peer later?
- Should backend allocate tunnel IPs automatically from a configured pool?
- Should route installation be done by the agent directly or delegated to a privileged helper binary?
- Should default-route mode require a separate confirmation and audit event?
- Should L3PTP use one QUIC REALITY connection per L3 network, or share a pooled QUIC REALITY connection with TCP/UDP L4 streams later?
Current default answers:
- Managed node pairs only.
- Manual IP entry first; allocation pool later.
- Agent direct route install first, helper binary later if permissions become painful.
- Default route requires explicit confirmation and audit.
- One QUIC REALITY connection per L3 network for the first version; pool later.
Implementation Log
2026-05-10: Add deployable L3 network config skeleton
Added the first deployable control-plane path for L3PTP planning:
- Added `payload=ip` to the transport profile/proto vocabulary.
- Added backend `l3_networks`, `l3_endpoints`, and `l3_routes` tables.
- Added REST CRUD and deploy endpoints at `/api/l3-networks`.
- Added `L3TunnelConfig`/`L3TunnelRule` protobuf config updates separate from L4 `PingoraConfig`, so L3 deployment does not overwrite L4 forwarding rules.
- Added backend config push and gRPC replay for existing L3 networks when an agent reconnects.
- Added frontend L3 Networks page for point-to-point IPv4 tunnel creation and manual deployment.
- Added agent `src/l3` skeleton with config validation, fake TUN abstraction, and internal L3PTP DATAGRAM packet encode/decode tests.
Validation:
- `cargo check` passes in `nexus-backend`.
- `cargo check` passes in `nexus-agent`.
- `cargo test -p nexus-agent l3 --lib` passes.
- `npm run lint` and `npm run build` pass in `nexus-frontend`.
Remaining work: this is deployable configuration plumbing and packet-format coverage. Real Linux TUN creation, route mutation, QUIC REALITY DATAGRAM data plane wiring, privileged namespace ping tests, and live metrics are still pending.
2026-05-10: Add optional Linux TUN and route preparation
Advanced the agent-side L3 implementation toward Phase 1/3:
- Added optional agent feature `l3-tun` using `tun-rs` 2.8.3 with Tokio async support.
- Added `TunRsDevice` behind `l3-tun`. It uses `tun-rs` as the cross-platform TUN data-plane backend for supported desktop/server platforms.
- Split OS network mutation into a `NetworkConfigurator` layer. Linux route mutation is implemented first; other platforms now have an explicit unsupported boundary instead of being mixed into TUN packet I/O.
- Added `route.rs` with validated Linux route add/delete preparation. Runtime route mutation now uses netlink/system APIs rather than shell commands.
- Extended `L3TunnelManager` to convert `L3TunnelRule` into a prepared tunnel plan containing TUN config, peer IPv4, install routes, allowed source CIDRs, and DATAGRAM context ID.
- Added an ignored privileged test, `l3::tests::privileged_creates_linux_tun_device`, for real CAP_NET_ADMIN/root TUN creation.
Validation:
- `cargo test -p nexus-agent l3 --lib` passes.
- `cargo test -p nexus-agent l3 --lib --features l3-tun` passes, with the real TUN test ignored by default.
Remaining work: wire prepared TUN devices into the QUIC REALITY DATAGRAM data plane, add route cleanup/reconciliation around config replacement, add fake DATAGRAM tunnel tests, and run the ignored privileged Linux TUN test under sudo or CAP_NET_ADMIN.
2026-05-10: Add non-privileged L3 dataplane loop
Added an agent-side L3 dataplane runner that is independent of the UDP gateway relay path:
- Added `L3DatagramTransport` and an in-memory `FakeDatagramEndpoint` pair for non-privileged integration tests.
- Added `L3Dataplane`, which pumps packets from TUN to internal L3PTP DATAGRAM payloads and from DATAGRAM payloads back to TUN.
- Added metrics counters for TUN RX/TX, DATAGRAM RX/TX, bytes, malformed packets, unsupported IP versions, source-prefix rejects, oversized packets, TUN write errors, and DATAGRAM send errors.
- Added a continuous `run()` loop with watch-channel shutdown for later real TUN/QUIC tasks.
- Added fake two-endpoint tests proving:
- TUN packet -> L3PTP DATAGRAM -> peer TUN works.
- peer source-prefix rejection drops spoofed packets and increments counters.
- the run loop exits cleanly on shutdown.
- Added prepared tunnel status snapshots for later metrics/status reporting.
Validation:
- `cargo test -p nexus-agent l3 --lib` passes.
- `cargo check --features l3-tun` passes.
Remaining work: expose a clean QUIC REALITY L3 DATAGRAM transport adapter instead of reusing UDP relay session semantics, then run real Linux TUN ping tests with CAP_NET_ADMIN/root.
2026-05-10: Add QUIC REALITY client DATAGRAM adapter for L3
Added a client-side QUIC REALITY DATAGRAM adapter suitable for L3 dataplane integration:
- Added `QuicRealityDatagramClient`, which performs the existing QUIC REALITY client handshake and maintains application ACK handling internally.
- Exposed a generic `send_datagram`/`recv_datagram` interface and implemented `l3::transport::L3DatagramTransport` for it from the QUIC module side.
- Kept L3 independent from UDP gateway relay types; the L3 dataplane sees only the transport trait.
- The adapter currently uses the existing internal QUIC DATAGRAM session tag inside the QUIC layer. This remains an implementation detail and is not exposed through L3.
- Added an explicit `NEXUS_L3_TUN_AUTOSTART` gate in `L3TunnelManager`; it does not yet auto-start real TUN dataplanes because server-side L3 DATAGRAM registration/dispatch is not complete.
Validation:
- `cargo check` passes in `nexus-agent`.
- `cargo test -p nexus-agent l3 --lib` passes.
- `cargo check --features l3-tun` passes.
2026-05-10: Use IPv6-safe MTU default and split TUN from network config
Adjusted the L3 tunnel default MTU from 1100 to 1280 so IPv6 can run over the link without violating the IPv6 minimum link MTU requirement. Backend validation now accepts 576 through 9000, while the dataplane still drops inner packets that exceed the configured tunnel MTU.
Moved the agent TUN device implementation from a Linux-named `LinuxTunDevice` to a feature-gated `TunRsDevice`, using `tun-rs` as the cross-platform TUN backend. OS network mutation is now represented by `NetworkConfigurator` and `NetworkConfigPlan`; Linux route mutation is implemented first, and non-Linux platforms now have an explicit unsupported boundary for route reconciliation.
Validation:
- `cargo test -p nexus-agent l3 --lib` passes.
- `cargo check --features l3-tun` passes.
2026-05-10: L3 tunnel builder, dual-stack PTP, multicast, and TUN batching
Completed three additional implementation stages for the L3 tunnel path:
Control plane and UI:
- Added L3 network capability fields for dual-stack, IPv6 prefix length, automatic IPv6 link-local addressing, multicast, broadcast, keepalive, offload, multi-queue, and TUN batch size.
- Added a frontend L3 Networks/Tunnels tab split. The tunnel builder now submits the capability fields with the REALITY credential and endpoint definitions.
- L3 endpoint config now carries the peer node connect address and multiplex port so agents can derive the peer QUIC endpoint for runtime startup.
Agent L3 dataplane:
- Added IPv6 packet validation and source-prefix checks.
- Added deterministic automatic IPv6 link-local generation when an endpoint has no configured IPv6 address.
- IPv4 and IPv6 multicast destinations are accepted by default, including OSPFv2 `224.0.0.5/6` and OSPFv3 `ff02::5/6`, while source validation is still enforced.
TUN performance and platform path:
- Extended `TunConfig` with offload, multi-queue, and batch-size options.
- Added `TunDevice::recv_packets`/`send_packets` batch entry points.
- `TunRsDevice` enables Linux `tun-rs` offload/multi-queue options and uses `recv_multiple` for batched/offloaded receive when available.
Validation:
- `cargo test -p nexus-agent l3 --lib` passes with IPv6 and OSPF multicast dataplane tests.
- `cargo check --features l3-tun` passes.
- `cargo check` passes in `nexus-backend`.
- `npm run lint` and `npm run build` pass in `nexus-frontend`.
2026-05-10: Replace route shelling and wire L3 runtime autostart
Moved the remaining L3 runtime network mutation away from process execution:
- Replaced Linux `ip route` process spawning with `rtnetlink` netlink route updates. L3 route apply now looks up the interface index through netlink and writes the kernel route table directly.
- Route apply uses replace semantics for idempotent config replay. Route rollback ignores missing-route netlink errors so repeated shutdown/reconcile does not fail on already-clean state.
- Added a QUIC REALITY server-side L3 DATAGRAM registry. Incoming QUIC DATAGRAM frames are inspected for the L3PTP header and `datagram_context_id`; registered L3 contexts are delivered to the local TUN dataplane before the generic UDP upstream relay path is considered.
- Added `QuicRealityL3DatagramTransport`, which sends outbound L3 packets over the client QUIC REALITY DATAGRAM adapter and receives inbound L3 packets from the local server-side context registry.
- Extended `L3TunnelManager` with a runtime task manager. With `NEXUS_L3_TUN_AUTOSTART=1` and the `l3-tun` feature enabled, config replay now creates the TUN device, applies system routes, registers the DATAGRAM context, connects to the peer QUIC endpoint derived from node config, and runs the TUN/QUIC dataplane until shutdown or config replacement.
- Agent shutdown now tears down L3 runtime tasks before shutting down L4 forwarders.
- Fixed Linux offload single-packet I/O. `TunRsDevice::recv_packet()` now uses `tun-rs` `recv_multiple` when offload is enabled, so the virtio header is not exposed as the first byte of an IP packet. `send_packet()` similarly uses `send_multiple` with the required virtio headroom on the offload path.
- Added a privileged Linux TUN/UDP kernel-stack test that binds UDP sockets to the TUN address and verifies both kernel-to-TUN receive and TUN-to-kernel delivery without external command execution inside test code.
Validation:
- `cargo test -p nexus-agent l3 --lib` passes with 20 L3/registry tests.
- `cargo test -p nexus-agent l3 --lib --features l3-tun` passes with the privileged real-TUN tests ignored by default.
- `sudo -E cargo test -p nexus-agent l3::tests::privileged_creates_linux_tun_device --lib --features l3-tun -- --ignored --nocapture` passes.
- `sudo -E cargo test -p nexus-agent l3::tests::privileged_linux_tun_exchanges_udp_with_kernel_stack --lib --features l3-tun -- --ignored --nocapture` passes.
- `cargo check --features l3-tun` passes.
- `cargo check` passes in `nexus-backend`.
- `npm run lint` and `npm run build` pass in `nexus-frontend`.
Remaining work: add a two-agent or two-namespace end-to-end ping test over QUIC REALITY DATAGRAM, add live status/metrics reporting back to the backend, and fill in non-Linux route/address backends using system APIs.