Latest stable release: Frontier v1.2.2

Backend services need to reach online edge nodes

Frontier is a service-to-edge gateway for long-lived connections. Use it when backend services and edge nodes both need to actively call, notify, and open streams to each other.

Service -> specific edge RPC
Edge -> service callbacks
Messaging + streams on one data plane
frontier-start.sh

```shell
# Start Frontier and run the fastest demo path.
# 30011 is the service-bound port, 30012 the edge-bound port.
docker run -d --name frontier \
  -p 30011:30011 \
  -p 30012:30012 \
  singchia/frontier:1.2.2
make examples
./bin/chatroom_service   # service side
./bin/chatroom_agent     # edge side
```

Use Frontier when

A backend service needs to call a specific online device, agent, or connector
Edge nodes need to call backend services without opening inbound ports
You need RPC, messaging, and streams on the same long-lived connection model
Your system is service <-> edge, not just service <-> service

Do not use Frontier when

You only need service-to-service RPC. Use gRPC.

You only need HTTP ingress or proxying. Use Envoy or an API gateway.

You only need pub/sub or event streaming. Use NATS or Kafka.

You only need a generic tunnel. Use frp or another tunnel tool.

One connection model, three primitives

Frontier feels different because RPC, messaging, and streams all share the same service-to-edge connection model.

Bidirectional RPC

Address a specific online edge node from a backend service, or let edge nodes call backend services back over the same communication model.

Topic Messaging

Push telemetry, events, and notifications between services and edges, with explicit acknowledgments and optional forwarding to external MQ.

P2P Multiplexing

Open direct streams for proxying, file transfer, media relay, or custom protocols when RPC is not enough.

Cloud-Native by Design

Start with a single container, then move to clustered deployment when your service-to-edge fleet grows.

Docker: standalone container
Compose: local cluster
Helm: Kubernetes deployment
Operator: HA and scale
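A single-node deployment can be sketched in Compose, assuming only the image tag and the two ports shown in the quickstart; this is an illustrative fragment, not an official manifest, and real deployments will likely add configuration and volumes.

```yaml
# docker-compose.yml: minimal single-node sketch (assumptions: default config)
services:
  frontier:
    image: singchia/frontier:1.2.2
    ports:
      - "30011:30011"  # service-bound port
      - "30012:30012"  # edge-bound port
    restart: unless-stopped
```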