Server: Latency & Liveness
Mesh uses an application-level ping/pong system to track connection health, measure latency, and ensure stale connections are cleaned up across instances.
This works independently of WebSocket protocol-level ping frames and gives you:
- Accurate round-trip latency tracking
- Configurable liveness timeouts
- Automatic cleanup of dead connections and their state (rooms, presence, etc.)
Configuration
You can configure ping/liveness behavior via the server constructor:
const server = new MeshServer({
redisOptions: { host: "localhost", port: 6379 },
pingInterval: 30_000, // how often to ping clients
latencyInterval: 5_000, // how often to request latency from clients
maxMissedPongs: 1, // allowed missed pongs before disconnecting
});
With the default maxMissedPongs = 1, a client has about 2 * pingInterval to respond before being disconnected.
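As a rough rule of thumb, after a ping goes unanswered a client has about (maxMissedPongs + 1) ping intervals before the connection is closed. A minimal sketch of that arithmetic (the helper below is illustrative, not part of the Mesh API):

// Illustrative helper (not a Mesh API): estimate how long a client has to
// answer a ping before the server closes the connection. A connection is
// closed once missedPongs > maxMissedPongs, which takes roughly
// (maxMissedPongs + 1) ping intervals of silence.
function disconnectWindowMs(pingInterval: number, maxMissedPongs: number): number {
  return (maxMissedPongs + 1) * pingInterval;
}

disconnectWindowMs(30_000, 1); // => 60_000 ms, i.e. about 2 * pingInterval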
What happens under the hood
Every connection:
- Sends a "ping" command every pingInterval
  - If no "pong" is received, the connection is marked as inactive
  - If missedPongs > maxMissedPongs, the connection is closed
- Sends a "latency:request" every latencyInterval
  - When the client responds with "latency:response", Mesh calculates the round-trip time and can emit latency stats
- On receiving "pong", the connection is marked as alive and its presence TTL is refreshed

Presence TTL is only refreshed when a valid "pong" is received, meaning that missed pongs will eventually cause Redis to expire presence entries and trigger a "leave" event.
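To make the loop above concrete, here is a simplified sketch of the per-connection liveness bookkeeping. The Conn type and refreshPresenceTtl helper are illustrative placeholders, not Mesh internals, and the exact timing and ordering in Mesh may differ:

// Simplified sketch of the liveness loop described above.
// `Conn` and `refreshPresenceTtl` are placeholders, not Mesh internals.
type Conn = {
  missedPongs: number;
  isAlive: boolean;
  send: (msg: { command: string }) => void;
  close: () => void;
};

function refreshPresenceTtl(conn: Conn): void {
  // Placeholder: in Mesh this refreshes the connection's presence entries in Redis.
}

function startLivenessLoop(conn: Conn, pingInterval: number, maxMissedPongs: number) {
  const timer = setInterval(() => {
    // Count the previous ping as missed if no "pong" reset the flag.
    if (!conn.isAlive) conn.missedPongs += 1;

    if (conn.missedPongs > maxMissedPongs) {
      // Too many unanswered pings: close the connection so room/presence
      // cleanup (and Redis TTL expiry) can take over.
      clearInterval(timer);
      conn.close();
      return;
    }

    conn.isAlive = false; // cleared until the next "pong" arrives
    conn.send({ command: "ping" });
  }, pingInterval);

  return {
    // Invoke when a "pong" command is received from the client.
    onPong() {
      conn.isAlive = true;
      conn.missedPongs = 0;
      refreshPresenceTtl(conn); // keeps presence entries from expiring in Redis
    },
  };
}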
Why Mesh doesn’t use native WebSocket pings
- Native pings don’t go through your app layer, so you can’t measure latency
- Native pings don’t support multi-instance cleanup
- Native pings can’t update Redis TTLs for presence tracking
Mesh solves all of that with application-level control.
Monitoring latency (optional)
Mesh automatically tracks round-trip latency.
If you want to observe it on the client:
client.on("latency", (ms) => {
console.log("Latency:", ms, "ms");
});
This is useful for:
- Displaying latency to users (e.g. in a status bar)
- Logging latency over time for diagnostics
- Detecting network slowdowns
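For example, a small sketch of the diagnostics use case that keeps a rolling window of recent samples (only the "latency" event itself comes from the Mesh client; the buffer and averaging are illustrative application code):

// Keep the last 50 latency samples and log a simple rolling average.
const samples: number[] = [];

client.on("latency", (ms) => {
  samples.push(ms);
  if (samples.length > 50) samples.shift();
});

setInterval(() => {
  if (samples.length === 0) return;
  const avg = samples.reduce((sum, ms) => sum + ms, 0) / samples.length;
  console.log(`Average latency over last ${samples.length} samples: ${avg.toFixed(1)} ms`);
}, 60_000);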
There’s no server API for observing latency — it’s purely client-side and emitted as "latency" events.
See Client SDK → Latency & Reconnect for configuration on the client side.