# Network requirements
DisplaySync kiosks are outbound-only. They initiate every connection — no inbound ports, no public IP, no port forwarding. That's intentional: it makes signs deployable on locked-down venue networks that would never accept inbound configuration, and removes the entire class of attacks that come with exposed services.
This page is what you hand to venue IT. The headline answer to "what do you need from us?" is: port 443 outbound to the destinations below, no inbound, DHCP fine.
## Outbound allowlist
Required for any deployment:
| Destination | Port | Protocol | Purpose |
|---|---|---|---|
| api.displaysync.live | 443 | HTTPS | REST API, health checks, claim flow |
| api.displaysync.live | 443 | WSS | Real-time WebSocket — heartbeats, remote commands, content updates |
| updates.displaysync.live | 443 | HTTPS | Auto-update binaries (Cloudflare R2) |
| *.ingest.us.sentry.io | 443 | HTTPS | Error tracking |
| Microsoft Update servers | 443 | HTTPS | Windows security patches |
Required only if you're using Tailscale for remote access (recommended):
| Destination | Port | Protocol | Purpose |
|---|---|---|---|
| *.tailscale.com | 443 | HTTPS | Coordination server |
| derp.tailscale.com (and regional DERP relays) | 443 | HTTPS | Encrypted relay fallback when direct peer connection fails |
| Various peers | 41641 | UDP | Direct WireGuard peer-to-peer (preferred path) |
Required only for content URLs you assign:
| Destination | Port | Protocol | Purpose |
|---|---|---|---|
| Whatever your content URLs resolve to | 443 (typically) | HTTPS | The actual webpages signs display |
If your content lives at demo.displaysync.live, that needs to be reachable too. If your content is on a customer subdomain or a venue-internal page, allowlist that origin.
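If you manage many content URLs, reducing them to the set of origins venue IT must allowlist is mechanical. A small sketch using only the Python standard library — the URLs here are illustrative, not real deployments:

```python
from urllib.parse import urlsplit

def origins_to_allowlist(content_urls):
    """Reduce content URLs to the unique (host, port) origins to allowlist."""
    origins = set()
    for url in content_urls:
        parts = urlsplit(url)
        # Default ports when the URL doesn't name one explicitly.
        port = parts.port or (443 if parts.scheme == "https" else 80)
        origins.add((parts.hostname, port))
    return sorted(origins)

# Two pages on the same origin collapse to one allowlist entry.
urls = [
    "https://demo.displaysync.live/walls/main",
    "https://demo.displaysync.live/walls/lobby",
    "https://signage.example-venue.internal:8443/board",
]
print(origins_to_allowlist(urls))
# [('demo.displaysync.live', 443), ('signage.example-venue.internal', 8443)]
```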
## Strict-allowlist environments
If venue IT can only allowlist specific domains (no wildcards), the minimum
set is: api.displaysync.live, updates.displaysync.live,
o4511146269540352.ingest.us.sentry.io, plus your content URLs and (if
using Tailscale) controlplane.tailscale.com and login.tailscale.com.
Tailscale's DERP relays auto-discover; on a strict allowlist, signs may fall
back through a single relay rather than the closest one — still works,
slightly higher latency.
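For fleets where the allowlist request gets generated per-venue, the minimum set above can be assembled programmatically. A sketch — the domain literals come from this page, and the content-origin list is whatever you assign:

```python
def minimal_allowlist(content_origins, use_tailscale=False):
    """Minimal no-wildcard domain set for strict-allowlist venues (per this page)."""
    domains = {
        "api.displaysync.live",                    # REST API + WebSocket
        "updates.displaysync.live",                # auto-update binaries
        "o4511146269540352.ingest.us.sentry.io",   # error tracking
    }
    domains.update(content_origins)                # your content URLs' origins
    if use_tailscale:
        domains.update({"controlplane.tailscale.com", "login.tailscale.com"})
    return sorted(domains)

print(minimal_allowlist(["demo.displaysync.live"], use_tailscale=True))
```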
## What kiosks do not need
Specifically called out so you can reassure venue IT:
- No inbound ports. Block all incoming traffic at the firewall.
- No public IP. RFC1918 / NAT is fine.
- No DNS exceptions. Standard resolution works; no split-horizon entries or rebinding allowances needed.
- No multicast / mDNS. Helpful for some printers; irrelevant to signage.
- No SMB, RDP, or VNC inbound from the venue network. Hands-on access happens via Tailscale.
- No domain join. Signs are local-account only.
## Bandwidth
Per sign, steady-state:
| Workload | Average | Peak |
|---|---|---|
| Heartbeats + WebSocket idle | under 1 KB/s | 5 KB/s |
| Content (a typical signage webpage) | 50–500 KB/s after first paint | 5–10 Mbps initial load |
| Auto-update download | 0 (rare) | 50–80 Mbps for ~30 seconds |
| Sentry error reports | under 100 B/s | 10 KB/s |
For a fleet of 50 signs sharing a venue uplink: plan ~10–20 Mbps sustained, with brief 100+ Mbps bursts when an auto-update lands or content does a heavy first paint.
This is well inside any modern venue feed. The exception is rented hotspots or 4G/5G fallback links — there, plan for content size carefully and avoid auto-updates during the event.
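The fleet-planning arithmetic above can be sketched as follows. The per-sign figures come from the bandwidth table; `content_kbps` is the steady-state number for your actual pages, which you should measure rather than assume:

```python
def fleet_bandwidth_kbps(signs, content_kbps=50, heartbeat_kbps=1, sentry_kbps=0.1):
    """Steady-state uplink estimate (KB/s) for a fleet sharing one venue link.

    Defaults use the low end of the per-sign figures from the table above;
    auto-update bursts and first-paint spikes are NOT included here.
    """
    per_sign = content_kbps + heartbeat_kbps + sentry_kbps
    return signs * per_sign

# 50 signs at typical steady state: ~2,555 KB/s, i.e. roughly 20 Mbps,
# matching the "plan ~10-20 Mbps sustained" guidance above.
steady = fleet_bandwidth_kbps(50)
print(f"{steady:.0f} KB/s ≈ {steady * 8 / 1000:.1f} Mbps")
```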
## Latency tolerance
DisplaySync is real-time-ish. The dashboard's "live" status is built on a 5-second heartbeat. Acceptable network behavior:
- Round-trip latency to backend: under 500 ms is comfortable. Up to 2 seconds is acceptable. Above 2 seconds, the WebSocket may reconnect frequently.
- Packet loss: under 1% sustained. Bursts to 5% during conference Wi-Fi peaks are fine — the WebSocket reconnects.
- Jitter: doesn't matter for normal operation.
The desktop sign keeps the displayed URL cached locally and continues displaying it during network outages. So a sign on a flaky network shows correct content; only its dashboard health flips between online and offline.
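When surveying a venue link before load-in, the thresholds above can be applied mechanically. An illustrative helper (not shipping code) assuming you already have RTT and sustained-loss measurements:

```python
def link_health(rtt_ms, sustained_loss_pct):
    """Classify a venue link against the tolerance thresholds listed above."""
    if rtt_ms > 2000:
        return "poor"        # WebSocket will likely reconnect frequently
    if sustained_loss_pct > 1:
        return "marginal"    # fine in bursts (up to ~5%), not sustained
    if rtt_ms <= 500:
        return "comfortable"
    return "acceptable"      # 500 ms - 2 s RTT

print(link_health(rtt_ms=120, sustained_loss_pct=0.2))  # comfortable
```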
## Offline behavior
The desktop sign is built to survive network failure gracefully. When the WebSocket disconnects:
- The currently-displayed content keeps displaying. No "offline" overlay, no QR fallback.
- The sign retries the connection starting at a 5-second interval, backing off exponentially to a 60-second cap.
- The dashboard transitions the sign to Offline state after ~30 seconds of missed heartbeats.
- When the connection comes back, the sign reports back online within ~5 seconds and replays any queued state changes.
If you're shipping signs into a venue with known-spotty Wi-Fi, this is the safety net that keeps the wall correct even when the dashboard says it's not reachable.
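The reconnect cadence described above (start at 5 seconds, exponential backoff, 60-second cap) can be sketched as — a model of the described behavior, not the shipping code:

```python
def backoff_schedule(attempts, base=5, cap=60):
    """Reconnect delays in seconds: exponential from `base`, capped at `cap`."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]

print(backoff_schedule(6))  # [5, 10, 20, 40, 60, 60]
```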
## Wired vs Wi-Fi
Wired Ethernet, every time. Wi-Fi is the leading cause of "this sign is acting up" support tickets. The reasons:
- Wi-Fi captive portals frequently re-authenticate, dropping the WebSocket
- Venue Wi-Fi gets congested as crowds arrive — exactly when you need it
- Wi-Fi signal quality varies by antenna orientation, which moves when somebody bumps the kiosk
- DNS over Wi-Fi often goes through a venue captive portal that does weird things
If you must use Wi-Fi:
- Pre-stage SSIDs on the kiosk image. See Base Windows setup → Pre-stage venue WiFi profiles.
- Avoid open networks with captive portals. They will silently break. Ask the venue for a hidden SSID with WPA2 and skip the portal.
- Place kiosks within ~30 ft of a known good AP. Don't trust "the venue says coverage is everywhere" — always test from the actual sign location.
- Carry a portable AP (a small travel router on your own LTE/5G feed). For tier-1 events this is cheaper than the alternative.
## VLANs and segmentation
A reasonable production posture:
- Put kiosks on a dedicated VLAN with no inbound visibility from the rest of the venue network.
- The VLAN is outbound-only to the allowlist above.
- Block kiosk-to-kiosk traffic at the switch — they have no reason to talk to each other.
- If your venue has guest Wi-Fi, do not put signs on it — guest networks routinely have client isolation, captive portals, and aggressive QoS that all interact badly with kiosks.
If venue IT can't provide a dedicated VLAN, deploying signs on the venue's normal corporate VLAN is fine if outbound allowlisting is in place.
## DHCP, DNS, NTP
- DHCP: standard. Static IPs are not required, but useful for documentation.
- DNS: standard. Signs use the system resolver; whatever the venue hands them works. Public resolvers (1.1.1.1, 8.8.8.8) work too if you set them statically.
- NTP: the kiosk must have a roughly-correct clock for TLS handshakes to succeed. The Windows time service syncs from time.windows.com by default, which is on most venue allowlists. If venue IT blocks it, point the time service at pool.ntp.org or a venue-provided NTP server.
## What about IPv6?
DisplaySync works over IPv6. We don't require it. If your venue is dual-stack or v6-only, signs come up on whichever address family DHCP/RA provides. No configuration needed.
## Auditing connectivity from a sign
From a kiosk PowerShell session (Windows) — useful for troubleshooting:
```powershell
# Backend reachable
Test-NetConnection api.displaysync.live -Port 443

# WebSocket upgrade reachable (smoke test)
Invoke-WebRequest "https://api.displaysync.live/health" -UseBasicParsing |
    Select-Object -ExpandProperty StatusCode

# Updates host
Test-NetConnection updates.displaysync.live -Port 443

# Tailscale (if installed)
& "C:\Program Files\Tailscale\tailscale.exe" netcheck
```
Each should respond cleanly. If any fail, the venue is blocking the destination — work with IT to allowlist it before continuing.
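If PowerShell isn't handy — say you're testing from a laptop on the kiosk VLAN — the same port-443 checks can be run cross-platform with only the Python standard library. The hostnames are the ones from the allowlist above:

```python
import socket

def tcp_reachable(host, port, timeout=5.0):
    """True if a TCP connection to host:port succeeds within `timeout` seconds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

for host in ("api.displaysync.live", "updates.displaysync.live"):
    status = "ok" if tcp_reachable(host, 443) else "BLOCKED"
    print(f"{host}:443 {status}")
```

Note this only proves the TCP path; a TLS-intercepting proxy can still break the WebSocket, so follow up with the health-endpoint check above.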
## What's next
You've spec'd hardware and network. Next up: build the actual image. Continue to Windows kiosk image.