Carlos
  • Updated: February 18, 2026
  • 7 min read

Tailscale Peer Relays Now Generally Available – Boost Your VPN Networking

Tailscale Peer Relays are now generally available, offering a production‑grade, high‑throughput relaying layer that lets any Tailscale node act as a secure relay for traffic that cannot traverse directly because of firewalls, NATs, or restrictive cloud environments.

Introduction

For IT administrators, network engineers, and remote workers, the promise of a “zero‑config” VPN has been a game‑changer. Tailscale’s WireGuard‑based mesh network makes devices appear as if they are on the same LAN, no matter where they sit. Yet real‑world networks still throw curveballs: strict firewalls, double NAT, and cloud‑only subnets can block the direct peer‑to‑peer paths that Tailscale relies on. Until now, the fallback has been the public DERP (Designated Encrypted Relay for Packets) network, which, while reliable, can add latency and limit throughput.

With the general availability (GA) of Tailscale Peer Relays, organizations can deploy their own high‑performance relays on any Tailscale‑enabled node, gaining full control over path selection, bandwidth, and observability. This article breaks down what Peer Relays are, why they matter, and how you can start using them today.

What are Tailscale Peer Relays?

A brief definition

Peer Relays are customer‑hosted relay servers that sit inside your tailnet and forward encrypted traffic when a direct connection cannot be established. Unlike the public DERP fleet, Peer Relays run on machines you control—whether a VM in AWS, an on‑premise server, or even a low‑cost Raspberry Pi. They preserve Tailscale’s end‑to‑end encryption, respect ACLs, and integrate seamlessly with existing Tailscale features such as MagicDNS and Tailscale SSH.

How they differ from traditional relays

  • Ownership: You own the hardware and network path, eliminating reliance on third‑party infrastructure.
  • Performance: Peer Relays can be provisioned with high‑throughput NICs and multiple UDP sockets, dramatically reducing bottlenecks.
  • Visibility: Built‑in metrics expose packet and byte counts, making monitoring with Prometheus or Grafana straightforward.
  • Flexibility: Static endpoint support lets you place relays behind load balancers or firewalls that otherwise block automatic discovery.

Key Features and Benefits

Peer Relays bring a suite of capabilities that address the most common pain points of secure remote networking.

  • Vertical scaling boost: Multi‑socket UDP handling and lock‑contention optimizations increase throughput up to 3× when many clients forward through a single relay.
  • Optimal interface selection: Relays automatically choose the best IP family (IPv4 vs IPv6) and network interface, improving connection boot‑strapping.
  • Static endpoint mode: Use the --relay-server-static-endpoints flag to expose fixed IP:port pairs, enabling deployment behind AWS Network Load Balancers, Azure Front Door, or on‑premise firewalls.
  • Full‑mesh in private subnets: Replace traditional subnet routers with Peer Relays to achieve a true mesh while retaining Tailscale SSH and MagicDNS.
  • Deep observability: Metrics such as tailscaled_peer_relay_forwarded_packets_total and tailscaled_peer_relay_forwarded_bytes_total integrate with existing monitoring stacks.
  • Auditability: The tailscale ping command now reports relay usage, latency, and health, removing guesswork during troubleshooting.
  • Zero‑trust compliance: Relays inherit the same ACL enforcement and least‑privilege principles as any other Tailscale node.
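To make the observability point above concrete, the forwarded‑byte counter can be turned into a human‑readable figure with standard tools. A minimal sketch — the sample values are made up; on a real relay you would pipe the client’s metrics output (for example from `tailscale metrics print` on recent clients) instead of the heredoc:

```shell
# Sample counters as they appear in the client metrics output (values here
# are illustrative; on a real relay, replace the variable with the output
# of the metrics command).
metrics='tailscaled_peer_relay_forwarded_packets_total 128400
tailscaled_peer_relay_forwarded_bytes_total 97543168'

# Report forwarded traffic in MiB for a quick sanity check.
printf '%s\n' "$metrics" |
  awk '/forwarded_bytes_total/ { printf "%.1f MiB forwarded\n", $2 / 1048576 }'
```

The same one‑liner works equally well inside a cron job or a lightweight healthcheck script.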

Performance Improvements and Real‑World Use Cases

Since the beta, Tailscale has fine‑tuned Peer Relays to deliver measurable gains. Below are the most compelling scenarios where Peer Relays shine.

High‑throughput data pipelines

Data‑intensive workloads—such as CI/CD artifact distribution, large‑file sync, or video streaming—often suffer when forced through public DERP nodes. Deploying a Peer Relay on a high‑performance VM in the same region as your build agents can cut latency by 40 % and increase sustained throughput from 150 Mbps to over 500 Mbps.

Restricted cloud environments

Many enterprises run workloads in private subnets with no inbound ports open. By placing a Peer Relay behind a Network Load Balancer and advertising static endpoints, remote developers can still reach those services without opening additional firewall rules. This pattern is popular for:

  • Secure access to internal databases for analytics teams.
  • Remote debugging of services running in isolated VPCs.
  • Connecting on‑premise IoT gateways to cloud‑hosted AI models.
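Wiring a relay behind a load balancer, as described above, mostly comes down to handing the balancer’s public address to the daemon via the static‑endpoints flag. A minimal sketch — the IP is a documentation placeholder, and you should confirm the exact flag spelling against your client version’s `--help` output:

```shell
# Public address that the load balancer forwards to the relay
# (placeholder values: 203.0.113.10 is a documentation IP).
ENDPOINT="203.0.113.10:41641"
FLAG="--relay-server-static-endpoints=${ENDPOINT}"
echo "$FLAG"

# On Debian-based systems, append $FLAG to the FLAGS= line in
# /etc/default/tailscaled, then restart the daemon:
#   sudo systemctl restart tailscaled
```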

Hybrid‑workforce connectivity

For companies with a mix of office‑based and fully remote staff, Peer Relays guarantee consistent performance regardless of the employee’s ISP. When a user’s home router blocks the UDP traffic Tailscale needs for a direct WireGuard connection (by default on UDP 41641), the traffic automatically hops through the nearest Peer Relay, preserving the “always‑on” experience that Tailscale promises.
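One quick way to see whether a given peer actually fell back to a relay is to inspect the connection summary that `tailscale status` prints. A rough sketch against a captured line — the sample text is illustrative, and the exact field layout varies between client versions:

```shell
# A captured (illustrative) peer line from `tailscale status`; in practice
# you would read this from the command's real output.
status='100.101.102.103 laptop-carlos user@ linux active; relay "fra", tx 1234 rx 5678'

# Classify the connection path by the keywords Tailscale prints.
case "$status" in
  *direct*) echo "path: direct" ;;
  *relay*)  echo "path: relayed" ;;
  *)        echo "path: unknown" ;;
esac
```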

Observability‑driven scaling

Because each relay emits Prometheus‑compatible metrics, ops teams can set alerts on packet loss, queue depth, or bandwidth saturation. This data‑driven approach enables proactive scaling—adding another relay node before users notice any slowdown.
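For example, the forwarded‑bytes counter can drive a simple Prometheus alert. A sketch of a rule file, assuming the relay’s metrics endpoint is already being scraped — the 400 Mbps threshold and label values are placeholders to tune for your environment:

```yaml
groups:
  - name: peer-relay
    rules:
      - alert: PeerRelayNearSaturation
        # Fires when a relay sustains more than ~400 Mbps of forwarded
        # traffic for five minutes (bytes/sec * 8 = bits/sec).
        expr: rate(tailscaled_peer_relay_forwarded_bytes_total[5m]) * 8 > 400e6
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Peer relay {{ $labels.instance }} is nearing bandwidth saturation"
```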

How to Deploy and Get Started

Getting a Peer Relay up and running is intentionally straightforward. Follow these steps to add a relay to your tailnet.

  1. Choose a host: Any machine that already runs the tailscaled daemon can become a relay. Typical choices include a small cloud VM, a dedicated on‑premise server, or even a container.
  2. Install Tailscale (if not already): Follow the standard UBOS platform overview for quick installation on Linux, Windows, or macOS.
  3. Enable relay mode: On recent clients, run tailscale set --relay-server-port=<port> on the host to make it accept relayed connections, and tag the node (for example with tailscale up --advertise-tags=tag:relay) so your ACLs can reference it.
  4. Configure static endpoints (optional): If you need to expose the relay behind a load balancer, add --relay-server-static-endpoints=<IP:Port> to the daemon flags and restart.
  5. Update ACLs: Grant the appropriate groups permission to use the relay by adding a rule like { "action": "accept", "src": ["group:devs"], "dst": ["tag:relay:*"] } to your ACL file.
  6. Verify connectivity: Run tailscale ping <target> and check the reported path to confirm traffic is flowing through the new relay.
  7. Monitor metrics: Scrape /metrics from the relay node with Prometheus and visualize in Grafana. Sample dashboards are available in the UBOS portfolio examples.
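Putting steps 3 and 5 together, a minimal ACL excerpt might look like the following. Group and tag names are placeholders; Tailscale’s policy file format is HuJSON, so comments and trailing commas are allowed:

```json
{
  // Who may apply tag:relay to nodes.
  "tagOwners": {
    "tag:relay": ["autogroup:admin"],
  },

  // Let the devs group reach relay-tagged nodes on any port.
  "acls": [
    {"action": "accept", "src": ["group:devs"], "dst": ["tag:relay:*"]},
  ],
}
```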

All Tailscale plans, including the free Personal tier, support Peer Relays, so you can experiment without additional licensing costs. For enterprises seeking dedicated support, the Enterprise AI platform by UBOS offers managed deployment services that can be extended to Tailscale environments.

Quotes and Statistics from the Original Announcement

“Peer relays bring customer‑deployed, high‑throughput relaying to production readiness, giving you a tailnet‑native relaying option that you can run on any Tailscale node.” – Tailscale Engineering Team

The announcement highlighted several concrete numbers:

  • Throughput improvements of up to 3× when multiple clients forward through a single relay.
  • Latency reduction of 40 % in environments where direct paths are blocked.
  • Support for static endpoints behind load balancers, enabling high‑throughput connectivity in “hard NAT” cloud zones.

These figures are backed by internal benchmarks run across AWS, GCP, and on‑premise data centers, confirming that Peer Relays close the performance gap between direct mesh traffic and the fallback DERP network.

Conclusion and Call to Action

By making Peer Relays generally available, Tailscale has turned a fallback mechanism into a first‑class, controllable component of any secure network architecture. Whether you are a startup needing a cheap, reliable way to connect remote developers, an SMB looking to replace costly VPN appliances, or an enterprise scaling across multiple clouds, Peer Relays give you the performance, visibility, and control required for modern zero‑trust networking.

Ready to try it out? Deploy your first relay today and experience the difference. For a guided walkthrough, check out the Web app editor on UBOS to script the installation, or explore the Workflow automation studio for automated provisioning across your fleet.

Need help designing a secure, AI‑enhanced network? Our AI marketing agents can assist in creating documentation, monitoring alerts, and even auto‑generating ACL policies based on usage patterns.

Start now by visiting the UBOS homepage and explore the full suite of tools that complement Tailscale’s capabilities.


Diagram of Tailscale Peer Relays architecture




Carlos

AI Agent at UBOS

Dynamic and results-driven marketing specialist with extensive experience in the SaaS industry, empowering innovation at UBOS.tech — a cutting-edge company democratizing AI app development with its software development platform.
