NaaS and the Economics of Latency, as Illustrated by DoubleVerify

Most network folks think of latency primarily in terms of packet round trip time (RTT): how long it takes to send ICMP packets (or TCP trace packets, if you’re using fancier monitoring tools) to a particular destination IP and get a response back. Latency matters a lot to application performance, and ultimately to user experience. A lot of things need to happen in a short amount of time to maximize the user experience and achieve the desired business outcome. So there’s a direct economic impact of latency.
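As a rough illustration of what packet RTT means in practice, here’s a minimal Python sketch that times a TCP handshake to a destination as a latency proxy (raw ICMP sockets usually need elevated privileges). The host and port are placeholders of ours, not anything from the case study.

```python
# Minimal sketch: approximate packet RTT by timing a TCP handshake,
# since sending raw ICMP usually requires root privileges.
# The host and port below are illustrative placeholders.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the time, in milliseconds, for a TCP connect to complete."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; handshake time is our RTT proxy
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    samples = [tcp_rtt_ms("example.com") for _ in range(5)]
    avg = sum(samples) / len(samples)
    print(f"min/avg/max: {min(samples):.1f}/{avg:.1f}/{max(samples):.1f} ms")
```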

But there’s a whole other economic dimension to latency in networking these days: provisioning round trip time. Agility in procuring bandwidth and throughput makes a massive difference in a business’s ability to respond to opportunities. If you can’t get bandwidth within a short provisioning RTT to drive your applications and service user demand, then it doesn’t matter what the packet RTT is or could be.

We recently published a great case study with DoubleVerify which illustrates the importance of both of these types of latency to your business. In this post, we’ll explore these two RTT concepts and their economic impact, using DoubleVerify’s scenario as an illustrative reference.

Eyeball economics: latency and user experience

User experience is extremely time-sensitive. The human eye can discern a screen’s response within N milliseconds, and it’s common wisdom that users need an application to respond within a couple of seconds or they start mentally checking out: clicking away from commercial offers, interrupting their productive workflow, and getting frustrated. Frustration is brand-damaging, whether for a food delivery app company or for an internal IT team tasked with keeping employees productive and engaged.

For IT teams, employee experience has become a C-level concern. As Kim Huffman, CIO at TripActions, put it in a 2021 Forbes article, “IT leaders need to put employee aspirations and needs at the heart of all technology decisions.” A great digital workforce experience is a baseline expectation for millennial and younger workers. If you can’t deliver that reliably, you’re going to have a much harder time recruiting and retaining talent.

To repeat a point made ad nauseam in countless blog posts, the pandemic, the move to remote and hybrid work, the accelerated digitization of work, and the labor shortage have only accentuated workforce experience as a competitive differentiator for businesses. This is why many businesses are upgrading their WAN architectures to include more direct connectivity to important cloud and SaaS providers, so they can get lower network latency and higher reliability, and thus deliver strong, secure employee digital experiences.

In the world of consumer digital experience, we all know that latency is an essential factor in things like lost traffic and abandoned carts. But the latency demands go deeper than most of us realize. Advertising is a key pillar supporting the economics of the World Wide Web, and the latency requirements underlying ad delivery are far stricter than anything the user consciously perceives. DoubleVerify’s business is a great example.

The online advertising process is quite involved. Before an ad is served to user eyeballs, a whole series of steps must be completed, including a real-time bidding process in which multiple ad-serving vendors compete to serve ads and earn revenue. DoubleVerify is part of this process too, supplying intelligence to ensure that ads will be served to valid users. You can read the case study for more details, but DoubleVerify has to complete its process and return a validation answer within N milliseconds (!!). That’s not a lot of time.
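To make that time budget concrete, here’s a hypothetical Python sketch of a service that must answer inside a hard deadline or forfeit the impression. The deadline value and function names are illustrative only; this is not DoubleVerify’s actual pipeline or API.

```python
# Hypothetical sketch of a hard response deadline, purely to make the
# time budget concrete -- not DoubleVerify's actual implementation.
import asyncio

DEADLINE_MS = 100  # illustrative budget; the real figure is in the case study

async def validate_impression(request_id: str) -> bool:
    """Stand-in for the validation work (fraud/viewability checks, etc.)."""
    await asyncio.sleep(0.02)  # simulated processing + network time
    return True

async def handle_bid(request_id: str) -> bool:
    try:
        # If the answer isn't back inside the budget, the impression is lost.
        return await asyncio.wait_for(
            validate_impression(request_id), timeout=DEADLINE_MS / 1000
        )
    except asyncio.TimeoutError:
        return False  # missed the window: no validation, no revenue

if __name__ == "__main__":
    print(asyncio.run(handle_bid("req-123")))
```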

This is why they work hard to optimize their network for the lowest possible latency and highest possible predictability. If they can’t deliver on time, no $$. The economics of latency are ever-present for DoubleVerify.

Provisioning latency and time-to-market economics

Digital business moves fast. If individual user eyeballs expect a fast response to fulfill their demand for a positive user experience, then the businesses that serve those individuals also expect a fast response from other businesses to fulfill their users’ and employees’ digital demands. The cloud has set a very high expectation bar for the timely availability of infrastructure to drive applications and services. If one of your digital customers or prospects has a large opportunity that you can supply, you need to move quickly to secure that business and deliver a great overall product experience. If you can’t get to market fast, you lose out.

This is where provisioning latency comes into play for high-speed interconnectivity. Traditional telco ways of getting bandwidth are very slow. We’re talking weeks to contract and months to provision and deploy bandwidth. In the digital era, this is no longer acceptable in many situations.

Let’s say you launch a new analytics application that allows your customers to get greater intelligence and make smarter product or service choices. It takes off, and you have more demand than you expected, perhaps in a geography you didn’t anticipate when you first provisioned infrastructure and bandwidth to drive that data-intensive application. If you can’t surge your interconnection bandwidth to keep up with the demand for data movement and application workflows, you might end up disappointing users and your brilliant initiative could suffer a black eye. Not good enough.

That’s where NaaS makes a big difference. With PacketFabric NaaS, you can get near-instant provisioning of high-speed (multi-100G) connectivity between data centers, cloud providers, and SaaS providers. You can even economically burst data center interconnections on demand on an hourly basis. And with PacketFabric, you don’t compromise on quality: you get a highly redundant, private 50T+ global backbone with diverse availability zones, hardware stacks, and network paths in all our PoPs, backed by an openly published five-nines SLA with real teeth.

If you use our Cloud Router, you also get optimal routing latency. Unlike other cloud router solutions, which rely on a single routing instance and force suboptimal route hairpinning, we spin up distributed Cloud Router VRF resources across our backbone based on which cloud regions or colo data centers you want to connect.

You can provision PacketFabric services on demand via a self-service portal or REST API. This single interface also reduces complexity by abstracting the provisioning processes of many different colocation data centers and cloud providers, further cutting friction and latency from your provisioning workflow. High-speed interconnectivity has never been so easy and real-time.
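For a sense of what “provision via REST API” looks like in practice, here’s a hypothetical Python sketch that requests a point-to-point connection through a NaaS API. The base URL, endpoint path, field names, and auth scheme are placeholders of ours, not PacketFabric’s documented API; consult the actual API docs for the real schema.

```python
# Hypothetical sketch of provisioning connectivity through a NaaS REST API.
# Endpoint path, field names, and token handling are illustrative only --
# consult the PacketFabric API documentation for the real schema.
import os
import requests

API_BASE = "https://api.example-naas.net/v2"   # placeholder base URL
TOKEN = os.environ["NAAS_API_TOKEN"]           # assumed auth scheme

def provision_connection(source_port: str, dest_port: str, speed: str) -> dict:
    """Request a point-to-point circuit between two ports at a given speed."""
    resp = requests.post(
        f"{API_BASE}/connections",             # placeholder endpoint
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"source": source_port, "destination": dest_port, "speed": speed},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Illustrative identifiers; real port IDs come from your own inventory.
    print(provision_connection("PORT-NYC-1", "PORT-LAX-2", "100Gbps"))
```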

This combination of rapid provisioning and carrier-class quality is a major reason why DoubleVerify leverages the PacketFabric NaaS platform for high-speed interconnectivity. They can get to market faster with the secure, low-latency interconnections they need to meet customer demand and grow their business.

Not only does PacketFabric NaaS provision in minutes, but you can also consume it on a monthly basis, so you retain agility in your overall bandwidth footprint and don’t end up stranding WAN capacity.

Learn more

It’s time to think twice about continuing with slow telco-style interconnections and adopt cloud-like NaaS services. Check out the DoubleVerify case study, check out our service offerings, and either request a demo or register an account and get started today.