The quiet plan to make the internet feel faster
A few months ago, I downgraded my internet, going from a 900Mbps plan to a 200Mbps one. Now, I find that websites can sometimes take a painfully long time to load, that HD YouTube videos have to stop and buffer when I jump around in them, and that video calls can be annoyingly choppy.
In other words, pretty much nothing has changed. I had those exact same problems even when I had near-gigabit download service, and I'm probably not alone. I'm sure many of you have also had the experience of cursing a slow-loading website and growing even more confused when a speed test says that your internet should be able to play dozens of 4K Netflix streams at once. So what gives?
Like any issue, there are many factors at play. But a major one is latency, or the amount of time it takes for your device to send data to a server and get data back; it doesn't matter how much bandwidth you have if your packets (the little bundles of data that travel over the network) are getting stuck somewhere. But while people have some idea about how latency works thanks to popular speed tests including a ping metric, common methods of measuring it haven't always provided a complete picture.
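To see why latency can matter more than bandwidth, here's a rough sketch with hypothetical numbers: loading a page typically takes several network round trips (DNS lookup, connection handshakes, then the request itself), so the wait is often dominated by round-trip time rather than raw transfer speed.

```python
# Rough model with made-up numbers: total load time is the round-trip waits
# plus the raw transfer time of the page itself.
def page_load_time_ms(rtt_ms, round_trips, page_kb, bandwidth_mbps):
    transfer_ms = (page_kb * 8) / (bandwidth_mbps * 1000) * 1000
    return rtt_ms * round_trips + transfer_ms

# A 500 KB page fetched over 4 round trips at 80 ms each:
fast_pipe = page_load_time_ms(rtt_ms=80, round_trips=4, page_kb=500, bandwidth_mbps=900)
slow_pipe = page_load_time_ms(rtt_ms=80, round_trips=4, page_kb=500, bandwidth_mbps=200)
print(round(fast_pipe), round(slow_pipe))  # 324 340
```

Notice that more than quadrupling the bandwidth shaves off only about 15 milliseconds, because the 320 milliseconds of round-trip waiting is the same on both plans.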
The good news is that there's a plan to almost eliminate latency, and big companies like Apple, Google, Comcast, Charter, Nvidia, Valve, Nokia, Ericsson, T-Mobile parent company Deutsche Telekom, and more have shown an interest. It's a new internet standard called L4S that was finalized and published in January, and it could put a serious dent in the amount of time we spend waiting around for webpages or streams to load and cut down on glitches in video calls. It could also help change the way we think about internet speed and help developers create applications that just aren't possible with the current realities of the internet.
Before we talk about L4S, though, we should lay some groundwork.
There are a lot of potential reasons. The internet is a series of tubes, er, vast network of interconnected routers, switches, fibers, and more that connect your device to a server (or, often, multiple servers) somewhere. If there's a bottleneck at any point in that path, your surfing experience could suffer. And there are a lot of potential bottlenecks: the server hosting the video you want to watch could have limited capacity for uploads; a vital part of the internet's infrastructure could be down, meaning the data has to travel further to get to you; your computer could be struggling to process the data; etc.
The real kicker is that the lowest-capacity link in the chain determines the limits of what's possible. You could be connected to the fastest server imaginable via an 8Gbps connection, and if your router can only process 10Mbps of data at a time, that's what you'll be limited to. Oh, and also, every delay adds up, so if your computer adds 20 milliseconds of delay, and your router adds 50 milliseconds of delay, you end up waiting at least 70 milliseconds for something to happen. (These are completely arbitrary examples, but you get the point.)
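The two rules above (the slowest link wins, and delays accumulate) can be sketched in a couple of lines, using the same arbitrary numbers from the paragraph:

```python
# Hypothetical path: link capacities in Mbps and per-hop delays in ms.
link_capacities_mbps = [8000, 1000, 10]  # server uplink, ISP backbone, your router
hop_delays_ms = [20, 50]                 # your computer, your router

throughput = min(link_capacities_mbps)   # the slowest link caps everything
total_delay = sum(hop_delays_ms)         # every delay simply adds up

print(throughput, total_delay)  # 10 70
```

Throughput is a minimum across the path while latency is a sum along it, which is part of why buying a bigger plan fixes one but not the other.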
In recent years, network engineers and researchers have started raising concerns about how the traffic management systems that are meant to make sure network equipment doesn't get overwhelmed may actually make things slower. Part of the problem is what's called buffer bloat.
Right? But to understand what buffer bloat really is, we first have to understand what buffers are. As we've touched on already, networking is a bit of a dance; each part of the network (such as switches, routers, modems, etc.) has its own limit on how much data it can handle. But because the devices that are on the network and how much traffic they have to deal with is constantly changing, none of our phones or computers really know how much data to send at a time.
To figure that out, they'll generally start sending data at one rate. If everything goes well, they'll increase it again and again until something goes wrong. Traditionally, that thing going wrong is packets being dropped; a router somewhere receives data faster than it can send it out and says, "Oh no, I can't handle this right now," and just gets rid of it. Very relatable.
While packets being dropped doesn't generally result in data loss (we've made sure computers are smart enough to just send those packets again if necessary), it's still definitely not ideal. So the sender gets the message that packets have been dropped and temporarily scales back its data rate before immediately ramping up again, just in case things have changed within the past few milliseconds.
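That ramp-up-until-drops, back-off, ramp-up-again cycle is roughly the classic "additive increase, multiplicative decrease" behavior of TCP congestion control. Here's a toy simulation; the 50Mbps "capacity" and the step sizes are made-up illustration values, not anything a real stack uses.

```python
# Toy AIMD loop: grow the send rate steadily, halve it when a drop is detected.
def simulate_aimd(capacity_mbps=50, rounds=20):
    rate = 1.0
    history = []
    for _ in range(rounds):
        if rate > capacity_mbps:  # router overwhelmed: packets get dropped
            rate /= 2             # multiplicative decrease (back off hard)
        else:
            rate += 5             # additive increase (probe for more room)
        history.append(rate)
    return history

rates = simulate_aimd()
# The rate sawtooths around the link capacity instead of settling on it.
```

The sawtooth is the point: the sender never knows the capacity, so it keeps poking past it, and every overshoot means dropped packets (or, as the next section explains, packets stacking up in a buffer).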
That's because sometimes the data overload that causes packets to drop is just temporary; maybe someone on your network is trying to send a picture on Discord, and if your router could just hold on until that goes through, you could continue your video call with no issues. That's also one of the reasons why lots of networking equipment has buffers built in. If a device gets too many packets at once, it can temporarily store them, putting them in a queue to get sent out. This lets systems handle massive amounts of data and smooths out bursts of traffic that could have otherwise caused problems.
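A buffer is really just a first-in, first-out queue sitting in front of a slower link. A tiny sketch with invented numbers shows the burst-smoothing idea:

```python
from collections import deque

# Toy buffer: a burst of 6 packets arrives at once, but the outgoing link
# can only drain 2 packets per tick. All numbers are made up.
buffer = deque()
drained, dropped = [], 0
arrivals = [6, 0, 0, 0]  # a burst, then quiet
BUFFER_LIMIT = 8

for tick, n in enumerate(arrivals):
    for pkt in range(n):
        if len(buffer) < BUFFER_LIMIT:
            buffer.append((tick, pkt))  # hold the packet for later
        else:
            dropped += 1                # no room left: the packet is lost
    for _ in range(min(2, len(buffer))):
        drained.append(buffer.popleft())

print(len(drained), dropped)  # 6 0
```

With the buffer, all six packets eventually get through over three ticks; without it, four of them would have been dropped on arrival.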
It is! But the problem that some people are worried about is that buffers have gotten really big to ensure that things run smoothly. That means packets may have to wait in line for a (sometimes literal) second before continuing on their journey. For some types of traffic, that's no big deal; YouTube and Netflix have buffers on your device as well, so you don't need the next chunk of video right this instant. But if you're on a video call or using a game streaming service like GeForce Now, the latency introduced by a buffer (or several buffers in the chain) could actually be a real problem.
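The "literal second" isn't an exaggeration, and the arithmetic behind it is simple: a packet at the back of a queue waits for all the data ahead of it to drain out at the link's speed. The numbers below are hypothetical but in a realistic range.

```python
# Queueing delay = data sitting ahead of you / how fast the link drains it.
def queueing_delay_ms(buffered_kb, link_mbps):
    return (buffered_kb * 8) / (link_mbps * 1000) * 1000

# A modest 64 KB buffer on a 100Mbps link barely registers...
print(round(queueing_delay_ms(buffered_kb=64, link_mbps=100), 1))   # 5.1
# ...but a bloated multi-megabyte buffer on a 10Mbps uplink adds a full second.
print(round(queueing_delay_ms(buffered_kb=1250, link_mbps=10), 1))  # 1000.0
```

This is why buffer bloat hurts video calls specifically: a one-second queue is invisible to a Netflix stream that's already buffered ahead, but it's fatal to a conversation.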