Akamai’s State of the Internet report looks well done, but I was surprised by some of the numbers. The peak data rates seemed right, but the average connection speeds were surprisingly low.
In particular, we have fairly good data for the U.S. from the FCC’s SamKnows testing, and they find surprisingly few slowdowns due to congestion. With more than half the U.S. on 50-megabit+ cable or Verizon fiber, I expected much higher numbers. A guy at Akamai pointed me to their blog at https://blogs.akamai.com/2015/02/state-of-the-internet-metrics-what-do-they-mean-1.html which explained that the many HTTP requests and small files make a very large difference.
That pointed in turn to the HTTP Archive and its figure of as many as 100 requests on a web page. It’s easy to understand that makes a difference, but the Akamai data seem to suggest the average speed is half what I expected.
So I thought to ask on this list for opinions and data. How much of a slowdown in web browsing should we expect from all those requests, etc.? Has anyone done good research measuring the impact on speed?
Fastnet.news, 5gwnews.com firstname.lastname@example.org
Good questions. For a full and thorough description of what is going on here, I will point you to Ilya Grigorik’s book “High Performance Browser Networking”: https://hpbn.co
Check out chapters 1 and 10 (in fact, figure 10-5 may help a lot here).
When data is coming to your computer, let’s describe the connection as a pipe. Bandwidth is “how big” the pipe is. Latency is “how long” the pipe is, and describes how much time the data spends in the pipe before it gets to you.
There are a lot of reasons that developers distribute their content to data centers around the world, but reducing latency is a big reason. If I am in Seattle - and the file I want is in London - it takes longer to travel the pipe than if it were in San Francisco.
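To put rough numbers on that, here is a back-of-envelope sketch. The distances and the propagation speed (light in fiber travels at roughly 2/3 of c, about 200,000 km/s) are my assumed values, not figures from this thread:

```python
# Minimum round-trip time imposed by physics alone: the signal has to
# travel the distance there and back, even over an otherwise perfect path.
SPEED_IN_FIBER_KM_S = 200_000  # ~2/3 the speed of light in vacuum (assumed)

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on RTT in milliseconds for a given one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

# Approximate great-circle distances (assumed values)
print(f"Seattle -> San Francisco: {min_rtt_ms(1100):.0f} ms minimum RTT")
print(f"Seattle -> London:        {min_rtt_ms(7700):.0f} ms minimum RTT")
```

Real RTTs are higher (routing, queuing, last-mile delay), but even the physical floor makes the case for putting content close to users.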
Now - every time you open a connection to a new server - there are 3-5 round trips of packets between your computer and the server before any data is sent. Let’s assume each round trip is 25 ms - call it 100 ms per connection to a server.
If you are watching a video - you “pay” the connection startup of 100 ms once - and then your video can stream at 50 Mbps. If you are opening a webpage - with >100 requests (but from 20 different servers) - you have to pay this connection setup cost 20 times. Let’s say the webpage is 2 MB - which gives us averages of 20 KB per request and 100 KB per connection.
So if you have 100 KB at 50 Mbps - 50/8 = 6.25 megabytes per second - it takes around 16 ms for the files to download (per connection). But you also have this 100 ms “connection time” - which dwarfs the file download time. Even if all 2 MB are on 1 server - you have 100 ms to set up the connection, and 320 ms to download (~25% due to latency). And that is what is making websites slow.
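The arithmetic above can be sketched as a few lines of code. All the inputs are the thread’s assumed values (50 Mbps link, 25 ms RTT, ~4 setup round trips, a 2 MB page):

```python
# Back-of-envelope page-load model using the numbers from this thread.
LINK_MBPS = 50
BYTES_PER_SEC = LINK_MBPS / 8 * 1_000_000    # 6.25 MB/s
RTT_MS = 25
SETUP_ROUNDTRIPS = 4                          # ~3-5 round trips per new connection
SETUP_MS = SETUP_ROUNDTRIPS * RTT_MS          # ~100 ms per connection

def transfer_ms(num_bytes: float) -> float:
    """Time to move num_bytes at full line rate, in milliseconds."""
    return num_bytes / BYTES_PER_SEC * 1000

# 100 KB per connection at full line rate:
print(f"100 KB transfer:  {transfer_ms(100_000):.0f} ms")   # 16 ms
print(f"connection setup: {SETUP_MS} ms")                    # dwarfs the transfer

# Whole 2 MB page fetched over a single connection:
total = SETUP_MS + transfer_ms(2_000_000)
print(f"2 MB, 1 server:   {total:.0f} ms "
      f"({SETUP_MS / total:.0%} of it is connection setup)")
```

Multiply the setup cost by 20 connections and the latency term, not the bandwidth term, dominates the page load.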
This chart from Ilya’s book will also help:
Your last question on research - yes, everyone here is looking into how to make webpages faster. However, everything described above is much more complicated in practice, and the way a webpage is built will also affect how long the page takes to load. Finally - even if a website loads fast for people on 1 Gbps internet, it also has to load fast for their customers on cruddy hotel Wi-Fi or on a 2012 Samsung Galaxy S3.
Very helpful information, clearly put. But I’m also looking for research on the question of **how large** the impact is. Akamai’s numbers were 50% to 75% lower than I would have expected.
Has anyone tried to quantify the effect?
Video will max out your pipe - say 50 Mbps. Websites will be a small fraction of that - say 6 Mbps.
Video is ~50% of the internet.
Average the really big and the really small together, and you come out somewhere in the middle.
Also, a 20 or 100 KB file won’t peg your bandwidth. As the blog post states, your car CAN go 70 MPH, but in town you only go 25. These small requests finish downloading before the connection can ramp up to full speed (TCP slow start being one reason).
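A toy slow-start model makes the point. The parameters are common defaults (an initial window of 10 segments per RFC 6928, a ~1460-byte MSS, window doubling each RTT), but treat them as assumptions; the model also ignores the link-rate cap, loss, and TLS, so it only applies to transfers small enough to finish before ramp-up completes:

```python
# Toy TCP slow-start model: the congestion window starts small and
# doubles each RTT, so a short transfer finishes before it ever
# approaches line rate.
MSS = 1460          # bytes per segment (typical Ethernet MSS, assumed)
INIT_CWND = 10      # initial congestion window in segments (RFC 6928)
RTT_MS = 25

def rtts_to_send(num_bytes: int) -> int:
    """Round trips needed to deliver num_bytes under pure slow start."""
    sent, cwnd, rtts = 0, INIT_CWND, 0
    while sent < num_bytes:
        sent += cwnd * MSS   # one window's worth of data per round trip
        cwnd *= 2            # window doubles each RTT during slow start
        rtts += 1
    return rtts

for size in (20_000, 100_000):
    rtts = rtts_to_send(size)
    ms = rtts * RTT_MS
    mbps = size * 8 / (ms / 1000) / 1_000_000
    print(f"{size // 1000:>4} KB: {rtts} RTTs, {ms} ms, ~{mbps:.0f} Mbps effective")
```

Even on a 50 Mbps pipe, a 20 KB request completes in a couple of round trips at only a few Mbps effective throughput - which is exactly why averaging per-request speeds drags the “average connection speed” well below the line rate.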