Hello,
A CDN recently published data on the response sizes of its QUIC connections (connections carrying at least one GET request), reporting a median of about 7 KB. This does not align with HTTP Archive data, which show that on average each page pulls down 200 KB+ per connection. Whether the page comes down over HTTP/2 or QUIC/HTTP/3, 7 KB is far less than 200 KB.
Of course, the low median may simply mean the CDN’s “total connection dataset” contains many small connections, with only a couple of big connections per page. In many of my webpagetest.org tests against common web pages, I see that even with caching enabled the browser pulls down much more data per page, and the big chunks of data sometimes arrive over TLS (HTTP/2) rather than QUIC.
Is there any HTTP Archive data that can reveal the mean server response size per HTTP/2 or HTTP/3 connection per page? I could not find a way to get this with a BigQuery query over the crawl dataset. If we knew there are x HTTP/2 and y HTTP/3 connections per page, downloading a median of m1 or m2 bytes respectively (or av1 and av2 on average) per page, that would explain the discrepancy.
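For reference, this is the kind of query I was attempting, in case someone can correct it. It is only a sketch: I am assuming the `httparchive.crawl.requests` table, that the `summary` JSON column carries a `respSize` field and the `payload` JSON a `_protocol` field, and that summing bytes per page and protocol is a reasonable stand-in for per-connection totals, since the dataset does not appear to expose connection identifiers.

```sql
-- Sketch only: table, column, and JSON field names are assumptions
-- and may need adjusting; the date is a placeholder crawl date.
WITH per_page AS (
  SELECT
    page,
    JSON_VALUE(payload, '$._protocol') AS protocol,  -- e.g. 'HTTP/2', 'h3'
    SUM(CAST(JSON_VALUE(summary, '$.respSize') AS INT64)) AS bytes
  FROM `httparchive.crawl.requests`
  WHERE date = '2024-06-01'
    AND client = 'desktop'
  GROUP BY page, protocol
)
SELECT
  protocol,
  COUNT(0) AS pages,
  APPROX_QUANTILES(bytes, 100)[OFFSET(50)] AS median_bytes_per_page,
  AVG(bytes) AS avg_bytes_per_page
FROM per_page
GROUP BY protocol
```

Even if this runs, it gives bytes per page per protocol, not per connection; to get per-connection figures one would also need the number of connections each page opened per protocol, which I do not see in the schema.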
Thank you