Is it possible to recover this information for individual requests? It would
be useful in computing a "carbon footprint" for a user session or a
particular server setup.
Are you asking about server response times and client-side rendering, or
about server-side data? The former is available in the HTTP Archive stats.
The latter has significant variance between architectures and is impossible
to aggregate usefully.
If you're looking for server-side stats you'll need to be specific about
the architecture. For example, a WordPress installation on a specific
Amazon EC2 instance type using RDS will be radically different from a Ruby
site hosted on a single, beefy bare-metal server.
I mean server response times plus separate network information (including
the WAN). In other words, is there any way to verify the 80/20
front-end/back-end split, or is it hand-waving?
A related question is how practical it is to recover the route a
particular response takes through the various networks, with the ability to
look up those networks.
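For reference, the crude approach I have in mind is a traceroute plus a whois/ASN lookup on each hop - roughly the sketch below (Python, shelling out to the system traceroute on a Unix-like host; the target hostname is only an example):

```python
import re
import subprocess

def route_hops(host):
    """Run the system traceroute and return the hop IP addresses.

    Assumes a Unix-like system with `traceroute` on the PATH. Each hop's
    owning network would then need a separate whois / ASN lookup.
    """
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True, check=True).stdout
    hops = []
    # Skip the header line, then pull the first IPv4 address from each hop.
    for line in out.splitlines()[1:]:
        m = re.search(r"\d+\.\d+\.\d+\.\d+", line)
        if m:
            hops.append(m.group(0))
    return hops

if __name__ == "__main__":
    print(route_hops("thewebindex.org"))
```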
My own interest is in tying this information to the data in The Web Index - http://thewebindex.org/ - to map the quality and efficiency of web access against
non-web factors.
In the HTTP Archive data it should be easy enough to calculate for the conditions which it tests. The "back-end time" is the "Time to First Byte" (being generous, as it will include DNS and socket connect as well) and the full time is the page load time. Time to First Byte / page load time = % of time spent on the back end.
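As a rough illustration of that arithmetic, here is a minimal Python sketch that reads a HAR export (the format the HTTP Archive collects) and computes the back-end share for the first page. The file name is a placeholder, and folding DNS and connect into "back end" is the same generous assumption as above, not an official HTTP Archive metric.

```python
import json

def backend_share(har_path):
    """Estimate the % of page load time spent on the 'back end' from a HAR file.

    'Back end' here generously includes blocked, DNS, connect, send and wait
    for the main document, mirroring the Time-to-First-Byte definition above.
    """
    with open(har_path) as f:
        har = json.load(f)

    page = har["log"]["pages"][0]
    onload_ms = page["pageTimings"]["onLoad"]   # full page load time in ms

    # The first entry is normally the main HTML document.
    timings = har["log"]["entries"][0]["timings"]
    # HAR uses -1 for phases that do not apply; treat those as 0.
    ttfb_ms = sum(max(timings.get(k, 0), 0)
                  for k in ("blocked", "dns", "connect", "send", "wait"))

    return 100.0 * ttfb_ms / onload_ms

if __name__ == "__main__":
    # "example.har" is a placeholder path for illustration.
    print(f"back-end share: {backend_share('example.har'):.1f}%")
```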
To see the actual user experience you need to go to a RUM tool like GA or SOASTA mPulse and see if they have released aggregate stats (or check for just your site).
I'm pretty sure 80/20 is generous (which is why including DNS and socket connect doesn't worry me much). I'd be surprised if it wasn't closer to 90/10.