End-to-End Web Page Latency Breakdown

Hi,

I wanted to get your feedback on the breakdown between frontend (WAN) and backend (server response with database) latency. I have heard:

  • 80% frontend and 20% backend latency split

  • ~2.4 seconds frontend and ~600 ms backend (not optimized, no CDN)

Appreciate your feedback,

Roland

Beyond my competence. Sorry.

Two reactions:

  • There will be enormous variation in WAN latency, depending on tech (fixed/mobile/satellite) and distance between server and user
  • Regulators (FCC, Ofcom, ARCEP, CRTC) publish stats on network latency, which may be helpful

Rob

Yes, there is a lot of information on WAN latency, and it does vary with network conditions. However, I was more interested in the network latency range between the application and the database. Any help here would be appreciated.
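
For concreteness, here is roughly the kind of measurement I have in mind. This is only a sketch (the host and port are hypothetical placeholders); timing a TCP connect approximates one network round trip between the application server and the database:

```python
import socket
import statistics
import time

# Hypothetical placeholders -- substitute your own database endpoint.
DB_HOST = "db.internal.example.com"
DB_PORT = 5432  # PostgreSQL's default port

def connect_rtt_ms(host: str, port: int) -> float:
    """Time a TCP connect; the handshake costs roughly one round trip."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = [connect_rtt_ms(DB_HOST, DB_PORT) for _ in range(20)]
print(f"min {min(samples):.2f} ms / "
      f"median {statistics.median(samples):.2f} ms / "
      f"max {max(samples):.2f} ms")
```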

Roland

Is it possible to recover this information for individual requests? It would be useful in computing a ‘carbon footprint’ for a user session or a particular server setup.

Are you asking about server response times and client-side rendering, or about server-side data? The former is available in the HTTP Archive stats. The latter varies significantly between architectures and is impossible to aggregate usefully.

If you’re looking for server-side stats you’ll need to be specific about the architecture. For example, a WordPress installation on a specific Amazon EC2 instance type using RDS will be radically different from a Ruby site hosted on a single, beefy bare-metal server.

I mean server response times plus separate network information (including
WAN). In other words, is there any way to verify an 80/20 split, or is this
hand-waving?

The related question is how practical it is to recover the route a particular response took through various networks, with the ability to look up those networks.
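
As a rough sketch of the mechanics (assuming the standard traceroute utility is installed; note this only probes the forward path at the time of measurement, not the path an earlier response actually took):

```python
import re
import subprocess

TARGET = "www.example.com"  # hypothetical -- use the server that responded

# -n skips reverse-DNS lookups so we get raw hop IPs back quickly.
trace = subprocess.run(
    ["traceroute", "-n", TARGET],
    capture_output=True, text=True, check=True,
)

# Collect each hop's IP; these could then be mapped to the networks
# (ASNs) they belong to via a whois-style lookup.
hop_ips = re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", trace.stdout, re.M)
print(hop_ips)
```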

My own interest is tying this information to the data in The Web Index -
http://thewebindex.org/ - to map the quality and efficiency of web access
against non-web factors.

In the HTTP Archive data it should be easy enough to calculate for the conditions it tests. The “back-end time” is the “Time to First Byte” (being generous, as it also includes DNS and socket connect), and the full time is the page load time. Time to First Byte / page load time = the percentage of time spent on the back end.
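
As a worked example with the numbers from the opening post (~600 ms back end and ~2.4 s front end, so a ~3.0 s total page load):

```python
# Numbers from the opening post: ~600 ms back end (TTFB, generously
# including DNS and socket connect) plus ~2.4 s front end.
ttfb_ms = 600
page_load_ms = 600 + 2400  # full page load time

backend_share = ttfb_ms / page_load_ms
print(f"back end {backend_share:.0%} / front end {1 - backend_share:.0%}")
# -> back end 20% / front end 80%
```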

To see the actual user experience you need to go to a RUM tool like GA or SOASTA mPulse and see if they have released aggregate stats (or just check your own site).

I’m pretty sure 80/20 is generous (which is why including DNS and socket connect doesn’t worry me much). I’d be surprised if it wasn’t closer to 90/10.