How is Server-Timing used on the web?

I was curious to see where Server-Timing was implemented on the web, so I started searching the HTTP Archive for sites using it. Interestingly enough, there were no sites in the HTTP Archive with Server-Timing response headers before 3/1/2017. Since then its usage has been gradually increasing each month. As of July 2017, there are 72 sites and 352 HTTP responses containing Server-Timing headers.

Here’s a query that I wrote to find a list of sites with Server-Timing headers. You can run it on BigQuery here:

    SELECT p.url, p.rank, count(*) as freq
    FROM 
      ( SELECT pageid, url, rank 
        FROM [httparchive:runs.latest_pages] 
      ) p
    JOIN  
      ( SELECT pageid,url,respOtherHeaders 
        FROM [httparchive:runs.latest_requests] 
        WHERE LOWER(respOtherHeaders) CONTAINS "server-timing"
      ) r
    ON p.pageid = r.pageid
    GROUP BY p.url, p.rank
    ORDER BY freq DESC


Next, I was curious to see the actual Server-Timing headers to understand how they are being used. I wrote this query to extract the server-timing headers from the respOtherHeaders column. The query uses a lot of string manipulation to pull out the text, so I've added some comments to make it easier to follow:

    SELECT HOST(url), status, mimeType,
    IF (
        -- In respOtherHeaders, remove everything to the left of the server-timing header.   Is there a ; in the remaining string?
        RIGHT(respOtherHeaders,LENGTH(respOtherHeaders)-INSTR(LOWER(respOtherHeaders),"server-timing")+1) CONTAINS ";",
        -- Then strip out everything to the left of the server-timing header and everything to the right of the ";" character.
        LEFT(
            RIGHT(respOtherHeaders,
                LENGTH(respOtherHeaders)-INSTR(LOWER(respOtherHeaders),"server-timing")+1),
            INSTR(
                RIGHT(respOtherHeaders,
                    LENGTH(respOtherHeaders)-INSTR(LOWER(respOtherHeaders),
                    "server-timing")+1),
                ";")
            ), 
        -- If there are no ; chars, then the respOtherHeaders column ends with server-timing.  Remove everything to the left of server-timing
        RIGHT(respOtherHeaders,
            LENGTH(respOtherHeaders)-INSTR(LOWER(respOtherHeaders),
            "server-timing")+1)
    ) as ServerTimingHeader
    FROM [httparchive:runs.latest_requests] 
    WHERE LOWER(respOtherHeaders) CONTAINS "server-timing"

You can see it on BigQuery here.
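To make the nested RIGHT/LEFT/INSTR logic easier to follow, here is a rough JavaScript equivalent of the extraction (a sketch; the extractServerTiming helper is mine, and the sample value is abbreviated from a respOtherHeaders value that comes up later in this thread):

    // A rough JavaScript equivalent of the string extraction in the query above.
    function extractServerTiming(respOtherHeaders) {
      // Remove everything to the left of the server-timing header.
      var start = respOtherHeaders.toLowerCase().indexOf('server-timing');
      if (start === -1) return null;
      var tail = respOtherHeaders.slice(start);
      // If a ";" remains, keep everything up to and including it; otherwise the
      // column ends with the server-timing header, so keep the whole tail.
      var semi = tail.indexOf(';');
      return semi === -1 ? tail : tail.slice(0, semi + 1);
    }

    // Because respOtherHeaders is comma-delimited, everything up to the first ";"
    // can drag in neighboring headers, which is how values like cf-cache-status
    // show up in the extracted strings below:
    extractServerTiming('status = 200, server-timing = cold, cf-cache-status = MISS, strict-transport-security = max-age=5184000; includeSubDomains');
    // -> 'server-timing = cold, cf-cache-status = MISS, strict-transport-security = max-age=5184000;'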

Looking at the Server-Timing headers, I can see three different types of data being sent in these headers:

Timing metrics, such as application time, gateway time, image resize time, etc.:

    Server-Timing = app=133.99219512939;
    Server-Timing = gateway-time=25;
    Server-Timing = resizer=0.015;
    server-timing = view=2.188, db=0.000, total=61.743, x-runtime = 0.069661, x-ua-compatible = ie=edge,chrome=1, x-frame-options = SAMEORIGIN
    Server-Timing = app=53.393840789795;

Caching information:

    server-timing = hit, cf-cache-status = HIT
    server-timing = cold, cf-cache-status = MISS
    server-timing = hit, cf-cache-status = HIT
    server-timing = cold, cf-cache-status = MISS

Informational:

    server-timing = ibs_5318e9209631=1
    server-timing = ibs_1964fd84c9cea3=1
    server-timing = ibs_1880bf4b0f6311=1

Looking at all of the data in the 7/1/2017 tables, I can see that the current examples in the wild are split between these three types of data. The caching ones are mostly on images, the informational ones are set on JS and on responses with no content (i.e., 204s, likely beacon responses), and the actual timing metrics are spread across a wide range of content.

It’ll be interesting to see how this evolves as Server-Timing adoption increases over time.


Great stuff, thanks for sharing!

I think cf-cache-status is a different header.

The numbers show that by an overwhelming margin it’s being used more to share metadata than timing. I’m surprised that it’s not closer.

Thanks for doing the legwork!

It looks like some of the headers in the respOtherHeaders field are ;-delimited while others are ,-delimited. That may explain how cf-cache-status snuck in there.

Also, @cvazac asked me offline what the trend line looked like for sites using Server-Timing.

The query for that was:

    SELECT label, count(distinct p.url)
    FROM 
      ( SELECT label, pageid, url, rank 
        FROM [httparchive:runs.2017_07_01_pages],
             [httparchive:runs.2017_06_15_pages],
             [httparchive:runs.2017_06_01_pages],
             [httparchive:runs.2017_05_15_pages],
             [httparchive:runs.2017_05_01_pages],
             [httparchive:runs.2017_04_15_pages],
             [httparchive:runs.2017_04_01_pages],
             [httparchive:runs.2017_03_15_pages],
             [httparchive:runs.2017_03_01_pages],
             [httparchive:runs.2017_02_15_pages],
             [httparchive:runs.2017_02_01_pages] 
      ) p
    JOIN  
      ( SELECT pageid,url,respOtherHeaders 
        FROM [httparchive:runs.2017_07_01_requests],
             [httparchive:runs.2017_06_15_requests],
             [httparchive:runs.2017_06_01_requests],
             [httparchive:runs.2017_05_15_requests],
             [httparchive:runs.2017_05_01_requests],      
             [httparchive:runs.2017_04_15_requests],
             [httparchive:runs.2017_04_01_requests],
             [httparchive:runs.2017_03_15_requests],
             [httparchive:runs.2017_03_01_requests],
             [httparchive:runs.2017_02_15_requests],
             [httparchive:runs.2017_02_01_requests] 
        WHERE LOWER(respOtherHeaders) CONTAINS "server-timing"
      ) r
    ON p.pageid = r.pageid
    GROUP BY label

@cvazac @paulcalvano cf-cache-status is a header that Cloudflare includes in HTTP responses when its CDN proxy mode is on. However, notice that in the data above it is always preceded by a hit or cold value. This cannot be a coincidence. It seems that these websites are using Server-Timing to make the value of cf-cache-status (HIT or MISS) available to JavaScript in their client-side web apps.

I don’t know how they’re doing it, though. AFAIK, if Cloudflare has a HIT, the request doesn’t even reach the website’s origin server, so how can that server set the Server-Timing header in that case?
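On the client-side piece, here’s a minimal sketch (not from the thread) of how a page could read those values from JavaScript via the serverTiming field on Resource Timing entries, assuming a browser that implements it and, for cross-origin resources, a Timing-Allow-Origin response header:

    // Log every Server-Timing metric exposed on resource entries, e.g. a CDN
    // cache hit/miss flag surfaced as a metric named "hit" or "cold".
    performance.getEntriesByType('resource').forEach(function (entry) {
      (entry.serverTiming || []).forEach(function (metric) {
        console.log(entry.name, metric.name, metric.duration, metric.description);
      });
    });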

There were a few examples where the cf-cache-status header did not match up with the Server-Timing hit or cold value. For example:

    server-timing = miss, cf-cache-status = HIT

I suspect the site that implemented this was tracking its origin cache’s status in a Server-Timing header. If that is the case, and multiple cache tiers are in use (origin + CDN), then the Server-Timing header set by the origin server would likely be stored in the CDN cache and become more of an informational metric (perhaps useful for debugging) rather than a real-time cache status.

@paulcalvano I think we’re hitting some false positives in the above query; cf-cache-status, timing-allow-origin, and acao are all standalone header fields. I’m not sure what exactly is going wrong in the above query, but we can probably avoid a bunch of edge cases here by looking at the HAR data, which contains the JSON-encoded response headers.

    CREATE TEMPORARY FUNCTION getHeaders(arr STRING)
    RETURNS STRING
    LANGUAGE js AS """
      try {
        var headers = JSON.parse(arr);
        var st = headers.find(function(e) {
          return e['name'].toLowerCase() === 'server-timing'
        });
        return st ? st['value'] : null;
      } catch (e) {
        return [];
      }
    """;

    SELECT * FROM (
      SELECT
        url, getHeaders(JSON_EXTRACT(payload, '$.response.headers')) as st
      FROM httparchive.har.2017_07_01_chrome_requests
    ) WHERE st != ""

Some example Server-Timing values, based on the above:

    ibs_d60693ef000eb=1
    ibs_33968ba999d7f=1
    "ssr=0.161,fetch_data=0.098"
    ibs_e01814003491b=1
    ibs_14b97674ae49a5=1
    _v1_nearest_region=0.043
    "view=0.000, db=2.848, total=32.724"
    "view=0.000, db=2.140, total=24.670"
    ibs_10fcc10c812fdb=1
    ibs_10111b2d59e017=1
    ibs_5c6f3fa2213b1=1
    "view=129.949, db=183.955, total=361.542"
    ibs_394d30379c8bd=1
    "view=0.190, db=2.409, total=20.650"
    "view=0.243, db=1.935, total=206.689"
    "Bootstrap=27, SQL=22, Process=150, Total=177"

Thanks @igrigorik. I hadn’t tried querying the HAR files for this type of information before. Very cool!

It does look like there are still some discrepancies between the examples that both queries found.

The additional headers ended up in my output because of how the respOtherHeaders column is formatted with comma delimiters. If I had attempted to extract the server-timing header up to a comma, I’d have missed some Server-Timing data; when I extracted up to a semicolon, I picked up extra data.

If we take a look at pageid=78751326 and requestid=532345152, for example, I can see that respOtherHeaders contains the following:

    status = 200, x-content-type-options = nosniff, content-security-policy = upgrade-insecure-requests, server-timing = cold, cf-cache-status = MISS, strict-transport-security = max-age=5184000; includeSubDomains, access-control-allow-origin = *, link = <http://www.accommodationforstudents.com//images/properties/150118_Main.jpg>; rel="canonical", cf-ray = 379c546c3e641e8f-SJC, x-frame-options = SAMEORIGIN

I was able to run your query, but I do not see this example and others like it in the output. I do see it in the JSON payload for page = "http://www.accommodationforstudents.com/" and url = "https://img.gs/vvjnmlqfld/200/http://www.accommodationforstudents.com//images/properties/150118_Main.jpg", and since name = 'server-timing' and the value is not "", I’m not sure why it wasn’t caught…


Interesting. Zeroing in on that record in particular…

    CREATE TEMPORARY FUNCTION getHeaders(arr STRING)
    RETURNS STRING
    LANGUAGE js AS """
      try {
        var headers = JSON.parse(arr);
        var st = headers.find(function(e) {
          return e['name'].toLowerCase() === 'server-timing'
        });
        return st ? st['value'] : null;
      } catch (e) {
        return [e];
      }
    """;

    SELECT
      url,
      JSON_EXTRACT(payload, '$.response.headers') as st,
      getHeaders(JSON_EXTRACT(payload, '$.response.headers'))
    FROM httparchive.har.2017_07_01_chrome_requests
    WHERE page = "http://www.accommodationforstudents.com/"
    AND url = "https://img.gs/vvjnmlqfld/200/http://www.accommodationforstudents.com//images/properties/150118_Main.jpg"

If you run that, you get a JSON exception:

    SyntaxError: Unexpected token in JSON at position 688

The culprit is:

    {"name":"etag","value":""im-vGx8npx7m3l64n42Wy""}

Chrome’s export as HAR for that record returns correctly encoded output:

    {
      "name": "etag",
      "value": "\"im-vGx8npx7m3l64n42Wy\""
    },

It appears that we have an escaping issue somewhere in our pipeline; the plot thickens. :wink:

@rviscomi @patmeenan I’m trying to hunt down the source HAR, but not having a lot of luck. Any ideas what might be going on here? I suspect the HAR file is correct, but perhaps the BQ encode logic is not applying correct escaping?

Ooh - a mystery :). The requestid for this example matches up with pageid 78751326. So a search for

    SELECT pageid, wptid, wptrun, url
    FROM [httparchive:runs.2017_07_01_pages]
    WHERE pageid = 78751326

tells me that we’re looking at run #3 from test ID 170701_15_5RTJ. So this should be where you can download the HAR file from: http://httparchive.webpagetest.org/result/170701_15_5RTJ/3/details/.

A quick look at the HAR file shows that the value in the HAR is escaped (pasted below w/ spaces removed for readability):

    {
      "name": "etag",
      "value": "\"im-vGx8npx7m3l64n42Wy\""
    },

The quotes are 0x22 chars and appear to be escaped with a backslash, so it looks like this is not an issue with the HAR file.


Thanks for digging up the test ID. The HAR file that went into the BQ pipeline is here (chrome-Jul_1_2017/170701_15_5RTJ.har.gz). It is basically the same, just not pretty-printed in the raw form:

{"name":"etag","value":"\"im-vGx8npx7m3l64n42Wy\""}

Definitely looks like it is downstream in BQ somewhere.

I ran it, but I’m not seeing a JSON exception. Am I missing something?

Scroll all the way to the right. The getHeaders(JSON_EXTRACT(payload, '$.response.headers')) function call is what is returning the error.

Also, I ran the following query to understand the scope of the JSON parsing error (outside of the Server-Timing data):

    CREATE TEMPORARY FUNCTION getHeaders(arr STRING)
    RETURNS STRING
    LANGUAGE js AS """
      try {
        var headers = JSON.parse(arr);
        var st = headers.find(function(e) {
          return e['name'].toLowerCase() === 'date'
        });
        return "No Error";
      } catch (e) {
        return "Error";
      }
    """;

    SELECT getHeaders(JSON_EXTRACT(payload, '$.response.headers')) as header, count(*)
    FROM httparchive.har.2017_07_01_chrome_requests
    GROUP BY header

It shows that there are a lot of errors parsing something as simple as a date out of the imported HAR files. Since it’s fairly common for ETags to be quoted, I imagine that this bug would affect other queries that are run against the JSON in the HAR.


@paulcalvano that’s a lot of errors :scream: … on the bright side, there’s probably a common fix for all of them.

The issue is that we’re double JSON-parsing the payload in the query. The etag value started out correctly encoded as \"foo\", but after the first parse it became "foo" and failed on the second parse.

Rather than using JSON_EXTRACT to narrow down the payload to an array and a UDF to parse the array, we should just pass the entire payload to the UDF and do all of the parsing there.
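To make the failure mode concrete, here’s a small plain-JavaScript illustration (mine, not from the thread) of what two rounds of JSON parsing do to the etag value from the record above:

    // The headers array as stored in the raw payload, with escapes intact:
    var escaped = '[{"name":"etag","value":"\\"im-vGx8npx7m3l64n42Wy\\""}]';
    JSON.parse(escaped)[0].value;  // -> '"im-vGx8npx7m3l64n42Wy"' (parses fine)

    // After one round of parsing the inner quotes are no longer escaped, so
    // parsing that text a second time throws the SyntaxError seen in BigQuery:
    var unescaped = '[{"name":"etag","value":""im-vGx8npx7m3l64n42Wy""}]';
    JSON.parse(unescaped);         // -> SyntaxError: Unexpected token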

Modified @igrigorik’s query from earlier:

    CREATE TEMPORARY FUNCTION getHeaders(payload STRING)
    RETURNS STRING
    LANGUAGE js AS """
      try {
        var $ = JSON.parse(payload);
        var headers = $.response.headers;
        var st = headers.find(function(e) {
          return e['name'].toLowerCase() === 'server-timing'
        });
        return st['value'];
      } catch (e) {
        return '';
      }
    """;

    SELECT
      *
    FROM (
      SELECT
        url,
        getHeaders(payload) AS st
      FROM
        httparchive.har.2017_07_01_chrome_requests )
    WHERE
      st != ""


There are 357 requests with Server-Timing headers. I also modified that a bit to count the number of distinct pages and got 72, which is also what @paulcalvano got in the timeseries chart.

    SELECT
      page
    FROM (
      SELECT
        page,
        url,
        getHeaders(payload) AS st
      FROM
        httparchive.har.2017_07_01_chrome_requests )
    WHERE
      st != ""
    GROUP BY
      page

Ahh, doh! Kudos for tracking this down, Rick.