Hi there, the statistics you provide are awesome.
I’m wondering how they’re generated. Sometimes we see swings that don’t look natural.
Is it because the crawler was down for a bit, or because the list of representatives changed? I’d really like to understand the root cause of this behavior.
You’re right that some changes are not representative of the websites themselves but rather side effects of our testing environment.
For example, in May 2017 we changed the test agent OS from Windows to Linux, so that will have an effect on metrics like FCP.
Other times, weird things happen temporarily, like in January 2018; I don’t know or recall exactly what happened there. Unless there’s a systemic issue in the pipeline, I don’t worry too much about these anomalies.
Many of the known anomalies are annotated in the chart. For example, if you hover over the “K” flag in your chart, it should note that that was when we switched to Linux. We’re currently keeping track of these in the changelog.json file, but we’re behind on some major changes, like the switch from 500K Alexa domains to 1.3M Chrome UX Report origins in July. That will definitely have a big effect on many metrics, so it’d be good to annotate it in the charts.
Thank you so much for the detailed explanation.