Pardon the newbie question, but is it possible to download a .har file containing (a sampling of) the request/response headers from a crawl? I see that MySQL and BigQuery are available for queries, and HAR files are referenced by name in those queries, but I haven’t seen a URL for the .har files themselves.
If you have a test ID, you can download it from Google Cloud Storage: https://storage.googleapis.com/httparchive/chrome-Feb_1_2018/180201_0_10P.har.gz
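Once you have a .har.gz (fetched with curl, a browser, or gsutil), reading the request headers out of it is straightforward, since a HAR is just gzipped JSON following the HAR 1.2 schema. Here is a minimal sketch; it uses a tiny synthetic HAR in place of a real download so it is self-contained, and the `request_headers` helper name is mine, not part of any HTTP Archive tooling:

```python
import gzip
import json

def request_headers(har_gz_bytes):
    """Yield (url, headers dict) for each entry in a gzipped HAR file."""
    har = json.loads(gzip.decompress(har_gz_bytes))
    for entry in har["log"]["entries"]:
        req = entry["request"]
        yield req["url"], {h["name"]: h["value"] for h in req["headers"]}

# Tiny synthetic HAR standing in for a real download such as
# https://storage.googleapis.com/httparchive/chrome-Feb_1_2018/180201_0_10P.har.gz
sample = {
    "log": {"entries": [{
        "request": {
            "url": "https://example.com/",
            "headers": [{"name": "User-Agent", "value": "Mozilla/5.0"}],
        }
    }]}
}
har_gz = gzip.compress(json.dumps(sample).encode())

for url, headers in request_headers(har_gz):
    print(url, headers["User-Agent"])
```

For a real file you would read the bytes from disk (`open("180201_0_10P.har.gz", "rb").read()`) and pass them to the same helper; the response headers live under `entry["response"]["headers"]` in the same name/value list format.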
You can also use gsutil at the command line to copy a range of HARs (I believe the bucket is public, so you can list the directories). Something like: gsutil -m cp gs://httparchive/chrome-Feb_1_2018/* .
If you want to look at a specific test, you can go to the URLs in the HTTP Archive UI, find the page you are interested in, click through to the WebPageTest filmstrip, then click the test label to the left of the filmstrip. That will take you to the WebPageTest result for that run, which has a link to export the HAR.