Twitter and Facebook Profile Images: Already Optimized? Or Is There Room For Improvement?

In doing research on images on the web, I came to a surprising realization: social media profile pictures are not always fully optimized! I had assumed that once a social media giant received your profile photo upload, they would optimize it to ensure a fast load time for their sites, apps, APIs, and so on. But I kept seeing profile pictures that could be optimized further on pages that host Tweets or Facebook posts, and I began to have my doubts…

Let’s take a quick look at the landscape:

Twitter:
Twitter profile pictures have the format:

https://pbs.twimg.com/profile_images/ 31933001664344064/GIpZRS_G_bigger.jpg (space added to keep the UI from rendering the image)

We can use the HTTPArchive to look for the URL string “https://pbs.twimg.com/profile_images” in the requests for the top 1M desktop and mobile sites (Feb. 1, 2018 data sets), and we find a lot of results:

Mobile: 63,867

Desktop: 73,988
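If you want to reproduce these counts yourself, here is a minimal sketch using the BigQuery Python client (this assumes you have BigQuery credentials configured; the table is the same legacy-SQL HTTPArchive table used later in this post):

  from google.cloud import bigquery

  # Count Twitter profile image requests in the Feb. 1, 2018 mobile table
  client = bigquery.Client()
  job_config = bigquery.QueryJobConfig(use_legacy_sql=True)
  query = """
  SELECT COUNT(*) AS cnt
  FROM httparchive:runs.2018_02_01_requests_mobile
  WHERE url CONTAINS "https://pbs.twimg.com/profile_images"
  """
  for row in client.query(query, job_config=job_config):
      print(row.cnt)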

As expected, most of these images are small. The 90th percentile of the mobile Twitter image requests is 10 KB:

[Chart: Twitter profile image sizes (mobile data set), by percentile]

I did not show the max value on this chart – because at 560 KB (!!), it would totally skew the y-axis of the graph. The 99.9th percentile is around 56 KB – a bigger file for sure, but not huge for an image. So the number of large (in KB) Twitter images is a very small percentage (only 72 images over 50 KB in the mobile data set of 63k images).

Facebook:
What kind of numbers do we see in Facebook’s profile images? To request a Facebook profile image, you must use a Facebook Graph query like the following:

https://graph.facebook.com/ 654267067/picture?type=normal (space added to keep the UI from rendering the image)

This request redirects to, and then delivers, the profile image. We see a much smaller number of Facebook Graph requests in the HTTPArchive dataset:

Mobile: 1,807

Desktop: 2,386
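As a quick sanity check of the redirect behavior, here is a sketch using Python’s requests library (the ID is the example from above; at the time of writing these Graph picture URLs worked without an access token):

  import requests

  # The Graph response itself is a ~0 byte redirect; the actual image
  # lives at the redirect target.
  r = requests.get(
      "https://graph.facebook.com/654267067/picture?type=normal",
      allow_redirects=True, timeout=10)
  print("redirect chain:", [resp.status_code for resp in r.history])
  print("final image URL:", r.url)
  print("image size:", len(r.content), "bytes")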

Now, because these are redirects, the responses are all 0 bytes in size. To get the size of the image, we need to follow the redirect and obtain the size of the file it points to:

  SELECT
    redirect.url,
    redirect.respSize
  FROM
    httparchive:runs.2018_02_01_requests_mobile redirect
  JOIN (
    // this subquery grabs the Facebook Graph request
    SELECT
      url,
      respSize,
      status,
      resp_location,
      pageid
    FROM
      httparchive:runs.2018_02_01_requests_mobile
    WHERE
      url CONTAINS "https://graph.facebook.com"
      AND url CONTAINS "picture?type="
      AND respSize = 0
    ORDER BY
      respSize DESC ) initial
  ON
    // the JOIN matches the response location (the redirect target)
    // to the URL of the next request on the same page
    initial.pageid = redirect.pageid
    AND initial.resp_location = redirect.url

This lowers the number of values in the data set slightly:

Mobile: 1,671

Desktop: 2,246

Again, just looking at the mobile image breakdown:

[Chart: Facebook profile image sizes (mobile data set), by percentile]

We see that the files run larger than Twitter’s (56 KB at the 90th percentile), but again, up to the 90th percentile there is nothing extraordinarily bad.

But again, I have left off the max image size, which came in at a whopping 734 KB – again, a completely unexpected value. (I admit that I may have pulled some other Facebook images into this search; the query could be refined a bit more.)

Gravatar:
Gravatar is a service that aims to be your “profile pic for everywhere.” These URLs have the format:

https://s.gravatar.com/avatar/70633802ce502d2fa8b587c725227c06?s=80
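For the curious, the long hex string is the MD5 hash of the account’s trimmed, lower-cased email address, and s= sets the pixel size. A quick sketch (the email address below is made up):

  import hashlib

  def gravatar_url(email, size=80):
      # Gravatar hashes the trimmed, lower-cased email address with MD5
      digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
      return "https://s.gravatar.com/avatar/%s?s=%d" % (digest, size)

  print(gravatar_url("someone@example.com"))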

The HTTPArchive has just 52 entries in the mobile data set for Gravatar images, and yet one is 343 KB (the rest are all under 30 KB).

So, it appears that there are some “big picture” problems here.

The images from social media sites are not necessarily optimized, and we cannot just assume that they are “being taken care of by the 3rd party” like we might want to.

What steps can we take to optimize these huge images? From my HTTPArchive searches, I took the largest Twitter and Gravatar images that I had discovered. To quickly optimize these two images, I used Cloudinary’s remote fetch URL. Cloudinary is a cloud-based image delivery network that lets you perform on-the-fly transformations of images and deliver them quickly to your customers. In this case, I applied two simple transformations: f_auto and q_auto:

f_auto tells Cloudinary to serve the best image format available to the end user. Since I am testing in Chrome, this often means that JPEG images will be converted to WebP.

q_auto is automatic quality tuning. When you reduce the quality of an image, you discard image data (and some visual quality). Since smaller images download faster, q_auto finds the balance point between quality (where the human eye sees little or no difference) and file size.
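Here is a minimal sketch of building such a remote fetch URL with both transformations applied (“demo” is a placeholder cloud name – substitute your own Cloudinary account’s):

  from urllib.parse import quote

  def optimized(remote_url, cloud_name="demo"):  # "demo" is a placeholder
      # f_auto picks the best format for the browser; q_auto tunes quality
      return "https://res.cloudinary.com/%s/image/fetch/f_auto,q_auto/%s" % (
          cloud_name, quote(remote_url, safe=":/"))

  print(optimized("https://pbs.twimg.com/profile_images/"
                  "31933001664344064/GIpZRS_G_bigger.jpg"))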

Comparing the before and after results:
[Table: before/after file sizes for the largest Twitter and Gravatar images]

It is really clear that for these huge images, optimizing social media photos will save several hundred KB of data – this is a no-brainer!

It is also cool to note that Cloudinary has Facebook- and Twitter-specific APIs if you know the user’s Facebook ID number or Twitter username (read more in the documentation).
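Sketching what those look like: instead of fetch, the URL names the social network and the user. “demo” here is Cloudinary’s public demo cloud, and the username and ID are just examples:

  base = "https://res.cloudinary.com/demo/image"
  print(base + "/twitter_name/f_auto,q_auto/billclinton.jpg")  # by Twitter username
  print(base + "/facebook/f_auto,q_auto/65646572251.jpg")      # by Facebook ID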

But these are the outliers, right?
OK, you’re absolutely right – there is no way that every social media image will get over 95% smaller, like these two images did. So, as a test on fairly small social media images, let’s look at mine: run them through the same transformations, save the files to my computer, and compare the resulting files:

[Table: original vs. optimized file sizes for my own profile pictures]
What I find is that even a simple transformation of a small image has a large effect – 25-80% savings on files under 10 KB! It appears that if you use social media profile pictures on your website, optimizing these images could shave a KB here or there for you. The first delivery from Cloudinary is on the order of a few hundred ms slower, but once the files are generated and cached on the CDN, the difference in time is very small.

But, I hear the doubt in your head: there are over 60,000 of these – you can’t use a sample size of two to prove your point, Doug! Yes, sample size is important. So I grabbed a random set of 1,000 Twitter URLs from the HTTPArchive and loaded each one both as the original image and again through the same Cloudinary transformation. (I tested using Andy Davies’ WPT Bulk Tester.) I found that 67% of the images in my dataset had optimization potential; the other 33% were already optimized for their size.
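My actual test ran through WebPageTest, but a rough sketch of the per-image comparison looks like this (downloading each variant directly; sending an Accept header that includes WebP lets f_auto serve WebP, as Chrome would, and the cloud name is again a placeholder):

  import requests

  HEADERS = {"Accept": "image/webp,image/*,*/*"}  # mimic Chrome's Accept header

  def size_of(url):
      resp = requests.get(url, headers=HEADERS, timeout=10)
      resp.raise_for_status()
      return len(resp.content)

  def savings(twitter_url, cloud_name="demo"):  # "demo" is a placeholder
      optimized = "https://res.cloudinary.com/%s/image/fetch/f_auto,q_auto/%s" % (
          cloud_name, twitter_url)
      return size_of(twitter_url) - size_of(optimized)

  print(savings("https://pbs.twimg.com/profile_images/"
                "31933001664344064/GIpZRS_G_bigger.jpg"))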

Of the 673 images that could be improved, the median improvement was 717 bytes (the average was 2,613 bytes). The images that were already optimized ended up growing slightly during the optimization process (an average of 1,600 bytes; a median of 1,713 bytes).

Overall, the optimizable images shrank by an aggregate 1.76 MB, while the images that were already optimized grew slightly (533 KB in total). The net savings is still 1.23 MB of data over 1,000 images. If your application uses Twitter profile pictures, you may want to consider performing optimizations on the images delivered from Twitter.

[Chart: aggregate size change across the 1,000-image Twitter sample]

Conclusion:
In this study, my assumption was that social media giants like Twitter and Facebook had already tweaked user profile images for fast loading on their own properties, since they serve billions of these impressions every day. I found that this is not always true, and found examples of very large social media profile pictures in the wild (as seen in the HTTPArchive data). I further showed that simple transformations with Cloudinary drop these images to respectable sizes that will not impact page loads.

Finally, these same transformations also improved delivery of 67% of ‘regular’ profile pictures. If your site uses profile pictures from a third-party social media service, you may want to consider applying image optimization to these files in order to speed delivery of your content.

(Crossposted at dougsillars.com)
