We’ve all been there before: You’re browsing a website that has a ton of huge images of delicious food, or maybe that new gadget you’ve been eyeballing. These images tug at your senses, and for content authors, they’re essential in moving people to do things.
Except that these images are downright huge. Like really huge. On a doddering mobile connection, you can even see these images unfurl before you like a descending window shade. You’re suddenly reminded of the bad old days of dial-up.
This is a problem, because images represent a significant portion of what’s downloaded on a typical website, and for good reason. Images are expressive tools, and they have the ability to speak more than copy can. The challenge is in walking the tightrope between visually rich content, and the speedy delivery of it.
The solution to this dilemma is not one dimensional. Many techniques exist for slimming down unruly images, and delivering them according to the capabilities of the devices that request them. Such a topic can easily be its own book, but the focus of this post will be very specific: Google’s WebP image format, and how you can take advantage of it to serve images that have all the visual fidelity your images have now, but at a fraction of the file size. Let’s begin!
What is WebP, and Why Should I Even Care?
WebP is an image format developed and first released by Google in 2010. It supports encoding images in both lossless and lossy formats, making it a versatile format for any type of visual media, and a great alternative to both PNG and JPEG. WebP’s visual quality is often comparable to more ubiquitous formats. Below is a comparison of a lossy WebP image and a JPEG image:
In the above example, the visual differences are nearly imperceptible, yet the differences in file size are substantial. The JPEG version on the left weighs in at 56.7 KB, and the WebP version on the right is nearly a third smaller at 38 KB. Not too bad, especially when you consider that the visual quality between the two is comparable.
So the next question, of course, is “what’s the browser support?” Not as slim as you might think. Since WebP is a Google technology, support for it is fixed to Blink-based browsers. These browsers make up a significant portion of users worldwide, however, meaning that nearly 70% of browsers in use support WebP at the time of this writing. If you had the chance to make your website faster for over two-thirds of your users, would you pass it up? I think not.
It’s important to remember, though, that WebP is not a replacement for JPEG and PNG images. It’s a format you can serve to browsers that can use it, but you should keep older image formats on hand for other browsers. This is the nature of developing for the web: Have your Plan A ready for browsers that can handle it, and have your Plan B (and maybe Plan C) ready for those browsers that are less capable.
Enough with the disclaimers. Let’s optimize!
Converting your Images to WebP
If you’re familiar with Photoshop, the easiest way to get a taste for WebP is to try out the WebP Photoshop Plugin. After you install it, you’ll be able to use the Save As option (not Save For Web!) and select either WebP or WebP Lossless from the format dropdown.
What’s the difference between the two? Think of it as being a lot like the differences between JPEG and PNG images. JPEGs are lossy, and PNG images are lossless. Use regular old WebP when you want to convert your JPEG images. Use WebP Lossless when you’re converting your PNGs.
When you save images using the WebP Lossless format with the Photoshop plugin, you’re given no prompts. It just takes care of everything. When you choose regular old WebP for your lossy images, though, you’ll get something like this:
The settings dialogue for lossy WebP gives more flexibility for configuring the output. You can adjust the image quality by using a slider from 0 to 100 (similar to JPEG), set the strength of the filtering profile to get lower file sizes (at the expense of visual quality, of course) and adjust noise filtering and sharpness.
My gripe with the WebP Photoshop plugin is two-fold: There isn’t a Save for Web interface for it so that you can preview what an image will look like with the settings you’ve chosen, and if you wanted to save a bunch of images, you’d have to create a batch process. My second gripe probably isn’t a hurdle for you if you like batch processing in Photoshop, but I’m more of a coder, so my preference is to use something like Node to convert many images at once.
Converting Images to WebP with Node
Node.js is awesome, and for jack-of-all-trades types such as myself, it’s less about the fact that it brings JavaScript to the server, and more that it’s a productivity tool I can use while I build websites. In this article, we’re going to use Node to convert your JPEGs and PNGs to WebP images en masse with the use of a Node package called imagemin.

imagemin is the Swiss Army Knife of image processors in Node, but we’ll just focus on using it to convert all of our JPEGs and PNGs to WebP images. Don’t fret, though! Even if you’ve never used Node before, this article will walk you through everything. If the idea of using Node bugs you, you can use the WebP Photoshop plugin and skip ahead.
The first thing you’ll want to do is download Node.js and install it. This should only take you a few minutes. Once installed, open a terminal window, and go to your web project’s root folder. From there, just use Node Package Manager (npm) to install imagemin and the imagemin-webp plugin:

npm install imagemin imagemin-webp

The install may take up to a minute. When finished, open your text editor and create a new file named webp.js in your web project’s root folder.
Type the script below into the file (updated for modern Node by Luke Berry).

EDIT: Both of these snippets applied to earlier versions of imagemin-webp. The tool has since been updated to use native ESM in version 7.0, so consult its README for implementation details. You may need to replace require() with import and specify type: module in your package.json.

Original Script
var imagemin = require("imagemin"), // The imagemin module.
    webp = require("imagemin-webp"), // imagemin's WebP plugin.
    outputFolder = "./img", // Output folder
    PNGImages = "./img/*.png", // PNG images
    JPEGImages = "./img/*.jpg"; // JPEG images

imagemin([PNGImages], outputFolder, {
  plugins: [webp({
    lossless: true // Losslessly encode images
  })]
});

imagemin([JPEGImages], outputFolder, {
  plugins: [webp({
    quality: 65 // Quality setting from 0 to 100
  })]
});
Updated Script

const imagemin = require('imagemin'), // The imagemin module.
      webp = require('imagemin-webp') // imagemin's WebP plugin.

const outputFolder = './images/webp' // Output folder

const produceWebP = async () => {
  // Losslessly encode PNG images.
  await imagemin(['images/*.png'], {
    destination: outputFolder,
    plugins: [
      webp({
        lossless: true
      })
    ]
  })
  console.log('PNGs processed')
  // Lossily encode JPEG images at a quality setting of 65 (0 to 100).
  await imagemin(['images/*.{jpg,jpeg}'], {
    destination: outputFolder,
    plugins: [
      webp({
        quality: 65
      })
    ]
  })
  console.log('JPGs and JPEGs processed')
}

produceWebP()
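If you’re on the newer, ESM-only releases of these packages (imagemin-webp 7.0 and up), a rough equivalent of the updated script might look like the sketch below. Treat it as a sketch rather than gospel, since the exact API can shift between major versions; it assumes either an .mjs file extension or type: module in your package.json.

// webp.mjs: a sketch for ESM-only versions of imagemin and imagemin-webp.
import imagemin from 'imagemin'
import imageminWebp from 'imagemin-webp'

const outputFolder = './images/webp'

// Losslessly encode PNG images.
await imagemin(['images/*.png'], {
  destination: outputFolder,
  plugins: [imageminWebp({ lossless: true })]
})
console.log('PNGs processed')

// Lossily encode JPEG images at quality 65.
await imagemin(['images/*.{jpg,jpeg}'], {
  destination: outputFolder,
  plugins: [imageminWebp({ quality: 65 })]
})
console.log('JPGs and JPEGs processed')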
This script will process all JPEG and PNG images in the img folder and convert them to WebP. When converting PNG images, we set the lossless option to true. When converting JPEG images, we set the quality option to 65. Feel free to experiment with these settings to get different results. You can experiment with even more settings at the imagemin-webp plugin page.
This script assumes that all of your JPEG and PNG images are in a folder named img. If this isn’t the case, you can change the values of the PNGImages and JPEGImages variables. This script also assumes you want the WebP output to go into the img folder. If you don’t want that, change the value of the outputFolder variable to whatever you need. Once you’re ready, run the script like so:
node webp.js
This will process all of the images, and dump their WebP counterparts into the img folder. The benefits you realize will depend on the images you’re converting. In my case, a folder with JPEGs totaling roughly 2.75 MB was trimmed down to 1.04 MB without any perceptible loss in visual quality. That’s a 62% reduction without much effort! Now that all of your images are converted, you’re ready to start using them. Let’s jump in and put them to use!
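If you don’t want to remember the command, you can also wire it up as an npm script in your package.json (the "webp" script name below is just an example):

{
  "scripts": {
    "webp": "node webp.js"
  }
}

After that, npm run webp does the same thing.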
Using WebP in HTML
Using a WebP image in HTML is like using any other kind of image, right? Just slap that sucker into the <img> tag’s src attribute and away you go!
<!-- Nothing possibly can go wrong with this, right? -->
<img src="img/myAwesomeWebPImage.webp" alt="WebP rules.">
This will work great, but only for browsers that support it. Woe betide those unlucky users who wander by your site when all you’re using is WebP:
It sucks, sure, but that’s just the way front-end development is, so buck up. Some features just aren’t going to work in every browser, and that’s not going to change anytime soon. The easiest way we can make this work is to use the <picture> element to specify a set of fallbacks like so:
<picture>
  <source srcset="img/awesomeWebPImage.webp" type="image/webp">
  <source srcset="img/creakyOldJPEG.jpg" type="image/jpeg">
  <img src="img/creakyOldJPEG.jpg" alt="Alt Text!">
</picture>
This is probably your best bet for the broadest possible compatibility because it will work in every single browser, not just those that support the <picture> element. The reason for this is that browsers that don’t support <picture> will just display whatever source is specified in the <img> tag. If you need full <picture> support, you can always drop in Scott Jehl’s super-slick Picturefill script.
Using WebP Images in CSS
The picture gets more complex when you need to use WebP images in CSS. Unlike the <picture> element in HTML, which falls back gracefully to the <img> element in all browsers, CSS doesn’t provide an optimal built-in solution for fallback images. Solutions such as multiple backgrounds end up downloading both resources in some cases, which is a big optimization no-no. The solution lies in feature detection.
Modernizr is a well-known feature detection library that detects available features in browsers. WebP support just so happens to be one of those detections. Even better, you can do a custom Modernizr build with only WebP detection at https://modernizr.com/download, which allows you to detect WebP support with very low overhead.
When you add this custom build to your website via the <script> tag, it will automatically add one of two classes to the <html> element:
- The webp class is added when the browser supports WebP.
- The no-webp class is added when the browser doesn’t support WebP.
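If pulling in even a tiny Modernizr build feels like too much, a detection along these lines produces the same two classes. This is only a sketch, not Modernizr’s actual test: it checks whether the browser can encode a WebP data URI from a canvas, which in practice lines up with WebP support in Blink-based browsers.

// A sketch of a DIY WebP check. Browsers without WebP support return a
// PNG data URI from toDataURL(), so the prefix check fails for them.
var canvas = document.createElement("canvas");
var supportsWebP = !!(canvas.getContext && canvas.getContext("2d")) &&
  canvas.toDataURL("image/webp").indexOf("data:image/webp") === 0;

document.documentElement.classList.add(supportsWebP ? "webp" : "no-webp");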
With these classes, you’ll be able to use CSS to load background images according to a browser’s capability by targeting the class on the <html> tag:
.no-webp .elementWithBackgroundImage {
  background-image: url("image.jpg");
}

.webp .elementWithBackgroundImage {
  background-image: url("image.webp");
}
That’s it. Browsers that can use WebP will get WebP. Those that can’t will just fall back to supported image types. It’s a win-win! Except…
What About Users with JavaScript Disabled?
If you’re depending on Modernizr, you have to think about those users who have JavaScript disabled. Sorry, but it’s the way things are. If you’re going to use feature detection that can leave some of your users in the dark, you’ll need to test with JavaScript disabled. With the feature detection classes used above, JavaScript-less browsers won’t even show a background image. This is because the disabled script never gets to add the detection classes to the <html> element.
To get around this, we’ll start by adding a class of no-js to the <html> tag:

<html class="no-js">
We’ll then write a small piece of inline script that we’ll place before any other scripts:
<script>
document.documentElement.classList.remove("no-js");
</script>
This will remove the no-js class on the <html> element when parsed.
So what good does this do us? When JavaScript is disabled, this small script never runs, so the no-js class will stay on the <html> element. This means we can add another rule to provide an image type that has the widest support:
.no-js .elementWithBackgroundImage {
  background-image: url("image.jpg");
}
This covers all our bases. If JavaScript is available, the inline script is run and removes the no-js class before the CSS is parsed, so the JPEG is never downloaded in a WebP-capable browser. If JavaScript is indeed turned off, then the class is not removed and the more compatible image format is used.
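If it helps to see the whole picture in one place, the rules can also be combined, since the no-js and no-webp cases want the same fallback image. The selectors and file names below simply mirror the earlier examples:

/* Fallback for browsers that lack WebP support or have JavaScript disabled. */
.no-webp .elementWithBackgroundImage,
.no-js .elementWithBackgroundImage {
  background-image: url("image.jpg");
}

/* WebP for browsers the detection script has confirmed can handle it. */
.webp .elementWithBackgroundImage {
  background-image: url("image.webp");
}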
Now that we’ve done all of this, these are the use cases we can expect:
- Those who can use WebP will get WebP.
- Those who can’t use WebP will get PNG or JPEG images.
- Those with JavaScript turned off will get PNG or JPEG images.
Give yourself a hand. You just learned how to progressively use WebP images.
In Closing
WebP is a versatile image format that we can serve in place of PNG and JPEG images (where it’s supported). It can yield a substantial reduction in the size of images on your website, and as we know, anything that results in transferring less data lowers page load time.
Are there cons? A few. The biggest one is that you’re maintaining two sets of images to achieve the best possible support, which may not be practical for your website if there’s a huge set of imagery that you need to convert over to WebP. Another is that you’ll have to manage a bit of JavaScript if you need to use WebP images in CSS. Another notable one is that users who save your images to disk may not have a default program set up to view WebP images.
The takeaway is that the relatively low effort is worth the savings you’ll realize, savings that will improve the user experience of your website by allowing it to load faster. Users browsing via mobile networks will benefit especially. Now go forward and WebP to your heart’s content!
Wouldn’t it be possible to use @supports to override the default background image with the WebP image instead of relying on Modernizr? Or would it figure it’s supported, just ignoring the image provided in the url()?
No, @supports only checks whether the CSS engine supports a given property/value pair (and in some cases it even lies about that).
When I attempted to do the multiple background trick, I believe it ended up downloading the WebP source anyway, and failing. So it was sort of suboptimal that way. I’d have to look back and check, but there was a specific reason I didn’t go that route when I first wrote this article on my blog back in April.
Thanks for reading!
Also make sure the HTTP server supports webp images, correctly sending image/webp as the content type.
It happened to me that I happily developed a site with WebP images for Blink-based browsers, and couldn’t see anything when it went online because they were served as application/octet-stream.
Yes! I haven’t had to do this myself, but some configurations may require it. If you needed to do this in Apache for one reason or another, this should work:
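# A minimal sketch (not necessarily the exact snippet that was originally
# posted here): make sure Apache serves .webp files with the right MIME type.
AddType image/webp .webp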
Thanks for bringing this up! :)
full disclosure: I wrote this :)
Here is how you can use Nginx to serve WebP and JXR images to Microsoft Edge, and also how to convert to a JXR through Photoshop or the command line. It uses content negotiation, so you can continue using a regular image tag with/without srcset, and it has the original solutions for WebP images for Apache and Nginx that Ilya Grigorik and Eugene Lazutkin came up with!
If it helps anyone, I wrote a WordPress plugin to handle the conversion of images to WebP during upload. It doesn’t make the front-end load these, you’ll need to do that in your theme, but it’ll convert all images including custom image sizes.
https://github.com/randyjensen/rj-webp-converter
Pretty helpful! But be careful with regular expressions; they can be a bit tricky. You should rewrite instances of /(.jpg|.png)/ to /\.(jpe?g|png)$/i:

- A . just means “anything”, not necessarily a dot; you need to backslash it to be taken literally.
- jpe?g will match both jpg and jpeg.
- The $ makes sure the match is only at the end (otherwise jpgofacat.jpg would become ofacat.web when you run preg_replace).
- The i at the end makes it case insensitive.

I also recommend checking out PHP’s native WebP capabilities: http://php.net/manual/en/function.imagecreatefromwebp.php They aren’t necessarily going to be as fast as running cwebp from the server, but exec() is an incredibly dangerous PHP function to leave enabled, so any chance to drop that dependency is worth taking.

Excellent points Josh. Thanks!
I haven’t updated anything with that plugin in over two years so I think this is a good time to revisit it.
Don’t quote me on this, but I think you can limit the amount of access to other folders in the system by setting the open_basedir php.ini directive: http://php.net/manual/en/ini.core.php#ini.open-basedir

So you may still be able to somewhat safely use exec or shell_exec if you set this, but don’t take my word for it, test it out. I still disable both of these functions in my configuration, FWIW.

Ya, exec definitely needs to go. It looks like http://php.net/manual/en/function.imagewebp.php is what I need. This will allow me to get rid of the libwebp dependency and get the plugin into the official WordPress repo. This is what happens when you don’t touch a library for 2 years.
open_basedir restrictions only apply to PHP. exec() happens outside PHP (which is why it is dangerous). Those commands are subject to the usual r/w/x permission restrictions imposed by the file system, but that’s about it.

A better way to marry PHP and the system is the proc family of functions (e.g. http://php.net/manual/en/function.proc-open.php ). Here, PHP is actually in charge of launching and managing specific system processes, so open_basedir restrictions will apply. Rather than whitelist the whole of /usr/bin/ or anything crazy like that, you can add individual binaries like /usr/bin/gpg. proc is the best solution for individual projects or PEAR/PECL packages, but is a bit much for a general-use WordPress function. Native PHP is probably all you can get away with in that context. :)

Hi Jeremy – Do you have any data indicating that using webp reduces page load times? I wouldn’t assume that it does without testing it extensively. webp is much slower to decode than jpeg, so the fact that the images are smaller doesn’t necessarily mean you’ll get faster page loads. A web page isn’t a blob of uniform data that decodes and renders at a uniform rate. These are all completely different formats with different codecs. I tried getting real-world webp performance data from Google, but so far they won’t release any. There’s not much point in using webp without having detailed performance data showing a benefit, unless the goal is simply to reduce bandwidth charges regardless of page performance.
Decode times are trivial. With modern processing, there’s no reason why decoding any image, even a WebP image, should take so long that a significant reduction in data transferred over the wire would be a useless endeavor.
I wrote about this in my book, and through an example, I was able to reduce load times by about 20%-35%, depending on the DPI of the screen involved. If you’re interested in this chapter, I can send you a copy for free; just DM me on my Twitter account. If you can’t, let me know what your handle is. I’ll follow you so you can get in touch.
The best way to get real-world performance data is to create your own in this case. I use WebP on my blog for all raster images, and I would never recommend it to anyone if I didn’t feel it was good for performance. Can it be time consuming to implement for existing content? Sure. But I think the result is worth the effort if you can justify it.
Thanks for reading!
Jeremy, this part seems off:
“Decode times are trivial. With modern processing, there’s no reason for why decoding any image, even WebP images, should take so long that a significant reduction in data transferred over the wire would be a useless endeavor.”
If I’m reading you right, you’re saying that slower decoding can’t offset smaller file size in terms of combined time to download + time to decode/rasterize? That’s definitely not going to be true for all sorts of formats and scenarios. If that’s a widespread belief, we need to educate people. The basic math will be similar to what Cloudflare posted here when analyzing brotli: https://blog.cloudflare.com/results-experimenting-brotli/
If we’re looking at client side, then the math is download time + decode time. If the baseline is a 100 KiB JPEG that takes 10 ms to decode, on a 10 Mibit/sec connection, it would be 80 ms to download + 10 ms for decode = 90 ms total.
If we serve a webp instead, say it’s only 60 KiB now, but it takes five times as long to decode, then it’s 48 ms download + 50 ms to decode = 98 ms total.
That kind of result can easily happen for all sorts of codec changes. If we switched from gzip to xz for example (for all the text files), we’d have smaller download sizes but the slower decompression would easily outweigh the download savings, which is why we don’t do it. webp is much slower to decode than jpeg and png. How much slower is a mystery, since Google won’t release valid data. Tencent reports that it’s 4 or 5 times slower than PNG: https://isux.tencent.com/introduction-of-webp.html Note that PNG will in turn be slower than JPEG on almost any device (JPEG is magic basically, and lots of devices have hardware-accelerated JPEG decoding, or even dedicated “fixed-function” JPEG decoder chips).
Since they’re promoting the format, and since they have so many resources and dollars, I expect Google to fully document the webp format and to produce rich, valid performance data. It’s very strange that they’ve refused to do so. Webp doesn’t even have a spec or a standard, and it’s pretty buggy at this point. It’s risky to put valuable assets into a format that doesn’t have a spec and where a new decoder version can destroy files encoded using a previous version (that happened last year with webp). It’s also quite possible that webp is slowing page load time, that it’s so much slower to decode that it offsets the compression savings. This seems more likely with their lossy modes than their lossless, but no one seems to know for sure. There was a thread here on the webp list where I reviewed the “evidence” Google tried to pass off (the Tencent study from above was the only performance data in all their links): https://groups.google.com/a/webmproject.org/forum/#!searchin/webp-discuss/useful$20performance$20data/webp-discuss/4r6frraRtkg/nNdTEitlMwAJ
People shouldn’t use stuff just because Google tells them to. They need to provide evidence, and they should stop being so creepy about trashing MozJPEG all over the web. Their behavior is strange right now (https://blog.cloudflare.com/experimenting-with-mozjpeg-2-0/ https://medium.com/@duhroach/reducing-jpg-file-size-e5b27df3257c#.oj7an6ffs)
Here’s what I’ve found in my analysis of how Chrome seems to behave when I use WebP, and bear in mind, this is one test case, and is by no means exhaustive:
I have two images on a server:
https://jeremywagner.me/img/global/stpaul-2x.jpg
https://jeremywagner.me/img/global/stpaul-2x.webp
Both images are a shot of Downtown Saint Paul (because Minnesota.) The source is a JPEG that I found on Google Images. It’s 126 KB, and this is after the JPEG has been processed using imagemin-jpeg-recompress. The WebP version is a lossy WebP with a quality setting of 65, and it weighs in at 59.8 KB. These figures are from Chrome’s network panel, so they ostensibly include HTTP headers in the final size. These images are pretty comparable in terms of visual quality.
So let’s talk about decode time. We can hem and haw about decode times and how trivial/important it is. To me, it’s trivial. Because once I have an alternative image format in hand, and I know it’s reasonably smaller, all I care about is how long the browser takes to begin painting the image. Using Chrome’s timeline tool, here’s what I see for first paint times using Google Chrome’s “Good 3G” network throttling profile (1.5Mb/s) over ten trials for each image type (with caching disabled):
JPEG: 455.85ms
WebP: 320.63ms
This is about a 30% improvement. Of course, this test is for a single image, accessed directly outside of the context of a web page. But I think it’s a reasonably decent demonstration that decode time just isn’t that big of a factor. Remember that the source for your decode time data was published in 2014. Browser internals can change a lot in two years. If those figures are still accurate today, then their impact diminishes as connections get slower. Internet infrastructure and connectivity quality is still a problem in developing nations. When I test for speed, I test how people on the slowest connections may be affected. So I still routinely test on 2G and 3G simulations.
WebP is not a panacea. Neither is JPEG, GIF, SVG, or PNG. We have all of these different formats, and they’re good for all kinds of content. There are a few cases when I losslessly encoded PNGs to WebP, and the WebP format is actually larger. In those cases, I discard them and use the PNG image. Most of the time when I use WebP though, the result is comparable image quality in a significantly smaller image file. FWIW, I don’t work for Google, and I’m not beholden to them. I think WebP is a great contender format, and I’m going to recommend it where it makes sense.
WebP can have an astounding impact on load times under realworld conditions like a CMS, where the average user spends no time optimizing graphics for the web whatsoever. I recently updated a client’s WordPress site to automatically generate WebP images from the sources they upload and serve everything wrapped in a <picture> element (backgrounds too, using object-fit and a polyfill). Across some 4000 existing images, a mixture of JPEG, PNG, and GIF, the WebP counterparts (generated using only the -jpeg_like flag) ended up 59% smaller. (The server was already losslessly compressing uploaded images, with an average savings of about 10-15% for JPEGs and 50-75% for PNGs; the 59% WebP figure is in addition to that.)

This, of course, increased the document sizes a bit because of the extra <picture> markup and the object-fit polyfill, but gzip and http/2 optimizations make text a fairly small deal. There was also extra i/o overhead because of the filesystem checks for WebP sister images (particularly with srcsets, since there are that many more sources to check), causing the document compilation times to increase. But even so, page loads ended up being on average about 40% faster. WebP savings significantly outpaced document bloat and i/o, so the more images on a page, the higher the savings ended up being.

Nobody has reported any decode-related lag. I suspect machines that are susceptible to that are not going to be running supported browsers anyway.
@Josh:
” (The server was already losslessly compressing uploaded images, with an average savings of about 10-15% for JPEGs and 50-75% for PNGs; the 59% WebP figure is in addition to that).”
This statement raises a lot of questions. First of all, there is no such thing as losslessly compressing a JPEG. A JPEG is always lossy. Even if you open one and save it without making any changes, it is lossy. I assume you mean that they were being compressed without an immediate visual loss of quality, but that’s not the same as lossless. It’s just a “safe” compression level.
If I read further, my interpretation is that next you achieved an additional 59% savings due to the use of WebP on top of JPEGs that were compressed 15-20% in size. This can mean a whole lot of things.
First, the so-called “lossless” compression of JPEGs can mean they were not well compressed at all in the first place. So it could mean that if you’d set the JPEG quality level to 60-70% (the sweet spot), you can actually win up to 70% in file size. In that case you would push the JPEG format to its edge.
I do not believe that you managed to achieve an additional 59% savings on top of that. WebP versus maximum usable compression of JPEG surely does not lead to a 59% overall saving. What you managed instead is to save 59% of a JPEG that was barely compressed at all, which isn’t a fair comparison.
I should amend my last comment to stipulate that i/o-related lag, whether in generating the images in the first place or checking to see whether they exist, largely depends on the server. I wouldn’t attempt any disk- or db-intensive optimizations on a shared host without some serious cache in place, as chances are it will already be struggling enough just to render the basic theme.
@Ferdy,
JPEG compression in this case consists almost entirely of removing metadata from the image (i.e. the non-image bits). End users tend to upload images straight off the camera, so to speak, without any pre-upload effort spent on making more web appropriate versions. Metadata has its uses, sure, but for this particular site it is at best bloat and at worst a security risk, so the server does what the user does not and removes it. The relative percentage savings from this operation depends on the metadata:imagedata ratio. Even a tag-crazy person won’t be able to make much of a relative dent into a high-resolution photo, but some cheap low-res stock photo from Getty? The metadata will account for a larger percentage of the original file size. The particular site I mentioned tends to upload images somewhere in between the two extremes, so removing metadata saves about 10-15% on average.
But you are incorrect in saying that the JPEG container format leaves no room for lossless compression. It is true there isn’t much wiggle room, but there is wiggle room nonetheless. There are tools like jpegrescan that can squeeze out an extra 1-2% savings. Sometimes there is also room for improvement by making a JPEG progressive (if nothing else, they have a friendlier decode display).
To your other comment about the images not being well-compressed in the first place, of course they weren’t. We are in 100% agreement. I used the phrase “realworld conditions” to describe the real world. The average computer user has no understanding of the differences between JPEG and PNG, let alone the configuration options embedded within each individual format. Lossless compression can only optimize the encoding of what’s already there; it can never compete with just saving an image with appropriate configurations to begin with. ;)
So to reiterate, the images mentioned in my post were ones that were already there, uploaded by the client (plus the various thumbs generated by the CMS), and losslessly cleaned up by an automated server process. In this “realworld” use case, using the -jpeg_like flag on the cwebp binary produced files that were (lossily this time) 59% smaller than the compressed source. The WebP and source graphics are not 100% identical-looking. WebP has a slight gloss over certain types of texture that you can spot if you see two versions side-by-side. But the effects take humans into consideration and so aren’t used in ways that we’d be likely to notice if we didn’t have a comparison to make.

I didn’t mean to imply that WebP will always achieve 59% savings, only that it can perform even better than others were mentioning when you start with a more common (i.e. crappy) source. I implemented a similar WebP solution on one of my own web sites, but the source material there was better optimized to begin with, JPEG-only, and the images were qualitatively different than those used by the client (halftone filtered photographic content… a killer for both WebP and WebM optimization), so I only ended up saving about 10%. Haha.
@Josh
59% seems realistic to me, for unoptimised source images.
By “losslessly”, do you mean something like imagemagick convert’s “-quality 100” option? This could actually increase the source’s size … Or was it more like “-quality 75” (no big visual difference)? (keeping in mind that every app may define its own quality scale, so the meaning of “quality 75” is different in gimp, photoshop, imagemagick, …)
@Ferdy
Actually, there seems to be: convert’s “-quality 0” or “-quality 100”. I don’t really understand the difference. 100 usually increases file size; 0 seems lossless to me, reduces file size on high quality images, will increase size of previously down-sampled images (-quality 35); 1 is the worst quality setting for jpg in imagemagick.
@Valentin,
No, I’m not referring to the quality setting of the JPEG. That falls under “lossy”, even if the end result is literally bigger (like converting from 85 to 100). Though as @Ferdy and I were discussing, experimenting with a JPEG quality setting is the single biggest thing one can do toward web optimization. But for WordPress in particular, the quality setting of the source graphic is moot, since typically the site will only serve the auto-generated thumbnails to visitors, which are saved at a set quality and don’t really inherit any benefit from the source’s disk size. The only real benefit for WP to uploading an optimized source is that it will be able to generate the thumbs more quickly.
For lossless JPEG compression, I run two programs: jpegoptim and jpegrescan.
jpegoptim strips metadata and makes the JPEG progressive. Metadata tends to be the biggest savings since image programs and cameras and whatnot can inject a ton of data there. Progressiveness can also result in some disk savings, though not always.
jpegrescan looks for inefficiencies in the actual compression and will make improvements where it can. Its impact depends on the program that saved the image in the first place.
Good article, but I do think it skips over the most important part, which is savings vs quality. The example of the small thumbnail is very beneficial to webp, indeed the size is smaller and you can’t really see any difference.
It is a mistake though to take that as a conclusion that one should always try to serve all images as webp where possible, which is implied.
For example, as soon as you turn your eye to larger image sizes, quality differences become a lot more visible, and the sizing differences as well. WebP does tend to perform well in large images, yet also has huge issues, such as the aggressive smoothing of skin on photos of people.
Not directly aimed at the author of the article, but the general conclusion I read everywhere of webp being roughly 30% smaller is absolutely not a minimum; it is more like a maximum. The same goes for the conclusion that they offer similar quality; this hugely depends on the actual content.
So, in many cases, the advantage is far smaller than 30%, which leaves me to wonder…why bother? I’m not anti webp, I just think it’s oversold.
Whatever happened to WebPJS? https://webpjs.appspot.com/ It seems to be abandoned. Is there any better WebP JavaScript implementation out there?
@Josh: Thanks, you cleared up a lot of things. So to reiterate your process:

- user uploads original (with exif/meta, and typically not compressed)
- automated process removes metadata, saving 15-20%
- you run a lossy webp compression, saving 59% in addition
Sounds good, especially if the quality of output is found acceptable. But it still leaves open the question: what if step 3 was a lossy JPEG compression? How much would you save then? Would it be more or less than 59%, and what would it do to visual quality at different compression levels?
You don’t need to answer that, but I think that is the true question of webp vs jpeg compression. And to complicate things further, indeed the type of content matters as well :)
Saving an image correctly for the web is a good idea, but there isn’t really a magic quality setting; it depends a lot on the subjective eye and the content of the image. Pictures with sharp contrasting colors might have noticeable artifacts at even 95% (unless you disable chroma subsampling), while a desert landscape might be totally fine as low as 50%. All of the fancy image techniques employed on this particular project are ones that can safely be left to do their thing unsupervised.
WordPress uses a quality setting of 85% by default when making thumbnails. That seems to be a pretty good baseline for most types of pictures. Most themes should be using custom thumbnail sizes just about everywhere on a site, in which case the visitor is always benefiting from lossy compression, regardless of what the user originally uploaded.
But one place where this practice seems to be ignored is situations where an image has to expand indefinitely to cover some area, like a large hero. Good old background-size: cover. In fact, swapping out CSS backgrounds with responsive object-fit <picture> elements on my client’s site was what made the single biggest impact in terms of overall page size. This would have been the case even without WebP being involved in that chunk, though WebP inched it along that much further. The markup is a bit annoying and a polyfill had to be rewritten and tested, but the payoff was worth it.

@Josh.
It’s absolutely true that the content of the image decides which quality level you can get away with. At work though, we did not have the luxury to do this per image so we were forced to set an overall quality level. Through some extensive testing on large sets of images we managed to get it down to 60-65%. This is right on the absolute tipping point of artifacts. Note that the type of images were photographs of people using the product. Still we’re using this pretty low quality level even on large masthead images without complaints. If you look at the JPEG quality level vs file size charts, 60% is quite the optimum. Note that this curve is far from linear.
So in that light, I consider 85% very conservative. Especially for thumbnails, depending on their size, where you can even experiment with going below 60% since artifacts are often too small to notice.
Finally, I’d like to point out a technique that is quite underappreciated:
https://www.netvlies.nl/tips-updates/design-interactie/retina-revolution
So this technique uses very large images (in terms of resolution) and then uses very aggressive compression on them. Next, it lets the browser resize the image. The benefit is that in many cases, you can serve a single image for all, have support for retina, without having an increase in file size. The best of all worlds?
Main criticism is that it causes an increase in memory usage when these larger-than-needed images are decoded on the client. In practice, I have found that claim to not cause any real world issues as I have been using this technique for years on one of my websites.
How about using the Google pagespeed module to convert to webp only if the browser supports it? :)
https://developers.google.com/speed/pagespeed/module/filter-image-optimize
@Jeremy Those numbers look good. When you say “once I have an alternative image format in hand, and I know it’s reasonably smaller, all I care about is how long the browser takes to begin painting the image”, that’s close to what I care about too. You juxtaposed it against decode time, but I’d bundle all that together, so I should have made that clear.
The bottom line for me is page render time, including whatever images we need to render. So my only quibble with your metric is that I don’t care so much about how long the browser takes to begin painting the image, but rather how long it takes to finish rendering all of them and the page overall. One might be a good proxy for the other – I’m not sure.
The tencent data is from 2014, but the Google data is even older – 4 and 5 years old. I’m puzzled by why Google doesn’t release any valid data, and I was appalled by the evidence they tried to pass off when I asked for data (just a bunch of CDN puff pieces talking only about file sizes), so I might be biased from my annoyance with them. webp also makes me nervous because there’s no spec, and Google’s development and testing process seems casual and unsystematic – bad bugs crop up randomly, and they don’t release automated test suites with good test images. More broadly, their projects usually have a Mac monoculture, and they end up not testing enough on Windows machines, so I expect surprises when it comes to webp and Windows. The fact that their lossy conversion process and API are so poorly documented makes me cautious about converting assets to webp, even if the JPEGs and TIFFs should still exist – I don’t like having assets in sketchy formats.
A healthy amount of skepticism is good, and I know this is a pro-WebP piece, but that comes from the success I’ve had in applying it. If a WebP image is significantly smaller than its JPEG or PNG equivalent, it will get a head start on painting well before the conventional formats load, which, in my experience, offsets any disadvantage in decoding. This is especially true on very slow connections like you’ll find in developing nations. Speaking of which, WebP can be an asset to internet users reliant on restricted data plans, so speed isn’t necessarily the only aspect worthy of our consideration.
I agree that Google should provide some official metrics on its performance, but on the other hand, the format is also out there for anyone to use. Anyone willing to experiment can draw their own conclusions, provided they have the knowledge and ability to rigorously test (which I freely admit can be a barrier for less experienced developers.) Where Windows is concerned, I haven’t run into any problems in regard to WebP. I develop my side projects on a Macbook Pro at home, and I use a Windows machine at work, so I have a good balance between the two environments on a daily basis.
On that note, libwebp is also open source: https://chromium.googlesource.com/webm/libwebp From what I can see, the encoder and decoder source is available to peruse. I don’t necessarily buy into the idea that they’re being nakedly evasive. I don’t write C or C++ for a living, so I can’t account for every nook and cranny of the source, but it’s not as if Google is offering WebP as though it’s Soylent Green. More probable (at least to me) is that they have a bevy of high-profile projects going on at any one time, and WebP is low on their list of priorities.
There is also a container specification for WebP available at https://developers.google.com/speed/webp/docs/riff_container that might interest you. Though if you’ve bugged Google sufficiently, I can’t imagine that they haven’t sent this to you, so maybe this spec isn’t to your satisfaction.
Hope this helps. Thanks for challenging my assertions. I think the back and forth provides for some good reading outside of the scope of the article. :)
-j
@Ferdy JPEGs can be losslessly compressed. The fact that JPEG is a lossy format doesn’t mean that any given JPEG can’t be losslessly compressed (beyond its prior lossy compression or creation). Beyond the metadata, the quant tables can be optimized, and that’s one of things MozJPEG does. It usually comes down to taking the existing encoding (which was lossy) and finding ways to optimize that encoding.
@Josh, the I/O-related lag might be the biggest issue with using webp, according to this data from a Smashing article: http://mattshull.com/webptests.pdf
I was surprised to see the iPhone times deteriorate just from using webp for other browsers. Nothing should have changed for the iPhone, since it was still receiving JPEGs in tests C and D (because Safari doesn’t support webp). It might have been an htaccess-related lag. I think an important issue gets lost when people talk about webp and other improved compression formats. Deploying a new format not only changes the browser’s behavior – it changes the server’s behavior. It adds code, logic, and complexity to the server, and possibly adds lag time. It should be possible to eliminate most of that lag, but by default it looks like having Apache serve webp files to some browsers, and JPEGs to others, really slowed things down. This will take more digging and research.
As you mentioned, there’s a lot of room for JPEG optimization. It’s not clear to me that webp-equivalent size reductions can’t be achieved by using better JPEG tools like JPEG-Recompress and JPEGMin, both of which appear to be using the smallfry algorithm (see: https://github.com/danielgtaylor/jpeg-archive)
@Jeremy when I try to run the imagemin-webp plugin with node webp.js I get the following error:
C:\work\webp\webp.js:7
new imagemin().src(PNGImages).dest(outputFolder).use(webp({
^
TypeError: (input, output, opts) => {
if (!Array.isArray(input)) {
return Promise.reject(new TypeError(‘Expected an arr……
} is not a constructor
at Object. (C:\work\webp\webp.js:7:1)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
at Function.Module._load (module.js:300:12)
at Function.Module.runMain (module.js:441:10)
at startup (node.js:139:18)
at node.js:974:3
Any idea what the problem is?
Dave, I’m sorry! I saw this, but got sidelined. If you haven’t figured this out yet, just send me an email to [email protected], or bug me on Twitter @malchata and send me your code. I’ll look it over. Seems to me like it should be something simple that’s being overlooked.
Thanks for reading, man. :)
Hi, Jeremy. Thank you for the article, it’s pretty good. I’m already using WebP images in my projects, and I’ve run into a problem with using WebP as a background. For example, I have header.png for the .no-js and .no-webp classes and header.webp for the .webp class on the HTML tag. At first I thought that image downloading worked exactly as described in your article, but then I checked the page on mobile internet and noticed that the background image downloads twice – the PNG version at first and the WebP after. I looked into the Network tab in my Developer Tools and saw that my suspicion was confirmed – there were two downloaded background images. The PNG starts to download as the background for .no-js, then the .webp class is added to the HTML tag and the WebP image starts to download again. So, is there any workaround / another way to use WebP as a background from CSS?