I think the article could get its point across at half the length, and without the second half being an AI-generated-sounding list of advantages; that space would have been better spent on more technical information.
Did you miss the point of the article? JPEG-XL encoding doesn't rely on quantisation to achieve its performance goals. It's a bit like how GPU shaders use floating point arithmetic internally but output quantised values for the bit depth of the screen.
The cliff-notes version is that JPEG and JPEG XL don't encode pixel values; they encode the discrete cosine transform (like a Fourier transform) of the 2D pixel grid. So what's really stored is more like the frequency and amplitude of change across pixels than individual pixel values, and the compression comes from the insight that some combinations of frequency and amplitude of color change are much more perceptible than others.
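A toy sketch of that idea (Python with NumPy/SciPy; classic JPEG-style transform coding, not the actual JPEG XL pipeline): take the 2D DCT of an 8x8 block, quantise the coefficients more coarsely at higher frequencies, then invert.

    # Toy JPEG-style transform coding of one 8x8 block (not real JPEG XL code).
    import numpy as np
    from scipy.fft import dctn, idctn

    block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 16  # smooth 8x8 gradient

    coeffs = dctn(block, norm="ortho")  # 2D DCT: pixels -> frequency/amplitude grid

    # The lossy step: quantise coefficients, with coarser steps at higher
    # frequencies (toward the lower-right), since those changes are the least
    # perceptible. This toy quantisation matrix grows with frequency.
    q = 1 + np.add.outer(np.arange(8), np.arange(8)) * 4.0
    coarse = np.round(coeffs / q)  # most high-frequency coefficients become 0

    restored = idctn(coarse * q, norm="ortho")  # decode: dequantise + inverse DCT
    print(np.abs(block - restored).max())  # small error on smooth content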
In addition to the other comments: you can have the internal memory representation of the data be Float32, but on disk this is encoded through some form of entropy coding. Typically, some of the earlier steps are preparation for the entropy coder: you make the data more amenable to entropy coding through a rearrangement that's either fully reversible (lossless) or near-reversible (lossy).
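A minimal illustration of that last point (NumPy plus zlib as a stand-in entropy coder; not the actual JPEG XL codestream): a fully reversible rearrangement like delta coding makes the same data far more amenable to entropy coding.

    # Sketch: a reversible rearrangement (delta coding) as entropy-coder prep.
    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.cumsum(rng.integers(-2, 3, 10_000)).astype(np.int16)  # smooth-ish data

    deltas = np.diff(signal, prepend=0).astype(np.int16)  # the rearrangement...
    assert np.array_equal(np.cumsum(deltas, dtype=np.int16), signal)  # ...is reversible

    # The entropy coder sees only a handful of distinct symbols in the deltas.
    print(len(zlib.compress(signal.tobytes())), len(zlib.compress(deltas.tobytes())))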
Interesting approach. It doesn't even introduce an extra rounding error, because converting from 32-bit XYB to RGB should be similar to converting from 8-bit YUV to RGB.
However, when decoding an 8-bit-quality image as 10-bit or 12-bit, won't this strategy just fill the two least significant bits with noise?
Could be noise, but finding a smooth image that rounds to a good enough approximation of the original is quite useful. If you see a video player talk about debanding, it is exactly that.
I don't know if JPEG XL constrains solutions to be smooth.
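To make the mechanics of the bit-depth question concrete (hypothetical values, plain Python, not libjxl code): a 10- or 12-bit output just rounds the same decoded float to a finer grid. Whether those extra bits carry real information or noise is exactly the debanding question above.

    # One decoded float sample, quantised to 8-, 10-, and 12-bit output grids.
    x = 0.7213  # decoded sample, normalised to [0, 1]

    for bits in (8, 10, 12):
        levels = (1 << bits) - 1     # 255, 1023, 4095
        code = round(x * levels)     # integer written to the output buffer
        err = abs(x - code / levels) # rounding error shrinks with bit depth
        print(f"{bits}-bit -> code {code}, error {err:.1e}")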
Mozilla also isn't interested in supporting it, it's not just Google. I also often see these articles that tout jpeg-xl's technical advantages, but in my subjective testing with image sizes you would typically see on the web, avif wins every single time. It not only produces fewer artifacts on medium-to-heavily compressed images, but they're also less annoying: minor detail loss and smoothing compared to jpeg-xl's blocking and ringing (in addition to detail loss; basically the same types of artifacts as with the old jpeg).
Maybe there's a reason they're not bothering with supporting xl besides misplaced priorities or laziness.
Mozilla is more than willing to adopt it. They just won't adopt the C++ implementation. They've already put into writing that they're considering adopting it when the rust implementation is production ready.
Seems like the normal usage to me. The post above lists other criteria that have to be satisfied, beyond just being a Rust implementation. That would be the consideration.
Mozilla indicates that they are willing to consider it given various prerequisites. GP translates that to being “more than willing to adopt it”. That is very much not a normal interpretation.
> To address this concern, the team at Google has agreed to apply their subject matter expertise to build a safe, performant, compact, and compatible JPEG-XL decoder in Rust, and integrate this decoder into Firefox. If they successfully contribute an implementation that satisfies these properties and meets our normal production requirements, we would ship it.
So you think it's silly to not want to introduce new potentially remotely-exploitable CVEs in one of the most important pieces of software (the web browser) on one's computer? Or are you implying those 100k lines of multithreaded C++ code are bug-free and won't introduce any new CVEs?
It’s crazy how people think using Rust will magically make your code bug- and vulnerability-free, and don’t consider that the programmer, more than the language, contributes to those problems…
It's crazy how anti-Rust people think that eliminating 70% of your security bugs[1] by construction just by using a memory-safe language (not even necessarily Rust) is somehow a bad thing or not worth doing.

[1] - https://www.chromium.org/Home/chromium-security/memory-safet...
> and don’t consider that the programmer, more than the language, contributes to those problems
This sounds a lot like how I used to think about unit testing and type checking when I was younger and more naive. It also echoes the sentiments of countless craftspeople talking about safety protocols and features before they lost a body part.
Safety features can’t protect you from a bad programmer. But they can go a long way to protect you from the inevitable fallibility of a good programmer.
It's not about being completely bug free. Safe rust is going to be reasonably hardened against exploitable decoder bugs which can be converted into RCEs. A bug in safe rust is going to be a hell of a lot harder to turn into an exploit than a bug in bog standard C++.
Multiple severe attacks on browsers over the years have targeted image decoders. Requiring an implementation in a memory safe language seems very reasonable to me, and makes me feel better about using FF.
I did some reading recently, for a benchmark I was setting up, to try and understand what the situation is. It seems things have started changing in the last year or so.
JPEG XL seems optimally suited for media and archival purposes, and of course this is something you’d want to mostly do through webapps nowadays. Even relatively basic use cases like Wiki Commons are basically stuck on JPEG for these purposes.
For the same reason it would be good if a future revision of PDF/A included JPEG XL, since PDF/A doesn't really have any decent codec for low-loss (but not lossless) compression (e.g. JPEG sucks at color schematics/drawings, and lossless is impractically big for them). It did get JP2, but support for that is quite uncommon.
I wish they separated the lossless codec into "WebPNG." WebP is better than PNG, but it's too risky to use (and tell people to use) a lossless format that is lossy if you forget to use a setting.
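For example with Pillow (filenames here are hypothetical), lossless WebP is opt-in on every save call, which is exactly the footgun:

    # WebP's lossless mode must be requested explicitly; the default is lossy.
    from PIL import Image

    img = Image.open("diagram.png")                   # hypothetical input
    img.save("diagram.webp")                          # default: lossy WebP
    img.save("diagram_lossless.webp", lossless=True)  # what you actually wanted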
So they "ignore" bit depth by using 32 bits for each sample. This may be a good solution but it's not really magic. They just allocated many more bits than other codecs were willing to.
It also seems like a very CPU-centric design choice. If you implement a hardware en/decoder, you will see a stark difference in cost between one which works on 8/10 vs 32 bits. Maybe this is motivated by the intended use cases for JPEG XL? Or maybe I've missed the point of what JPEG XL is?
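Rough numbers behind that cost concern (a back-of-the-envelope sketch; the 4K frame is just an example, not anything from the spec):

    # Working buffer size for a 3840x2160 RGB image at different sample widths.
    w, h, channels = 3840, 2160, 3
    for name, bytes_per_sample in [("8-bit", 1), ("10/12-bit (as u16)", 2), ("float32", 4)]:
        mib = w * h * channels * bytes_per_sample / 2**20
        print(f"{name}: {mib:.0f} MiB")  # 4x the memory and datapath width for float32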
I completely agree. Based on my limited experience with image upscaling, downscaling, and superresolution, saving video at a lower resolution is the second crudest way of reducing the file size.
The crudest is downsampling the chroma channel, which makes no sense whatsoever for digital formats.
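For reference, a NumPy sketch (not any codec's actual code) of what 4:2:0 chroma subsampling, the usual form of that downsampling, does to the sample count:

    # 4:2:0 subsampling: one chroma sample per 2x2 block of luma samples.
    import numpy as np

    h, w = 4, 4
    luma = np.arange(h * w, dtype=float).reshape(h, w)        # full-resolution Y
    chroma = np.arange(h * w, dtype=float).reshape(h, w) / 2  # full-res Cb (Cr alike)

    # Average each 2x2 block -> chroma stored at half resolution per axis.
    chroma_420 = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    # 4:4:4 keeps 3 full planes; 4:2:0 keeps Y plus two quarter-size planes.
    print(3 * luma.size, "samples at 4:4:4 vs", luma.size + 2 * chroma_420.size, "at 4:2:0")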
So 2^32 levels of bit depth? 4 bytes per sample seems like overkill.
Sorry, I missed that. How is the "floating point" stored in .jxl files?
Float32 has to be serialized one way or another per pixel, no?
https://github.com/mozilla/standards-positions/pull/1064
That is a perfectly clear position.
That's a perfectly reasonable stance.
Some links from my notes:
https://www.phoronix.com/news/Mozilla-Interest-JPEG-XL-Rust
https://news.ycombinator.com/item?id=41443336 (discussion of the same GitHub comment as in the Phoronix site)
https://github.com/tirr-c/jxl-oxide
https://bugzilla.mozilla.org/show_bug.cgi?id=1986393 (land initial jpegxl rust code pref disabled)
In case anyone is curious, here is the benchmark I did my reading for:
https://op111.net/posts/2025/10/png-and-modern-formats-lossl...
Google cited insufficient improvements, which is a rather ambiguous statement. Mozilla seems more concerned with the attack surface.