Why JPEG XL Ignoring Bit Depth Is Genius (and Why AVIF Can't Pull It Off)

(fractionalxperience.com)

59 points | by Bogdanp 3 hours ago

7 comments

  • diffuse_l 2 hours ago
    I think the article could get its point across at half the length, without the second half being an AI-generated list of advantages; alternatively, that space could be used for more technical information.
    • evertedsphere 2 hours ago
      the article could be better if it weren't entirely "ai generated"
    • Gigachad 54 minutes ago
      Maybe we can AI summarise it back to the original prompt to save time.
  • est 2 hours ago
    > JPEG XL’s Radical Solution: Float32 + Perceptual Intent

    So 2^32 bit depth? 4 bytes per sample seems like overkill.

    • fps-hero 2 hours ago
      Did you miss the point of the article? JPEG-XL encoding doesn't rely on quantisation to achieve its performance goals. It's a bit like how GPU shaders use floating-point arithmetic internally but output quantised values for the bit depth of the screen.
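
      As a rough sketch of that analogy (a hypothetical helper, not libjxl's actual API): keep samples in float32 internally and quantise only at the output, to whatever bit depth the display has.

        import numpy as np

        def to_display(samples, bits):
            # Quantise float samples in [0, 1] to the display's integer bit depth;
            # the internal pipeline never commits to 8 vs 10 vs 12 bits.
            levels = (1 << bits) - 1                # 255, 1023, 4095, ...
            return np.round(np.clip(samples, 0.0, 1.0) * levels).astype(np.uint16)

        pixels = np.float32([0.0, 0.25, 0.5, 1.0])  # internal float representation
        print(to_display(pixels, 8))                # [  0  64 128 255]
        print(to_display(pixels, 10))               # [   0  256  512 1023]
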
      • est 1 hour ago
        > Did you miss the point of the article?

        Sorry, I missed it. How is the "floating point" stored in .jxl files?

        Float32 has to be serialized one way or another per pixel, no?

        • wongarsu 1 hour ago
          The CliffsNotes version is that JPEG and JPEG XL don't encode pixel values; they encode the discrete cosine transform (like a Fourier transform) of the 2D pixel grid. So what's really stored is more like the frequency and amplitude of change across pixels than individual pixel values, and the compression comes from the insight that some combinations of frequency and amplitude of color change are much more perceptible than others, so the less perceptible ones can be stored less precisely.
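
          A toy illustration of that energy compaction (plain NumPy/SciPy here, not JPEG XL's actual transform pipeline): a smooth 8x8 gradient collapses to a handful of DCT coefficients, and dropping the rest loses nothing.

            import numpy as np
            from scipy.fft import dctn, idctn

            x, y = np.meshgrid(np.arange(8), np.arange(8))
            block = (x + y) / 14.0                   # smooth diagonal gradient in [0, 1]

            coeffs = dctn(block, norm="ortho")       # 2D discrete cosine transform
            print(np.sum(np.abs(coeffs) > 1e-6))     # 9 of the 64 coefficients are nonzero

            coeffs[np.abs(coeffs) < 1e-6] = 0.0      # discard the numerically-zero rest
            restored = idctn(coeffs, norm="ortho")
            print(np.max(np.abs(restored - block)))  # ~1e-16: the block survives intact
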
        • jlouis 53 minutes ago
          In addition to the other comments: you can have the internal in-memory representation of the data be Float32, but on disk it is encoded through some form of entropy encoding. Typically, some of the earlier steps are preparation for the entropy encoder: you make the data more amenable to entropy encoding through a rearrangement that's either fully reversible (lossless) or near-reversible (lossy).
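
          A minimal sketch of that principle (generic delta coding plus zlib as the entropy coder, not JPEG XL's actual scheme): a fully reversible rearrangement makes smooth data dramatically more compressible.

            import zlib
            import numpy as np

            smooth = np.arange(4096, dtype=np.uint16) // 16         # slowly rising samples
            raw = smooth.tobytes()
            deltas = np.diff(smooth, prepend=smooth[:1]).tobytes()  # reversible rearrangement

            print(len(zlib.compress(raw)))     # e.g. a few hundred bytes
            print(len(zlib.compress(deltas)))  # far smaller: the deltas are almost all zero
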
        • tetris11 56 minutes ago
          The gradient is stored, not the points on the gradient
        • jstanley 1 hour ago
          No, JPEG is not a bitmap format.
  • fleabitdev 52 minutes ago
    Interesting approach. It doesn't even introduce an extra rounding error, because converting from 32-bit XYB to RGB should be similar to converting from 8-bit YUV to RGB.

    However, when decoding an 8-bit-quality image as 10-bit or 12-bit, won't this strategy just fill the two least significant bits with noise?

    • shiandow 36 minutes ago
      Could be noise, but finding a smooth image that rounds to a good enough approximation of the original is quite useful. If you see a video player talk about debanding, it is exactly that.

      I don't know if JPEG XL constrains solutions to be smooth.

  • kiicia 2 hours ago
    JPEG XL is fantastic, yet autocratic Google wants to force an inferior format
    • homebrewer 2 hours ago
      Mozilla also isn't interested in supporting it; it's not just Google. I also often see these articles touting JPEG XL's technical advantages, but in my subjective testing with image sizes you would typically see on the web, AVIF wins every single time. It not only produces fewer artifacts on medium-to-heavily compressed images, but they're also less annoying: minor detail loss and smoothing, compared to JPEG XL's blocking and ringing (in addition to detail loss; basically the same types of artifacts as the old JPEG).

      Maybe there's a reason they're not bothering to support XL besides misplaced priorities or laziness.

      • OneDeuxTriSeiGo 2 hours ago
        > Mozilla also isn't interested in supporting it

        Mozilla is more than willing to adopt it; they just won't adopt the C++ implementation. They've already put in writing that they're considering adopting it once the Rust implementation is production-ready.

        https://github.com/mozilla/standards-positions/pull/1064

        • masklinn 2 hours ago
          You have a really strange interpretation of the word “consider”.
          • mistercow 2 hours ago
            Seems like the normal usage to me. The post above lists other criteria that have to be satisfied, beyond just being a Rust implementation. That would be the consideration.
            • masklinn 1 hour ago
              Mozilla indicates that they are willing to consider it given various prerequisites. GP translates that into being “more than willing to adopt it”. That is very much not a normal interpretation.
              • OneDeuxTriSeiGo 1 hour ago
                From the link

                > To address this concern, the team at Google has agreed to apply their subject matter expertise to build a safe, performant, compact, and compatible JPEG-XL decoder in Rust, and integrate this decoder into Firefox. If they successfully contribute an implementation that satisfies these properties and meets our normal production requirements, we would ship it.

                That is a perfectly clear position.

        • m-schuetz 2 hours ago
          Now I'm feeling a bit less bad about not using Firefox anymore. Rejecting a decoder because it's C++ is <insert terms that may not be welcome on HN>
          • kouteiheika 1 hour ago
            So you think it's silly to not want to introduce new potentially remotely-exploitable CVEs in one of the most important pieces of software (the web browser) on one's computer? Or are you implying those 100k lines of multithreaded C++ code are bug-free and won't introduce any new CVEs?
            • archerx 1 hour ago
              It’s crazy how people think using Rust will magically make your code bug- and vulnerability-free, and don’t consider that the programmer, more than the language, contributes to those problems…
              • kouteiheika 4 minutes ago
                It's crazy how anti-Rust people think that eliminating 70% of your security bugs[1] by construction just by using a memory-safe language (not even necessarily Rust) is somehow a bad thing or not worth doing.

                [1] - https://www.chromium.org/Home/chromium-security/memory-safet...

              • mistercow 1 hour ago
                > and don’t think that the programmer more than the languages contribute to those problems

                This sounds a lot like how I used to think about unit testing and type checking when I was younger and more naive. It also echoes the sentiments of countless craftspeople talking about safety protocols and features before they lost a body part.

                Safety features can’t protect you from a bad programmer. But they can go a long way to protect you from the inevitable fallibility of a good programmer.

              • OneDeuxTriSeiGo 1 hour ago
                It's not about being completely bug-free. Safe Rust is going to be reasonably hardened against exploitable decoder bugs that can be converted into RCEs. A bug in safe Rust is going to be a hell of a lot harder to turn into an exploit than a bug in bog-standard C++.
              • drob518 1 hour ago
                Straw-man much?
          • mistercow 1 hour ago
            Multiple severe attacks on browsers over the years have targeted image decoders. Requiring an implementation in a memory safe language seems very reasonable to me, and makes me feel better about using FF.
          • OneDeuxTriSeiGo 1 hour ago
            It's not just "C++ bad". It's "we don't want to deal with memory errors in directly user facing code that parses untrusted contents".

            That's a perfectly reasonable stance.

      • demetris 1 hour ago
        I did some reading recently, for a benchmark I was setting up, to try and understand what the situation is. It seems things have started changing in the last year or so.

        Some links from my notes:

        https://www.phoronix.com/news/Mozilla-Interest-JPEG-XL-Rust

        https://news.ycombinator.com/item?id=41443336 (discussion of the same GitHub comment as in the Phoronix site)

        https://github.com/tirr-c/jxl-oxide

        https://bugzilla.mozilla.org/show_bug.cgi?id=1986393 (land initial jpegxl rust code pref disabled)

        In case anyone is curious, here is the benchmark I did my reading for:

        https://op111.net/posts/2025/10/png-and-modern-formats-lossl...

      • gcr 12 minutes ago
        I've had exactly the opposite outcome with AVIF vs JPEG-XL. I've found that jxl outperforms AVIF quite dramatically at low bitrates.
      • Retric 2 hours ago
        JPEG-XL is optimized for low to zero levels of compression, which aren't as commonly used on the web but definitely fill a need.

        Google cited insufficient improvements, which is a rather ambiguous statement. Mozilla seems more concerned with the attack surface.

        • formerly_proven 1 hour ago
          JPEG XL seems optimally suited for media and archival purposes, and of course this is something you’d want to mostly do through web apps nowadays. Even relatively basic use cases like Wikimedia Commons are basically stuck on JPEG for these purposes.

          For the same reason it would be good if a future revision of PDF/A included JPEG XL, since PDF doesn't really have any decent codec for low-loss (but not lossless) compression (e.g. JPEG sucks at color schematics/drawings, and lossless is impractically big for them). It did get JP2, but support for that is quite uncommon.

    • AlienRobot 1 hour ago
      I wish they'd separated the lossless codec into "WebPNG". WebP is better than PNG, but it's too risky to use (and tell people to use) a lossless format that turns lossy if you forget a setting.
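
      For example, with Pillow (file names hypothetical), WebP output is lossy unless you remember to ask otherwise:

        from PIL import Image

        img = Image.open("screenshot.png")
        img.save("compressed.webp")               # quietly lossy by default
        img.save("archived.webp", lossless=True)  # lossless only with the explicit flag
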
  • colonwqbang 1 hour ago
    So they "ignore" bit depth by using 32 bits for each sample. This may be a good solution but it's not really magic. They just allocated many more bits than other codecs were willing to.

    It also seems like a very CPU-centric design choice. If you implement a hardware encoder/decoder, you will see a stark difference in cost between one that works on 8/10 bits and one that works on 32. Maybe this is motivated by the intended use cases for JPEG XL? Or maybe I've missed the point of what JPEG XL is?

  • zokier 2 hours ago
    Working with a single fixed bit depth is, imho, different from being bit-depth agnostic. The same argument could be made about color spaces too.
  • WithinReason 1 hour ago
    Yes, this is great, but why don't we make the same argument for resolution too? I think we should!
    • shiandow 55 minutes ago
      I completely agree. Based on my limited experience with image upscaling, downscaling, and super-resolution, saving video at a lower resolution is the second-crudest way of reducing file size.

      The crudest is downsampling the chroma channel, which makes no sense whatsoever for digital formats.
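
      For reference, a toy sketch of what 4:2:0 chroma subsampling does to sample counts (naive decimation here; real encoders filter before downsampling):

        import numpy as np

        h, w = 8, 8
        luma = np.zeros((h, w))            # luma keeps full resolution
        chroma = np.zeros((h, w))
        chroma_420 = chroma[::2, ::2]      # each chroma plane keeps 1/4 of its samples
        print(luma.size, chroma_420.size)  # 64 16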