Before HD, almost all video was non-square pixels. DVD is 720x480. SD channels on cable TV systems are 528x480.
>Before HD, almost all video was non-square pixels
Correct. This came from the ITU-R BT.601 standard, one of the first digital video standards, whose authors chose to define digital video as a sampled analog signal. Analog video never had a concept of pixels; it operated on lines instead. The rate at which you sampled a line could be arbitrary and affected only the horizontal resolution. The rate chosen by BT.601 was 13.5 MHz, which resulted in a 10/11 pixel aspect ratio for 4:3 NTSC video and 59/54 for 4:3 PAL.
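To make that arithmetic concrete, here's a quick sketch (my own illustration, not from the comment above; the 12 3/11 MHz and 14.75 MHz "square-pixel" reference rates are commonly quoted figures I'm assuming here): dividing the rate that would have produced square pixels for a 4:3 picture by the 13.5 MHz that BT.601 actually uses gives the pixel aspect ratio.

    from fractions import Fraction

    # Sampling rate chosen by ITU-R BT.601 for both 525- and 625-line systems.
    BT601_RATE = Fraction(13_500_000)  # 13.5 MHz

    # Rates that would have yielded square pixels for a 4:3 picture
    # (commonly quoted figures, assumed here for illustration).
    SQUARE_PIXEL_RATE = {
        "4:3 NTSC (525-line)": Fraction(135_000_000, 11),  # 12 3/11 MHz
        "4:3 PAL (625-line)": Fraction(14_750_000),         # 14.75 MHz
    }

    for system, rate in SQUARE_PIXEL_RATE.items():
        par = rate / BT601_RATE  # pixel aspect ratio, width relative to height
        print(f"{system}: PAR = {par}")  # prints 10/11 and 59/54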
>SD channels on cable TV systems are 528x480
I'm not actually sure about America, but here in Europe most digital cable and satellite SDTV is delivered as 720x576i 4:2:0 MPEG-2 Part 2. There are some outliers that use 544x576i, however.
It still looks surprisingly good, considering.
https://www.w6rz.net/528x480.ts
https://www.w6rz.net/528x480sp.ts
Doing my part and sending you some samples of UPC cable from the Czech Republic :)
720x576i 16:9: https://0x0.st/P-QU.ts
720x576i 4:3: https://0x0.st/P-Q0.ts
That one weird 544x576i channel I found: https://0x0.st/P-QG.ts
I also have a few decrypted samples from the Hot Bird 13E, public DVB-T and T2 transmitters and Vectra DVB-C from Poland, but for that I'd have to dig through my backups.
Even with modern digital codecs and streaming, there's usually chroma subsampling [1], so the color channels may have non-square "pixels" even if the overall pixels are nominally square. I most often see 4:2:0 subsampling, which still has square pixels, but at half resolution in each dimension. However, 4:2:2 is also fairly common, and it has half resolution in only one dimension, so the pixels are 2:1. You'd have trouble getting a video decoding library to mess this up, though.
[1]: https://en.wikipedia.org/wiki/Chroma_subsampling
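To show what those schemes mean for the stored planes, a minimal sketch (the 720x480 frame size is just an example I picked):

    # Chroma plane dimensions under a few common subsampling schemes,
    # using a 720x480 frame purely as an example.
    WIDTH, HEIGHT = 720, 480

    SCHEMES = {
        # name: (horizontal divisor, vertical divisor) for the chroma planes
        "4:4:4": (1, 1),  # no subsampling
        "4:2:2": (2, 1),  # half horizontal resolution -> 2:1 chroma "pixels"
        "4:2:0": (2, 2),  # half resolution in both dimensions -> square again
    }

    for name, (dx, dy) in SCHEMES.items():
        print(f"{name}: luma {WIDTH}x{HEIGHT}, chroma {WIDTH // dx}x{HEIGHT // dy}")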
Displaying content from a DVD on a panel with square pixels (LCD, plasma, etc.) required stretching or omitting some pixels. For widescreen content you'd need to stretch that 720x480 to 848x480, and for 4:3 content you'd need to stretch it to 720x540, or shrink it to 640x480, depending on the resolution of the panel.
CRTs of course had no fixed horizontal resolution.
Edit: I just realized I forgot about PAL DVDs which were 720x576. But the same principle applies.
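The stretching arithmetic above boils down to scaling one dimension by the display aspect ratio; here's a rough sketch (the helper function is my own illustration, and real players typically round to even or mod-16 widths, which is presumably where 848 comes from):

    from fractions import Fraction

    def square_pixel_size(width, height, dar, keep_height=True):
        """Frame size for showing a width x height storage frame at display
        aspect ratio dar on a square-pixel panel."""
        if keep_height:
            return round(height * dar), height
        return width, round(width / dar)

    print(square_pixel_size(720, 480, Fraction(16, 9)))                    # (853, 480), ~848/854 wide in practice
    print(square_pixel_size(720, 480, Fraction(4, 3)))                     # (640, 480)
    print(square_pixel_size(720, 480, Fraction(4, 3), keep_height=False))  # (720, 540)
    print(square_pixel_size(720, 576, Fraction(4, 3)))                     # (768, 576) for 4:3 PAL DVDs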
Just look at Japanese television… most channels get broadcast at 1440x1080i for 16:9 content instead of the full 1920x1080i (to save bandwidth for other things, I assume), so it's still very common with HD too.
It may also be due to legacy reasons. Japan was a pioneer in adopting HD TV years before the rest of the world, but early HD cameras and video formats like HDCAM and HDV only recorded 1080i at 1440x1080. If their whole video processing chain is set up for 1440x1080, they’d likely have to replace a lot of equipment to switch over to full 1920x1080i.
I'm confused... what does DVD, SD or any arbitrary frame size have to do with the shape of pixels themselves? Is that not only relevant to the display itself and not the file format/container/codec?
My understanding is that televisions would mostly have square/rectangular pixels, while computer monitors often had circular pixels.
Or are you perhaps referring to pixel aspect ratios instead?
I'm not 100% sure I understand your question, but in order to display a DVD correctly, you need to either display the pixels stored in the video stream wider than they are tall (for widescreen), or narrower than they are tall (for 4:3). Displaying those pixels 1:1 on a display with square pixels would never be correct for DVD video.
A square pixel has a 1:1 aspect ratio (width is the same as the height). Any other rectangular pixel with widths different than their heights would be considered "non-square".
F.ex. in case of a "4:3 720x480" frame… a quick test: 720/4=180 and 480/3=160… 180 vs. 160… different results… which means the pixels for this frame are not square, just rectangular. Alternatively 720/480 vs. 4/3 works too, of course.
Again I think you're talking about pixel aspect ratios instead, and not physically non-square pixels, which would be display-dependent. OP only said "square pixels" but then only talked about aspect ratios, hence my confusion.
OP quoted “non-square pixels” from the article, which is talking about pixel aspect ratios, i.e., width vs height. The implicit alternative to square in this context is rectangular, we’re not talking about circular or other non-rectangular shapes. Whenever the display aspect ratio is different than the storage or format aspect ratio, that means the pixels have to be non-square. For example, if a DVD image is stored at 720x480 and displayed at 4:3, the pixel aspect ratio would have to be 8:9 to make it work out: (720x8)/(480x9)==4/3. I believe with NTSC, DVDs drop a few pixels off the sides and use 704x480 and a pixel aspect ratio of 10:11.
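The general rule is pixel aspect ratio = display aspect ratio ÷ storage aspect ratio. A small sketch with exact fractions (which also covers the 1440x1080 broadcasts mentioned upthread):

    from fractions import Fraction

    def pixel_aspect_ratio(stored_w, stored_h, display_ar):
        """PAR needed for a stored_w x stored_h frame to fill display_ar."""
        return display_ar / Fraction(stored_w, stored_h)

    print(pixel_aspect_ratio(720, 480, Fraction(4, 3)))     # 8/9   (full-width NTSC DVD shown at 4:3)
    print(pixel_aspect_ratio(704, 480, Fraction(4, 3)))     # 10/11 (the 704-wide variant)
    print(pixel_aspect_ratio(720, 480, Fraction(16, 9)))    # 32/27 (widescreen NTSC DVD)
    print(pixel_aspect_ratio(1440, 1080, Fraction(16, 9)))  # 4/3   (the Japanese HD broadcasts above)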
Some modern films are still shot with anamorphic lenses because the director / DP likes that look, so in the VFX industry we have to deal with plate footage that way, and thus with non-square pixels in the software handling the images: the image has to be de-squashed to display correctly (i.e. so that round things still look round and are not squashed), even though the digital camera sensor pixels that recorded the image from the lens were square.
Even to the degree that full CG element renders (i.e. rendered to EXR with a pathtracing renderer) should really use anisotropic pixel filter widths to look correct.
Yes, and when working with footage shot with anamorphic lenses one will have to render the footage as non-square pixels, mapped to the square pixels of our screens, to view it at its intended aspect ratio. This process is done either at the beginning (conforming the footage before sending to editorial / VFX) or end (conforming to square pixels as a final step) of the post-production workflow depending on the show.
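As a rough illustration of that de-squash step, here's a minimal sketch using Pillow (the 2x squeeze factor and the file names are hypothetical; in production this would be EXR/DPX plates handled by Nuke, OpenImageIO, or similar rather than Pillow):

    from PIL import Image

    SQUEEZE = 2.0  # hypothetical anamorphic squeeze factor (pixel aspect ratio of the plate)

    plate = Image.open("plate_squeezed.png")      # e.g. 2048x1716, stored with 2:1 pixels
    display_width = round(plate.width * SQUEEZE)  # stretch horizontally for square-pixel viewing
    desquashed = plate.resize((display_width, plate.height), resample=Image.LANCZOS)
    desquashed.save("plate_desquashed.png")       # 4096x1716; circles look round again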
No, the author is highlighting the fact that the aspect ratio a video is stored in doesn’t always match the aspect ratio a video is displayed in. So simply calculating the aspect ratio based on the number of horizontal and vertical pixels gives you the storage ratio, but doesn’t always result in the correct display ratio.
Yes I think they are conflating square pixels with square pixel aspect ratios.
If a video file only stores a singular color value for each pixel, why does it care what shape the pixel is in when it's displayed? It would be filled in with the single color value regardless.
https://alvyray.com/Memos/CG/Microsoft/6_pixel.pdf
This reminded me of retina screenshots on mac — selecting a 100×100 area can produce a 200×200 file. Different cause but same idea - the stored pixels don’t always match what you see on screen.
This is indeed similar in the effects, but completely different in the cause to the phenomenon referenced in the article (device pixel ratio vs pixel aspect ratio).
What you're referring to stems from an assumption made a long time ago by Microsoft, later adopted as a de facto standard by most computer software. The assumption was that the pixel density of every display, unless otherwise specified, was 96 pixels per inch [1].
The value stuck and started being taken for granted, while the pixel density of displays started growing much beyond that—a move mostly popularized by Apple's Retina. A solution was needed to allow new software to take advantage of the increased detail provided by high-density displays while still accommodating legacy software written exclusively for 96 PPI. This resulted in the decoupling of "logical" pixels from "physical" pixels, with the logical resolution being most commonly defined as "what the resolution of the display would be given its physical size and a PPI of 96" [2], and the physical resolution representing the real number of pixels. The 100x100 and 200x200 values in your example are respectively the logical and physical resolutions of your screenshot.
Different software vendors refer to these "logical" pixels differently, but the names you're most likely to encounter are points (Apple), density-independent pixels ("DPs", Google), and device-independent pixels ("DIPs", Microsoft). The value of 96, while the most common, is also not a standard per se. Android uses 160 PPI as its base, and Apple for a long time used 72.
[1]: https://learn.microsoft.com/en-us/archive/blogs/fontblog/whe...
[2]: https://developer.mozilla.org/en-US/docs/Web/API/Window/devi...
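A tiny sketch of that logical-to-physical mapping (function names and scale values are illustrative only):

    # Convert between logical (device-independent) pixels and physical pixels
    # given a display scale factor (a devicePixelRatio-style value).

    def to_physical(logical_px: int, scale: float) -> int:
        return round(logical_px * scale)

    def to_logical(physical_px: int, scale: float) -> int:
        return round(physical_px / scale)

    # The Retina screenshot example from above: a 100x100 logical selection
    # on a 2x display produces a 200x200 physical image.
    print(to_physical(100, 2.0), to_physical(100, 2.0))  # 200 200

    # A display at 144 PPI against the 96 PPI baseline gets a 144/96 = 1.5 scale factor.
    print(to_physical(100, 144 / 96))                    # 150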
I might be misunderstanding what you're saying, but I'm pretty sure print and web were already more popular than anything Apple did. The need to be aware of output size and scale pixels was not at all uncommon by the time retina displays came out.
From what I recall only Microsoft had problems with this, and specifically on Windows. You might be right about software that was exclusive to desktop Windows. I don't remember having scaling issues even on other Microsoft products such as Windows Mobile.
Print was always density-independent. This didn't translate into high-density displays, however. The web, at least how I remember it, for the longest time was "best viewed in Internet Explorer at 800x600", and later 1024x768, until vector-based Flash came along :)
If my memory serves, it was Apple that popularized high pixel density in displays with the iPhone 4. They weren't the first to use such a display [1], but certainly the ones to start a chain reaction that resulted in phones adopting crazy resolutions all the way up to 4K.
It's the desktop software that mostly had problems scaling. I'm not sure about Windows Mobile. Windows Phone and UWP have adopted an Android-like model.
[1]: https://en.wikipedia.org/wiki/Retina_display#Competitors