The comparison between CRT and digital is not as simple as “625 vs 4K”. Those analogue signals were intended for a triangular-subpixel shadow mask with no concept of horizontal pixel count, making them, in effect, ∞×625i@50fps signals (1), compared to digital’s fixed 3840×2160@60fps square pixels regardless of subpixel arrangement.
It takes about a 6×6 square-pixel block to correctly represent a triangular subpixel mask, making 4K LCDs about the minimum required to properly view those 625i@50fps signals.
(1) I’m aware of optics limitations, CoC (circle of confusion), quantum effects, and terrestrial TV carrier bandwidth limitations, but still.
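For anyone who wants to plug in their own numbers, here’s a quick back-of-envelope sketch of that 6×6 claim. The 576 active-line figure and the 6-pixel block size are the only inputs; everything else is just arithmetic, so treat it as a way to play with the assumptions rather than a proof either way:

```python
# Rough sketch: how does a 4K panel compare against a 625-line signal
# if each phosphor triad needs a 6x6 square-pixel block?
VISIBLE_LINES = 576   # active lines of a 625-line (PAL) signal
BLOCK = 6             # square pixels per triad, per the claim above
UHD_W, UHD_H = 3840, 2160

needed_h = VISIBLE_LINES * BLOCK   # vertical pixels for one triad row per scanline
triads_w = UHD_W // BLOCK          # triad columns that fit in a 4K panel
triads_h = UHD_H // BLOCK          # triad rows that fit in a 4K panel

print(f"vertical pixels for {VISIBLE_LINES} triad rows: {needed_h}")
print(f"triad grid that fits in {UHD_W}x{UHD_H}: {triads_w}x{triads_h}")
```

Whether 4K counts as “about the minimum” then depends on how you map triads to scanlines, which is exactly the fuzziness the footnote is gesturing at.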
Given a high-enough-quality signal and fast-enough switching hardware inside the TV, you’re right. In practice, real-world shadow masks had a fixed resolution. It doesn’t matter how fine your control over the electron beam is if you’re still only capable of lighting colour phosphors of a limited resolution (between 500 and 800 triads across depending on how crappy/good your TV was, several times that for some widescreen CRTs). You can apply some trickery to partially light individual dots on the screen in low-light environments, but that requires the transmitter and receiver to map the input signal to the same shadow mask/aperture grille, or you’ll mess up the colours. Infinite horizontal resolution only works for black-and-white displays.
Emulating the exact pixel arrangement on a digital display would require some absurd resolution for sure, but back in the day the input signal rarely had that kind of resolution in the first place. No need to set up an 8000-line-horizontal camera when the people watching your video only see 800. Very few things you can still hook up to a TV will produce more than low SDTV resolution, because the world moved to digital. Even LaserDisc, the greatest analogue format to hit the home market, has a horizontal resolution of about 440 lines.
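As a sanity check on figures like that 440, there’s a standard rule of thumb relating analogue video bandwidth to horizontal resolution in TV lines per picture height. The formula itself is textbook; the bandwidth figures I plug in below are my own assumptions, not numbers from this thread:

```python
# TVL-per-picture-height estimate from analogue luma bandwidth:
# two "lines" (one light/dark cycle) per Hz over the active line period,
# scaled to picture height by dividing out the aspect ratio.
def tvl_per_picture_height(bandwidth_mhz, active_line_us=52.0, aspect=4 / 3):
    return 2 * bandwidth_mhz * active_line_us / aspect

# Assumed luma bandwidths (mine, for illustration):
print(round(tvl_per_picture_height(3.0)))   # VHS-class, ~234 TVL
print(round(tvl_per_picture_height(5.6)))   # LaserDisc-class, ~437 TVL
```

A ~5.6 MHz luma bandwidth landing in the 430–440 TVL range is consistent with the LaserDisc figure quoted above.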