Callum Lerwick wrote:
For a while now, in XFree86/Xorg, setting depth 24 refers to the actual color depth, not padding. The driver is expected to set the actual framebuffer depth to 24 or 32 bits, whichever is appropriate for the hardware. AFAIK on most hardware, padding 24-bit out to 32 bits performs faster.
Correct. And you mean all hardware, at least on PCI systems: the bus is 32 bits wide, so with 24bpp packed pixels most of your pixel accesses will be unaligned and will need two bus cycles. Similar caveats apply to internal memory accesses from the GPU to VRAM, where the address calculation and pixel unpacking are significantly more expensive.
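To make the alignment point concrete, here's a rough sketch of the offset math for both layouts (function names and the flat-framebuffer assumption are mine, not from any actual driver):

    /* Hypothetical pixel reads from a framebuffer 'stride' bytes wide.
     * At 32bpp every pixel starts on a 4-byte boundary; at 24bpp most
     * pixels straddle a 32-bit word. */
    #include <stdint.h>

    static uint32_t read_pixel_32bpp(const uint8_t *fb, int stride,
                                     int x, int y)
    {
        /* x * 4 is always 4-byte aligned: one bus cycle. */
        return *(const uint32_t *)(fb + y * stride + x * 4);
    }

    static uint32_t read_pixel_24bpp(const uint8_t *fb, int stride,
                                     int x, int y)
    {
        /* x * 3 is 4-byte aligned only when x is a multiple of 4, so
         * the pixel usually crosses a word boundary and has to be
         * unpacked byte by byte (or fetched in two bus cycles). */
        const uint8_t *p = fb + y * stride + x * 3;
        return p[0] | (p[1] << 8) | ((uint32_t)p[2] << 16);
    }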
We will occasionally prefer 24bpp packed pixels, but that's mostly just for vesa(4), which is unaccelerated anyway, and where the memory savings can let higher resolutions fit in video memory.
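Picking round numbers to illustrate: a 1920x1440 frame needs 1920 * 1440 * 4 = 11,059,200 bytes (about 10.5 MB) at 32bpp, but only 1920 * 1440 * 3 = 8,294,400 bytes (about 7.9 MB) packed at 24bpp, so it fits in an 8 MB framebuffer only in the packed layout.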
For the cards that support 30-bit color you're typically still using 32-bit words, with the extra two bits either unused or used for alpha. Most of the newer desktop-class cards support this in silicon, but X still has one or two issues that prevent us from supporting it (mostly just bugs rather than design flaws).
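For illustration, a 30-bit pixel in the common x2r10g10b10 arrangement packs like this (a sketch; the actual field order varies by hardware):

    #include <stdint.h>

    /* Pack 10-bit R/G/B components into one 32-bit word: two pad (or
     * alpha) bits on top, then 10 bits each of red, green, blue.
     * Still a single aligned 32-bit access per pixel. */
    static uint32_t pack_x2r10g10b10(uint16_t r, uint16_t g, uint16_t b)
    {
        return ((uint32_t)(r & 0x3ff) << 20) |
               ((uint32_t)(g & 0x3ff) << 10) |
                (uint32_t)(b & 0x3ff);
    }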
No idea how Xorg plans to handle this newfangled HDR thing...
HDR typically refers to using floating-point color buffers; X as currently written just doesn't support that at all. There are various GL extensions (in various states of patent coverage, rrgh) to enable floating-point color buffers for texturing, rendering output, or both.
Which we don't support yet, of course. I'm of the opinion that requiring apps to use GL or some pleasant frontend to it in order to get HDR is perfectly okay.
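For the curious, the client side of one such extension (GLX_ARB_fbconfig_float) looks roughly like this; treat it as a sketch, assuming your GL/glx.h pulls in the glxext.h ARB tokens, and remember the server bits aren't there yet:

    #include <string.h>
    #include <GL/glx.h>

    /* Ask GLX for a half-float RGBA fbconfig, per GLX_ARB_fbconfig_float.
     * Returns NULL if the extension or a matching config is missing. */
    static GLXFBConfig *choose_float_config(Display *dpy, int screen, int *n)
    {
        const char *exts = glXQueryExtensionsString(dpy, screen);
        if (!exts || !strstr(exts, "GLX_ARB_fbconfig_float"))
            return NULL;

        static const int attribs[] = {
            GLX_RENDER_TYPE, GLX_RGBA_FLOAT_BIT_ARB,
            GLX_RED_SIZE,   16,
            GLX_GREEN_SIZE, 16,
            GLX_BLUE_SIZE,  16,
            GLX_ALPHA_SIZE, 16,
            GLX_DOUBLEBUFFER, True,
            None
        };
        return glXChooseFBConfig(dpy, screen, attribs, n);
    }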
- ajax