On Thu, 30 Oct 2003, Romain MULLER wrote:

>Date: Thu, 30 Oct 2003 14:37:56 +0100
>From: Romain MULLER <tidus3012@xxxxxxxxxx>
>To: XFree86@xxxxxxxxxxx
>Reply-To: xfree86@xxxxxxxxxxx
>Content-Type: text/plain; charset="us-ascii"
>Subject: XFree (4.3.0) & 32b graphics
...
>
>I'm quite a noob on Linux (and so with XFree86), and I'm asking
>myself if it is possible to set color depth up to 32bpp or not
>... In fact, all my JPEG pictures that look pretty good on
>Windows (32b color depth) are ugly under Linux 24b color depth
>... And as far as I know, the 24b color depth of my XFree may be
>in cause ... Any idea ? Maybe is it possible to force 32b color
>depth ?

Sorry to break it to you, but you're wrong. Very wrong. Here is the longer story, from a text file I wrote up some time ago in response to confusion about colour depth:

There are two separate concepts when it comes to a "pixel". The first concept, "colour depth", is how many bits in the pixel contain information that determines the actual colour of the pixel. The second concept is how many bits of video memory a single pixel consumes. This is often referred to as framebuffer bits per pixel, or "fbbpp".

Microsoft Windows, in an effort to hide the underlying complexity of how the graphics hardware *really* works, lets the user choose from 8bit, 16bit, and 32bit. What that chooses is NOT the colour depth; it is the framebuffer pixel size. The size of a pixel in video memory MUST be on a byte boundary, such as 8, 16, 24, or 32 bits, or rather 1/2/3/4 bytes per pixel in video memory. The colour depth of a pixel is how many of those bits contain colour information. Depending on the specific layout chosen, not all of those bits are actually used.

When you are using 8 bit depth, the size of a pixel in video memory is also 8 bits. One 8bit register write to video memory writes one pixel. Likewise, a 16bit register write writes out 2 pixels. There are no wasted bits in between pixels.
When you are using 16 bit depth, one pixel is 16 bits in size, and all of those bits are also used to indicate colour information. No bits are wasted.

However, 24 bit colour depth is where things get a bit more complicated. 24bit colour depth means that one pixel uses 24bits to indicate its colour: 8 bits for red, 8 for green, 8 for blue. However, 24bits isn't CPU or hardware friendly, because processors don't have 24bit registers, but rather 8, 16, 32, and 64bit registers. You cannot write a single 24bit pixel into video memory using a single instruction and touch only 24bits of video memory. So if you are using 24bit colour depth with 24bit sized pixels in video memory, the software (or hardware for that matter) must split the pixel into 2 writes and write out 16 bits and then 8 bits separately, or vice versa. That is a serious performance hit, and while it was quite often how things were done 12-15 or more years ago in order to save money on video memory, it isn't very useful in modern systems of the last 10 years or so.

Modern graphics performance more or less demands that video memory be written 8, 16, or 32 bits at a time, as those are the native register sizes of the CPU. Since, as I said above, there is no 24 bit register on a modern CPU, the only way to write out 24bits in one operation is a single 32bit write to video memory. However, if a pixel is 24bits in size in video memory and you write out 32bits of data in order to draw it, then you just wrote out the pixel you wanted, and also overwrote 8 bits of the next pixel. You have no way to write out a single pixel without damaging the next one, or without splitting the pixel into an 8bit and a 16bit write, other than resorting to other techniques. Either way, it is very slow; plus, most video accelerators don't implement acceleration for 24bit framebuffer bpp.

The solution is that the video hardware uses pixels that are 32bits in size for colour depth 24.
That way, a pixel of 24bit colour depth can be written into video memory using a single 32bit write, and 8 bits are wasted[1].

For the last 10 or more years, when the layperson refers to either "24bit color" or "32bit color", they mean the *exact* same thing, which in proper terminology is "24bit colour depth using 32bits of video memory per pixel". Microsoft Windows, and also XFree86 versions up to 3.3.6, incorrectly called this "32bit" color. XFree86 4.x properly calls this "24bit colour depth", and all of the XFree86 4.x video drivers use 32bit sized pixels when using 24bit colour depth, in exactly the same way that Microsoft Windows does; the only difference is that Microsoft calls this "32bit color". There is absolutely no difference other than the terminology being used. There is no such thing as "32bit colour depth" implemented in video hardware[2], and people using that term, or thinking that they're using 32bit colour, are confused and don't understand how video hardware works, although if they just read what I wrote above, they are on the path to recovery. ;o)

Summary: People confuse computer terminology when referring to colour depth, what it specifically means, how it is implemented in hardware, and how software programs the hardware. What is commonly referred to as 32bit color is incorrect usage of terminology and very misleading. There is no such thing as 32bit color.[3] There are pixels that are 32bits in size, of which only 24 bits contain colour depth information, with 8 wasted bits[1]. Both Windows and XFree86 use this by default, regardless of the fact that they name it using different terminology. So if you're using 24bit colour depth in XFree86, and also in Windows (referred to in Windows as 32bit color), then you are using the *exact* same thing, period. There is absolutely *no* difference in the amount of color.
Footnotes:

[1] In 32bit per pixel framebuffer configurations, the 8 bits of pixel space that are not part of the colour depth information are not always wasted. These extra 8 bits are sometimes used to hold alpha-channel information used for translucency. For the purposes of the discussion above, however, it is easier to just treat the non-colour-depth bits as wasted. I comment on this only because someone will often try to point out that these 8 bits are used for alpha on occasion, which is true, but irrelevant to this oversimplified discussion of colour depth and framebuffer pixel sizes.

[2] Some hardware _does_ implement 30 bit colour depth, generally as 10 bits per RGB component and an optional 2 bit alpha channel (mostly useless). This is also highly specialized and neither supported nor useful on a modern desktop, not to mention how horribly slow it would probably be with each component being split across byte boundaries.

[3] "32bit colour depth" could at least in theory exist; however, to my knowledge, no video hardware implements a 32bit depth mode in which all 32bits are used for colour depth information. If such hardware actually exists, it is quite irrelevant to modern desktop systems, and would instead be custom hardware/software used in the medical, scientific, and perhaps movie-making industries.

Hopefully this text file clarifies any confusion you might have had concerning 24 vs. 32 bits with respect to colour depth between XFree86 and Windows.

Take care,
TTYL

--
Mike A. Harris

_______________________________________________
XFree86 mailing list
XFree86@xxxxxxxxxxx
http://XFree86.Org/mailman/listinfo/xfree86