On Thu, Nov 17, 2011 at 03:00:17PM +0100, Michel Dänzer wrote:
> On Don, 2011-11-17 at 15:06 +0200, Ville Syrjälä wrote:
> > On Thu, Nov 17, 2011 at 08:52:05AM +0100, Michel Dänzer wrote:
> > > On Mit, 2011-11-16 at 20:42 +0200, ville.syrjala@xxxxxxxxxxxxxxx wrote:
> > > >
> > > > Name the formats as DRM_FORMAT_X instead of DRM_FOURCC_X. Use consistent
> > > > names, especially for the RGB formats. Component order and byte order are
> > > > now strictly specified for each format.
> > > >
> > > > The RGB format naming follows a convention where the component names
> > > > and sizes are listed from left to right, matching the order within a
> > > > single pixel from most significant bit to least significant bit. Lower
> > > > case letters are used when listing the components to improve
> > > > readability. I believe this convention matches the one used by pixman.
> > >
> > > The RGB formats are all defined in the CPU native byte order. But e.g.
> > > pre-R600 Radeons can only scan out little endian formats. For the
> > > framebuffer device, we use GPU byte swapping facilities to make the
> > > pixels appear to the CPU in its native byte order, so these format
> > > definitions make sense for that. But I'm not sure they make sense for
> > > the KMS APIs, e.g. the userspace drivers don't use these facilities but
> > > handle byte swapping themselves.
> >
> > Hmm. So who decides whether GPU byte swapping is needed when you e.g.
> > mmap() some buffer?
>
> The userspace drivers.

Hmm. OK. So I guess we should define the formats as little endian then.

Supposing we also need big endian formats in the future, we could just
define an extra flag like so:

#define DRM_FORMAT_BIG_ENDIAN (1<<31)

which would be ORed with the fourcc to get a big endian version of the
format. Otherwise we have to duplicate the formats for big endian as
well.
-- 
Ville Syrjälä
Intel OTC
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel