Re: [PATCH 2/2] drm: Redefine pixel formats

On Thu, Nov 17, 2011 at 08:52:05AM +0100, Michel Dänzer wrote:
> On Wed, 2011-11-16 at 20:42 +0200, ville.syrjala@xxxxxxxxxxxxxxx wrote:
> > 
> > Name the formats as DRM_FORMAT_X instead of DRM_FOURCC_X. Use consistent
> > names, especially for the RGB formats. Component order and byte order are
> > now strictly specified for each format.
> > 
> > The RGB format naming follows a convention where the components names
> > and sizes are listed from left to right, matching the order within a
> > single pixel from most significant bit to least significant bit. Lower
> > case letters are used when listing the components to improve
> > readability. I believe this convention matches the one used by pixman.
> 
> The RGB formats are all defined in the CPU native byte order. But e.g.
> pre-R600 Radeons can only scan out little endian formats. For the
> framebuffer device, we use GPU byte swapping facilities to make the
> pixels appear to the CPU in its native byte order, so these format
> definitions make sense for that. But I'm not sure they make sense for
> the KMS APIs, e.g. the userspace drivers don't use these facilities but
> handle byte swapping themselves.
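
(For reference, a minimal standalone sketch of the encoding in question:
the DRM_FORMAT_* tokens are fourcc codes packed into a 32-bit value, with
the component letters in the name running from most to least significant
bit. This mirrors the fourcc_code() macro and the XRGB8888 definition in
drm_fourcc.h; uint32_t stands in for the kernel's __u32, and the main()
is only there so the snippet compiles and runs on its own.)

#include <stdint.h>
#include <stdio.h>

/* Pack a four-character code into a 32-bit value, first character in
 * the lowest byte -- same construction as fourcc_code() in drm_fourcc.h. */
#define fourcc_code(a, b, c, d) ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
                                 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

/* [31:0] x:R:G:B 8:8:8:8 -- component names listed MSB to LSB,
 * per the naming convention in the commit message. */
#define DRM_FORMAT_XRGB8888 fourcc_code('X', 'R', '2', '4')

int main(void)
{
	/* Prints 0x34325258: '4' '2' 'R' 'X' reading down from the top byte. */
	printf("0x%08x\n", (unsigned)DRM_FORMAT_XRGB8888);
	return 0;
}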

Hmm. So who decides whether GPU byte swapping is needed when you e.g.
mmap() some buffer?
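
To make the question concrete, here's roughly what a CPU write of one
packed XRGB8888 pixel through an mmap()ed buffer looks like (standalone
C, illustrative values only -- not taken from any driver):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* One XRGB8888 pixel, X=0x00 R=0xaa G=0xbb B=0xcc, written as the
	 * CPU-native 32-bit value the format names describe. */
	uint32_t pixel = 0x00aabbccu;
	const uint8_t *bytes = (const uint8_t *)&pixel;

	/* A little-endian CPU lands "cc bb aa 00" (B G R X) in memory,
	 * which is what a little-endian-only scanout engine expects;
	 * a big-endian CPU lands "00 aa bb cc", which only displays
	 * correctly if some byte swapping facility is engaged. */
	printf("%02x %02x %02x %02x\n",
	       bytes[0], bytes[1], bytes[2], bytes[3]);
	return 0;
}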

-- 
Ville Syrjälä
Intel OTC
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel


