Re: [PATCH 0/6] drm: tackle byteorder issues, take two

On Mon, Apr 24, 2017 at 04:54:25PM +0900, Michel Dänzer wrote:
> On 24/04/17 04:36 PM, Gerd Hoffmann wrote:
> > 
> >>>   drm: fourcc byteorder: add DRM_FORMAT_CPU_*
> >>>   drm: fourcc byteorder: add bigendian support to
> >>>     drm_mode_legacy_fb_format
> >>
> >> As I explained in my last followup in the "[PATCH] drm: fourcc
> >> byteorder: brings header file comments in line with reality." thread,
> >> the mapping between GPU and CPU formats has to be provided by the
> >> driver, it cannot be done statically.
> > 
> > Well, the drm fourcc codes represent the cpu view (i.e. what userspace
> > will fill the ADDFB2-created framebuffers with).
> 
> Ville is adamant that they represent the GPU view. This needs to be
> resolved one way or the other.

Since the byte swapping can happen on either the CPU or the display
access path, I guess we can't just treat the GPU and display as a
single entity.

We may need to consider several agents:
1. display
2. GPU
3. CPU
4. other DMA

Not sure what we can say about 4. I presume it's going to behave like
either the GPU or the CPU, depending on whether or not it goes through
the CPU byte swapping logic. I'm just going to ignore it.

Let's say we have the following bytes in memory
(in order of increasing address): A,B,C,D
We'll assume the GPU and display are LE natively. Each agent will then
see the resulting 32bpp 8888 pixel as follows (msb on the left, lsb on
the right; there's a small sketch after the list to make this concrete):

LE CPU w/ no byte swapping:
 display: DCBA
 GPU: DCBA
 CPU: DCBA
 = everyone agrees

BE CPU w/ no byte swapping:
 display: DCBA
 GPU: DCBA
 CPU: ABCD
 = GPU and display agree

BE CPU w/ display byte swapping:
 display: ABCD
 GPU: DCBA
 CPU: ABCD
 = CPU and display agree

BE CPU w/ CPU access byte swapping:
 display: DCBA
 GPU: DCBA
 CPU: DCBA
 = everyone agrees

BE CPU w/ both display and CPU byte swapping:
 display: ABCD
 GPU: DCBA
 CPU: DCBA
 = CPU and GPU agree (doesn't seem all that useful)
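
To make that concrete, here's a quick userspace sketch (mine, just for
illustration; the hypothetical swap32() stands in for whatever swapping
stage sits on the access path):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Model of a byte-swapping stage (e.g. a swapper sitting in front of
 * the CPU or the display engine). */
static uint32_t swap32(uint32_t v)
{
        return ((v & 0x000000ff) << 24) |
               ((v & 0x0000ff00) <<  8) |
               ((v & 0x00ff0000) >>  8) |
               ((v & 0xff000000) >> 24);
}

int main(void)
{
        const uint8_t mem[4] = { 'A', 'B', 'C', 'D' };
        uint32_t pixel;

        /* Native read: on a LE host this yields 0x44434241, i.e.
         * "DCBA" msb->lsb; on a BE host 0x41424344, i.e. "ABCD". */
        memcpy(&pixel, mem, sizeof(pixel));
        printf("native view:  0x%08x\n", pixel);

        /* Through a byte swapper the two views trade places. */
        printf("swapped view: 0x%08x\n", swap32(pixel));

        return 0;
}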

The different byte swapping tricks must have seemed like a great idea to
someone, but in the end they're just making our lives more miserable.

> > The gpu view can certainly differ from that.  Implementing this is up
> > to the driver IMO.
> > 
> > When running on dumb framebuffers userspace doesn't need to know what
> > the gpu view is.

True. So for that we'd just need to consider whether the CPU and display
agree or disagree on the byte order. And I guess we'd have to pick from
the following choices for a BE CPU (rough sketch after the list):

CPU and display agree:
 * FB is big endian, or FB is host endian (or whatever we would call it)
CPU and display disagree:
 * FB is little endian, or FB is foreign endian (or whatever)
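
Just to illustrate what I mean (this is only a sketch of mine, using
the existing DRM_FORMAT_BIG_ENDIAN bit from drm_fourcc.h; whether
drivers would actually honour it like this is exactly what's being
debated):

#include <stdint.h>
#include <drm_fourcc.h> /* from libdrm */

static uint32_t pick_dumb_fb_format(int cpu_display_agree)
{
        if (cpu_display_agree) {
                /* CPU and display see the same byte order, so on a BE
                 * CPU advertise the FB as big endian (== host endian). */
                return DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN;
        } else {
                /* CPU and display disagree, so from the CPU's point of
                 * view the FB is little endian (== foreign endian). */
                return DRM_FORMAT_XRGB8888;
        }
}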

> > 
> > When running in opengl mode there will be a hardware-specific mesa
> > driver in userspace, which will either know what the gpu view is (for
> > example because there is only one way to implement this in hardware) or
> > it can use hardware-specific ioctls to ask the kernel driver what the
> > gpu view is.
> 
> Not sure this can be hidden in the OpenGL driver. How would e.g. a
> Wayland compositor or the Xorg modesetting driver know which OpenGL
> format corresponds to a given DRM_FORMAT?

How are GL formats defined? /me needs to go read the spec again.
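
IIRC the component-sized types (GL_UNSIGNED_BYTE) are defined in terms
of memory byte order, while the packed types (GL_UNSIGNED_INT_8_8_8_8
and friends) are native 32-bit values and thus host endian dependent.
Rough illustration from memory, so don't quote me on it:

#include <GL/gl.h>

void upload(const void *pixels)
{
        /* Component order fixed in memory: R,G,B,A at increasing
         * addresses on both LE and BE hosts. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        /* Packed 32-bit value: R sits in the msb, so where the R byte
         * lands in memory depends on host endianness. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 64, 64, 0,
                     GL_RGBA, GL_UNSIGNED_INT_8_8_8_8, pixels);
}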

-- 
Ville Syrjälä
Intel OTC