[PATCH 0/6] drm: tackle byteorder issues, take two

On 24/04/17 11:26 PM, Ville Syrjälä wrote:
> On Mon, Apr 24, 2017 at 04:54:25PM +0900, Michel Dänzer wrote:
>> On 24/04/17 04:36 PM, Gerd Hoffmann wrote:
>>>
>>>>>   drm: fourcc byteorder: add DRM_FORMAT_CPU_*
>>>>>   drm: fourcc byteorder: add bigendian support to
>>>>>     drm_mode_legacy_fb_format
>>>>
>>>> As I explained in my last followup in the "[PATCH] drm: fourcc
>>>> byteorder: brings header file comments in line with reality." thread,
>>>> the mapping between GPU and CPU formats has to be provided by the
>>>> driver; it cannot be done statically.
>>>
>>> Well, the drm fourcc codes represent the cpu view (i.e. what userspace
>>> will fill the ADDFB2-created framebuffers with).
>>
>> Ville is adamant that they represent the GPU view. This needs to be
>> resolved one way or the other.
> 
> Since the byte swapping can happen either for CPU or display access,
> I guess we can't just consider the GPU and display as a single entity.
> 
> We may need to consider several agents:
> 1. display
> 2. GPU
> 3. CPU
> 4. other DMA
> 
> Not sure what we can say about 4. I presume it's going to be like the
> GPU or the CPU in the sense that it might go through the CPU byte
> swapping logic or not. I'm just going to ignore it.
> 
> Let's say we have the following bytes in memory
> (in order of increasing address): A,B,C,D
> We'll assume GPU and display are LE natively. Each component will see
> the resulting 32bpp 8888 pixel as follows (msb left->lsb right):
> 
> LE CPU w/ no byte swapping:
>  display: DCBA
>  GPU: DCBA
>  CPU: DCBA
>  = everyone agrees
> 
> BE CPU w/ no byte swapping:
>  display: DCBA
>  GPU: DCBA
>  CPU: ABCD
>  = GPU and display agree
> 
> BE CPU w/ display byte swapping:
>  display: ABCD
>  GPU: DCBA
>  CPU: ABCD
>  = CPU and display agree
> 
> BE CPU w/ CPU access byte swapping:
>  display: DCBA
>  GPU: DCBA
>  CPU: DCBA
>  = everyone agrees
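
A minimal standalone sketch of the CPU rows in the list above (plain C,
illustration only, not from the original mail): reading the same four
bytes A,B,C,D as a packed 32-bit value gives "DCBA" (msb->lsb) on a
little-endian CPU and "ABCD" on a big-endian one.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	const uint8_t mem[4] = { 'A', 'B', 'C', 'D' };	/* increasing addresses */
	uint32_t packed;

	memcpy(&packed, mem, sizeof(packed));

	/*
	 * Little-endian CPU: prints 0x44434241, i.e. "DCBA" msb->lsb.
	 * Big-endian CPU:    prints 0x41424344, i.e. "ABCD" msb->lsb.
	 */
	printf("packed = 0x%08x\n", (unsigned int)packed);
	return 0;
}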

Beware that for this list, you're using a format definition which is
based on a packed 32-bit value. This does *not* match the current
DRM_FORMAT_*8888 definitions. E.g. in the last case, display and GPU use
the same DRM_FORMAT, but the CPU uses the "inverse" one.
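
To make that distinction concrete, a rough sketch (illustrative helpers,
not a kernel API): drm_fourcc.h describes DRM_FORMAT_XRGB8888 as
"[31:0] x:R:G:B 8:8:8:8 little endian", i.e. the byte order in memory is
fixed (blue at the lowest address) no matter which CPU writes it.
Composing the pixel as a packed 32-bit value only yields that layout on
a little-endian CPU; on a big-endian CPU it yields the "inverse" layout.

#include <stdint.h>

/* Byte-order based: matches the DRM_FORMAT_XRGB8888 memory layout on any CPU. */
static void put_xrgb8888_by_bytes(uint8_t *p, uint8_t r, uint8_t g, uint8_t b)
{
	p[0] = b;	/* lowest address: blue */
	p[1] = g;
	p[2] = r;
	p[3] = 0;	/* x/padding */
}

/* Packed-value based: the resulting memory layout depends on CPU endianness. */
static void put_xrgb8888_packed(uint32_t *p, uint8_t r, uint8_t g, uint8_t b)
{
	*p = (uint32_t)r << 16 | (uint32_t)g << 8 | b;
}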


-- 
Earthling Michel Dänzer               |               http://www.amd.com
Libre software enthusiast             |             Mesa and X developer

