[PATCH] drm: fourcc byteorder: bring header file comments in line with reality.

On 21.04.2017 at 13:49, Ville Syrjälä wrote:
> On Fri, Apr 21, 2017 at 02:40:18PM +0300, Pekka Paalanen wrote:
>> On Fri, 21 Apr 2017 14:08:04 +0300
>> Ville Syrjälä <ville.syrjala at linux.intel.com> wrote:
>>
>>> On Fri, Apr 21, 2017 at 11:50:18AM +0200, Gerd Hoffmann wrote:
>>>> On Fr, 2017-04-21 at 12:25 +0300, Ville Syrjälä wrote:
>>>>> On Fri, Apr 21, 2017 at 09:58:24AM +0200, Gerd Hoffmann wrote:
>>>>>> While working on graphics support for virtual machines on ppc64 (which
>>>>>> exists in both little and big endian variants) I've noticed that the
>>>>>> comments for various drm fourcc formats in the header file don't match
>>>>>> reality.
>>>>>>
>>>>>> The comments say the RGB formats are little endian, but in practice they
>>>>>> are native endian.  Look at the drm_mode_legacy_fb_format() helper.  It
>>>>>> maps -- for example -- bpp/depth 32/24 to DRM_FORMAT_XRGB8888, no matter
>>>>>> whether the machine is little endian or big endian.  The users of this
>>>>>> function (fbdev emulation, DRM_IOCTL_MODE_ADDFB) expect the framebuffer
>>>>>> to be native endian, not little endian.  Most userspace also operates on
>>>>>> native endian only.
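(For reference, a simplified sketch of the mapping in question -- abridged,
the in-tree drm_mode_legacy_fb_format() in drivers/gpu/drm/drm_fourcc.c
covers more bpp/depth combinations -- but the point stands: there is no
endianness check anywhere in it.)

	#include <stdint.h>
	#include <drm/drm_fourcc.h>	/* DRM_FORMAT_* */

	/* Abridged sketch, not the in-tree function; note the complete
	 * absence of any #ifdef __BIG_ENDIAN -- the same fourcc comes
	 * back on little and big endian CPUs alike. */
	static uint32_t legacy_fb_format_sketch(uint32_t bpp, uint32_t depth)
	{
		switch (bpp) {
		case 16:
			return depth == 15 ? DRM_FORMAT_XRGB1555
					   : DRM_FORMAT_RGB565;
		case 24:
			return DRM_FORMAT_RGB888;
		case 32:
			return depth == 24 ? DRM_FORMAT_XRGB8888
					   : DRM_FORMAT_ARGB8888;
		default:
			return DRM_FORMAT_XRGB8888;
		}
	}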
>>>>> I'm not a fan of "native". Native to what? "CPU" or "host" is what I'd
>>>>> call it.
>>>> native == whatever the cpu is using.
>>>>
>>>> I personally find "native" more intuitive, but at the end of the day I
>>>> don't mind much.  If people prefer "host" over "native" I'll change it.
>>> "native" to me feels more like "native to the GPU" since these things
>>> really are tied to the GPU not the CPU. That's also why I went with the
>>> explicit endianness originally so that the driver could properly declare
>>> what the GPU supports.
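(For illustration: drm_fourcc.h already has a DRM_FORMAT_BIG_ENDIAN flag, so
with explicit endianness a driver for big endian scanout hardware could, in
principle, advertise exactly what the GPU consumes -- a hypothetical format
list, not taken from any real driver:)

	#include <stdint.h>
	#include <drm/drm_fourcc.h>	/* DRM_FORMAT_*, DRM_FORMAT_BIG_ENDIAN */

	/* Hypothetical plane format list: the driver declares the byte
	 * order the GPU actually scans out, independent of the CPU. */
	static const uint32_t plane_formats[] = {
		DRM_FORMAT_XRGB8888,				/* bytes: B G R X */
		DRM_FORMAT_XRGB8888 | DRM_FORMAT_BIG_ENDIAN,	/* bytes: X R G B */
	};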
>> Hi,
>>
>> yeah, one should really be explicit about which component's endianness
>> "native" refers to. I just can't imagine "GPU native" ever being an
>> option, because then userspace would need a way to discover what the
>> GPU endianness is,
> It has to know that. How else would it know how to write the bytes into
> memory in the right order for the GPU to consume, or read the stuff the
> GPU produced?
>
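(To make that concrete, a small sketch with hypothetical helper names,
writing one DRM_FORMAT_XRGB8888 pixel under the two competing readings:)

	#include <stdint.h>

	/* Under the "little endian" reading the bytes in memory are
	 * fixed on every CPU: */
	static void write_pixel_le(uint8_t *p, uint8_t r, uint8_t g, uint8_t b)
	{
		p[0] = b;	/* byte 0: blue  */
		p[1] = g;	/* byte 1: green */
		p[2] = r;	/* byte 2: red   */
		p[3] = 0;	/* byte 3: padding (X) */
	}

	/* Under the "native endian" reading userspace does a 32-bit
	 * store, so the byte order in memory flips on big endian CPUs --
	 * and it must know which order the GPU will consume them in: */
	static void write_pixel_native(uint32_t *p, uint8_t r, uint8_t g, uint8_t b)
	{
		*p = ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
	}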
>> and I believe that would only deepen the swamp, not
>> drain it, because suddenly you need two enums to describe one format.
>>
>> Ville, wording aside, what do you think about changing the endianness
>> definition? Is it going in the right direction?
> I don't think so, but I guess I'm in the minority.
I don't think you are in the minority. At least I would clearly say 
those formats should be in a fixed byte order and not care about the 
CPU in the system.

What I need from the driver side is a consistent description of how the 
bytes in memory map to my hardware. Which CPU the system happens to use 
is completely irrelevant to that.
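(Which is exactly what the existing definitions express if the "little
endian" comments are taken at face value; from include/uapi/drm/drm_fourcc.h:)

	/* A fourcc plus the "little endian" comment pins down an exact byte
	 * order in memory: for DRM_FORMAT_XRGB8888, byte 0 = B, byte 1 = G,
	 * byte 2 = R, byte 3 = X, on any CPU. */
	#define fourcc_code(a, b, c, d) ((__u32)(a) | ((__u32)(b) << 8) | \
					 ((__u32)(c) << 16) | ((__u32)(d) << 24))

	/* "32 bpp [31:0] x:R:G:B 8:8:8:8 little endian" */
	#define DRM_FORMAT_XRGB8888	fourcc_code('X', 'R', '2', '4')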

Regards,
Christian.

