Re: [PATCH RFC 1/8] dmaengine: Actions: get rid of bit fields from dma descriptor

On 11/05/2020 13:04, Manivannan Sadhasivam wrote:

Hi,

> On Mon, May 11, 2020 at 12:44:26PM +0100, André Przywara wrote:
>> On 11/05/2020 12:20, Manivannan Sadhasivam wrote:
>>
>> Hi,
>>
>>> On Mon, May 11, 2020 at 04:15:57PM +0530, Amit Tomer wrote:
>>>> Hi
>>>>
>>>> Thanks for the reply.
>>>>
>>>>> I'm in favor of getting rid of bitfields due to their not-so-well-defined
>>>>> behaviour (and forgive me for using them in the first place), but I don't
>>>>> quite like the current approach.
>>>>
>>>> Because it's less readable, the way we are writing to those different fields?
>>>> But this can be made clearer by adding some comments around it.
>>>>
>>>
>>> I don't like the way the hw linked lists are accessed (using an array with
>>> enums).
>>
>> But honestly this is the most sane way of doing this, see below.
>>
>>>>> Rather I'd like to have custom bitmasks (S900/S700/S500?) for writing to those
>>>>> fields.
>>>>>
>>>> I think S900 and S500 are the same, as pointed out by Cristian, and I didn't
>>>> get what you mean by creating custom bitmasks for it?
>>>>
>>>> Did you mean function like:
>>>>
>>>> lli->hw[OWL_DMADESC_FLEN] = llc_hw_FLEN(len, FCNT_VALUE, FCNT_SHIFT);
>>>>
>>>
>>> I meant to keep using the old struct for accessing the linked list and to
>>> replace the bitfields with masks, as below:
>>>
>>> struct owl_dma_lli_hw {
>>> 	...
>>>         u32     flen;
>>>         u32     fcnt;
>>> 	...
>>> };
>>
>> And I think this is the wrong way of modelling hardware-defined
>> register fields. C structs have no guarantee of not introducing padding
>> between fields; the only guarantee you get is that the first member
>> has no padding *before* it:
>> C standard, section 6.7.2.1, end of paragraph 15:
>> "There may be unnamed padding within a structure object, but not at its
>> beginning."
>>
>> Arrays in C, on the contrary, very much have this guarantee: the members
>> are next to each other, no padding.
>>
>> I see that structs are sometimes used in this fashion, but it's much
>> less common in the kernel than in other projects (U-Boot comes to mind).
>> It typically works, because common compiler *implementations* provide
>> this guarantee, but we should not rely on this.
>>
>> So:
>> Using enums for the keys provides a natural way of getting increasing
>> indices, without gaps. Also, you get a nice automatic size value by
>> making it the last member of the enum.
>> Arrays provide the guarantee of consecutive allocation.
>>
> 
> I agree with your concerns about using a struct for defining registers. But we
> can safely live with the existing implementation, since all fields are u32, and if

But why, actually? I can understand that this is done in existing code,
because it was done that way in the past and apparently never challenged.
And since it seems to work, there is probably not much reason to change
it just for the sake of it.
But if we need to rework this anyway, we should do it properly. This is
especially true in the Linux kernel, which is highly critical and
privileged code and also aims to be very portable. We should take no
chances here.
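
If a struct is kept anyway, the least we should do is assert the layout
at compile time. A minimal sketch (purely illustrative, with the field
list shortened to the two fields from the snippet above):

#include <linux/build_bug.h>	/* static_assert() */
#include <linux/stddef.h>	/* offsetof() */
#include <linux/types.h>

/* Illustrative only: the real descriptor has more fields. */
struct owl_dma_lli_hw {
	u32	flen;
	u32	fcnt;
};

/* Catch any padding the compiler might decide to insert. */
static_assert(offsetof(struct owl_dma_lli_hw, fcnt) == sizeof(u32));
static_assert(sizeof(struct owl_dma_lli_hw) == 2 * sizeof(u32));

That doesn't make the struct version correct by the letter of the
standard, but at least it would fail loudly instead of silently.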

Honestly, I don't understand the advantage of using a struct here,
especially if you need to play tricks (__packed) to make it work.
So why is:
	hw->flen
so much better than:
	hw[DMA_FLEN]
that it justifies introducing dodgy code?
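
For comparison, the array-plus-enum variant that Amit sketched would
look roughly like this (just a sketch, the names and the mask width are
made up for illustration):

#include <linux/bits.h>
#include <linux/types.h>

/* Illustrative field indices; the last member gives the array size for free. */
enum owl_dmadesc_offsets {
	OWL_DMADESC_FLEN = 0,
	OWL_DMADESC_FCNT,
	OWL_DMADESC_SIZE
};

#define OWL_DMA_FLEN_MASK	GENMASK(19, 0)	/* width assumed, < 1MB */

struct owl_dma_lli {
	u32	hw[OWL_DMADESC_SIZE];	/* consecutive u32s, no padding possible */
	/* ... driver-private members ... */
};

static void owl_dma_lli_set_flen(struct owl_dma_lli *lli, u32 len)
{
	/* Hardly less readable than hw->flen, and no layout tricks needed. */
	lli->hw[OWL_DMADESC_FLEN] = len & OWL_DMA_FLEN_MASK;
}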

I think in general we should be much more careful when using C language
constructs to access hardware or hardware-defined data structures, if
only to not give people the wrong idea about this.
With the advance of more aggressively optimising compilers (and, somewhat
related, more out-of-order CPUs), the chance of breakage becomes much
higher here.
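
On the per-SoC masks discussed further down: if they really turn out to
be necessary, they could simply come from per-compatible match data, so
the fast path stays free of conditionals. A rough, untested sketch (the
names and mask widths are invented for illustration):

#include <linux/bits.h>
#include <linux/mod_devicetable.h>
#include <linux/of_device.h>

struct owl_dma_soc_data {
	u32	flen_mask;
};

static const struct owl_dma_soc_data s900_soc_data = {
	.flen_mask = GENMASK(19, 0),	/* frame length below 1MB */
};

static const struct owl_dma_soc_data s700_soc_data = {
	.flen_mask = GENMASK(23, 0),	/* larger limit, value illustrative */
};

static const struct of_device_id owl_dma_match[] = {
	{ .compatible = "actions,s900-dma", .data = &s900_soc_data },
	{ .compatible = "actions,s700-dma", .data = &s700_soc_data },
	{ /* sentinel */ }
};

The driver would then pick the right masks once in probe, via
of_device_get_match_data(), instead of checking the compatible on every
descriptor write.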

Cheers,
Andre.

> needed we can also add the '__packed' attribute to it to avoid padding in any case.
> 
> The reason I prefer to stick with this is that it is a hardware linked list, and
> defining it as an array and accessing the fields using enums looks awful to
> me. Other than that there is no real justification to shy away from it.
> 
> When you are modelling a plain register bank (which we are also doing in this
> driver), I'd prefer to use the defines directly.
> 
>> We can surely have a look at the masking problem, but this would need
>> runtime-determined masks, which tend to become "wordy". There can be
>> simplifications: for instance, I couldn't find where the frame length is
>> really limited for the S900 (it must be less than 1MB). Since the S700
>> supports *more* than that, there is no need to limit this differently.
> 
> I was just giving an example of how to handle the bitmasks for different
> SoCs if needed. So yeah if it can be avoided, feel free to drop it.
> 
> Thanks,
> Mani
> 
>>
>> Cheers,
>> Andre.
>>
>>
>>>
>>> hw->flen = len & OWL_S900_DMA_FLEN_MASK;
>>> hw->fcnt = 1 & OWL_S900_DMA_FCNT_MASK;
>>>
>>> Then you can use different masks for S700/S900 based on the compatible.
>>>
>>> Thanks,
>>> Mani
>>>
>>>> Thanks
>>>> -Amit
>>



