Hello Linus,

You are welcome ;)

On 04/03/2019 11:00 AM, Linus Walleij wrote:
> Hi Jean-Nicolas,
>
> thanks for your patch!
>
> You will have to resend the patch to the DMAengine list and the maintainer
> (Vinod).

Sure, I will. Sorry for that mistake.

> Out of curiosity: what platform are you using this on?

That's for the STMicro STA1295/STA1385 SoCs, which make use of several ARM
AMBA peripherals as well as the Nomadik gpio/pinctrl, etc. Those machines are
unfortunately not supported upstream. I would like to allocate time for that,
but as you know, such an activity requires dedicating substantial time and
human resources, and so far we have not managed to find enough time for it.

> On Fri, Mar 1, 2019 at 5:18 PM Jean-Nicolas Graux
> <jean-nicolas.graux@xxxxxx> wrote:
>
>> The current way we find a waiting virtual channel for the next transfer,
>> at the time one physical channel becomes free, is not really fair.
>>
>> More in detail, when more than one channel is waiting at a time, simply
>> going through the arrays of memcpy and slave channels and stopping as
>> soon as a channel's state matches the waiting state can penalize
>> channels with high indexes.
>>
>> Whenever the DMA engine is substantially overloaded, so that several
>> channels are constantly waiting, the channels with the highest indexes
>> might not be served for a substantial time, which in the worst case
>> might hang tasks that wait for a DMA transfer to complete.
>>
>> This patch makes physical channel re-assignment fairer by storing the
>> time in jiffies when a channel is put in the waiting state. Whenever a
>> physical channel has to be re-assigned, this time is used to select the
>> channel that has been waiting the longest.
>>
>> Signed-off-by: Jean-Nicolas Graux <jean-nicolas.graux@xxxxxx>
>
> That's a neat trick, looks like a good idea.
> Please add some comment in the code in pl08x_phy_free() so it is clear
> what is going on for people reading the code, with that:
>
> Reviewed-by: Linus Walleij <linus.walleij@xxxxxxxxxx>

I will. Thanks for the review.

Regards,
Jean-Nicolas

> Yours,
> Linus Walleij