Re: [RFC PATCH v1 1/1] leds: support to use own workqueue for each LED

Hello Pavel and Arseniy,

Please find my thoughts below.

On Mon, Oct 31, 2022 at 10:01:28AM +0300, Arseniy Krasnov wrote:
> On 30.10.2022 23:15, Pavel Machek wrote:
> > Hi!
> > 
> >>>> This allows setting an own workqueue for each LED. This may be
> >>>> useful because the default 'system_wq' does not guarantee the
> >>>> execution order of work_structs; thus, for several brightness
> >>>> update requests (for multiple LEDs), the actual brightness
> >>>> switches could happen in random order.
> >>>
> >>> So.. what?
> >>>
> >>> Even if the execution order is switched, the human eye will not be
> >>> able to tell the difference.
> >> Hello,
> >>
> >> The problem arises on one of our boards where we have 14 triples of
> >> LEDs (each triple contains R, G and B). The test case is to play a
> >> complex animation on all LEDs: a smooth switch from one RGB state to
> >> another. Sometimes there are glitches in this process - a divergence
> >> from the expected RGB state. We fixed this by using an ordered
> >> workqueue.
> > 
> > Are there other solutions possible? Like batching and always applying
> > _all_ the updates you have queued from your worker code?
> 
> IIUC, this is possible if the brightness update requests are performed by
> writing to the "brightness" file in /sys/class/leds/. But if the pattern
> trigger is used (in my case), I can't synchronize these requests, as they
> are created internally in the kernel on a timer tick.

Moreover, system_wq is also used when you push brightness change requests
through the sysfs node, so those can be re-ordered as well. In other words,
from the queueing perspective the sysfs interface and the trigger interface
behave the same way.
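
For reference, a simplified sketch of that delegation path (not verbatim
led-core.c; the helper name is made up): whenever the driver's brightness
callback may sleep, the request is deferred with schedule_work(), which
puts it on system_wq for both the sysfs and the trigger paths.

#include <linux/leds.h>
#include <linux/workqueue.h>

/*
 * Simplified sketch: if the non-sleeping brightness_set op is not
 * available, the value is parked in the classdev and the update is
 * deferred to led_cdev->set_brightness_work via schedule_work(),
 * i.e. onto system_wq.
 */
static void led_delegate_brightness(struct led_classdev *led_cdev,
				    unsigned int value)
{
	/* Non-sleeping op: apply the value right away. */
	if (led_cdev->brightness_set) {
		led_cdev->brightness_set(led_cdev, value);
		return;
	}

	/* Sleeping op: remember the value and defer it to a kworker. */
	led_cdev->delayed_set_value = value;
	schedule_work(&led_cdev->set_brightness_work);	/* system_wq */
}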

We can also face another big problem here: imagine you have an I2C-based
LED controller driver. In such drivers you usually rely on a single
driver-owned mutex which protects the I2C transactions from each other.
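
To make that concrete, here is a hedged sketch of the usual pattern (the
foo_* names are hypothetical, not any existing driver): every brightness
update is serialized behind one driver-wide mutex around the I2C transfer,
so concurrent workers simply pile up behind it.

#include <linux/i2c.h>
#include <linux/leds.h>
#include <linux/mutex.h>

struct foo_led {
	struct led_classdev cdev;
	struct i2c_client *client;
	struct mutex lock;	/* serializes all I2C transactions */
	u8 brightness_reg;
};

static int foo_led_brightness_set_blocking(struct led_classdev *cdev,
					   enum led_brightness value)
{
	struct foo_led *led = container_of(cdev, struct foo_led, cdev);
	int ret;

	/* Every concurrent kworker sleeps here until the bus is free. */
	mutex_lock(&led->lock);
	ret = i2c_smbus_write_byte_data(led->client, led->brightness_reg,
					value);
	mutex_unlock(&led->lock);

	return ret;
}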

When you change the brightness very often (let's say a hundred thousand
times per minute), you schedule many work items onto system_wq. Because
system_wq is multi-CPU and unordered, this spawns many kworkers. Each
kworker blocks on the driver mutex and goes into the TASK_UNINTERRUPTIBLE
state, which inflates the load average considerably. On our device the
load average could peak at 30-35 because of such idle kworkers.

I'm not sure that initializing a custom workqueue from a specific HW driver
is a good solution... but it's much better than nothing.
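
Just to illustrate the idea, a minimal sketch, independent of the API
proposed in this patch (it assumes the hypothetical struct foo_led above
gained wq, update_work and pending_value members): an ordered workqueue
executes at most one item at a time, in submission order, so the updates
cannot be re-ordered and no crowd of blocked kworkers is spawned.

#include <linux/workqueue.h>

static int foo_led_create_wq(struct foo_led *led)
{
	/* Single concurrency slot, strict FIFO; no extra WQ_* flags. */
	led->wq = alloc_ordered_workqueue("foo_led", 0);
	if (!led->wq)
		return -ENOMEM;
	return 0;
}

static void foo_led_queue_brightness(struct foo_led *led, unsigned int value)
{
	led->pending_value = value;
	/* Private ordered queue instead of schedule_work()/system_wq. */
	queue_work(led->wq, &led->update_work);
}

/* ...plus destroy_workqueue(led->wq) on driver removal. */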

Pavel, could you please share your thoughts on the problems above? If you
have a more advanced and scalable solution in mind, I would appreciate it
if you could share it with us.

-- 
Thank you,
Dmitry


