Re: [GIT PULL] workqueue fixes for v4.3-rc5

On Wed, Oct 14, 2015 at 12:38 PM, Tejun Heo <tj@xxxxxxxxxx> wrote:
>
> Doesn't seem that way.  This is from 597d0275736d^ - right before
> TIMER_NOT_PINNED is introduced.  add_timer() eventually calls into
> __mod_timer().
>
>                 if (likely(base->running_timer != timer)) {
>                         /* See the comment in lock_timer_base() */
>                         timer_set_base(timer, NULL);
>                         spin_unlock(&base->lock);
>                         base = new_base;
>                         spin_lock(&base->lock);
>                         timer_set_base(timer, base);
>
> It looks like the timers for work items will be reliably queued on the
> local CPU.

.. unless they are running on another cpu at the time, yes.

Also, the new base is not necessarily the current cpu base, although I
think the exceptions to that are pretty rare (i.e. you have to enable
timer migration etc).

Which I guess might not actually happen with workqueue timers due to
the extra workqueue locking thing, but I'm not sure. It's going to be
very unlikely, regardless, I agree.
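To make that concrete, here is a toy userspace model of the base-selection
logic in the `__mod_timer()` snippet quoted above (hypothetical names, not
the real kernel code): the timer moves to the new local base unless its
callback is currently running on the old base.

```c
#include <stddef.h>

/* Toy model of the quoted __mod_timer() path: the timer migrates to the
 * new (local CPU) base unless it is the timer currently running on its
 * old base, in which case it stays put. */
struct model_timer;

struct timer_base {
	int cpu;
	struct model_timer *running_timer;	/* timer whose callback runs now */
};

struct model_timer {
	struct timer_base *base;
};

/* Return the base the timer ends up queued on. */
static struct timer_base *pick_base(struct model_timer *timer,
				    struct timer_base *new_base)
{
	struct timer_base *base = timer->base;

	if (base != new_base && base->running_timer != timer) {
		/* safe to migrate: switch to the local CPU's base */
		base = new_base;
	}
	/* otherwise the timer keeps its old base */
	timer->base = base;
	return base;
}
```

So a delayed-work timer that gets re-armed while its callback is running on
another cpu keeps its old base, which is the ".. unless they are running on
another cpu" case above.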

> Heh, I don't think much in this area is intended.  It's mostly all
> historical accidents and failures to get things cleaned up in time.

No argument there.

> So, the following two things bother me about this.
>
> * Given that this is the first reported case of breakage, I don't
>   think this is gonna cause lots of critical issues; however, the
>   only thing this indicates is that there simply haven't been enough
>   cases where timers actually migrate.  If we end up migrating timers
>   more actively in the future, it's possible that we'll see more
>   breakages which will likely be subtler.

I agree that that's a real concern.

At the same time, some of the same pressures that push people to move
timers around (putting idle cores into deeper sleep states etc) would
also argue for moving delayed work to other cpus if possible.

So yes, there is a push to make timer cpu targets more dynamic in a way
we historically didn't really have, and I think the same forces that
want to move timers around will likely want to move delayed work around
too...

> * This makes queue_delayed_work() behave differently from queue_work()
>   and when I checked years ago the local queueing guarantee was
>   definitely being depended upon by some users.

Yes. But the delayed work really is different. By definition, we know
that the current cpu is busy and active _right_now_, and so keeping
work on that cpu isn't obviously wrong.

But it's *not* obviously right to schedule something on that
particular cpu a few seconds from now, when it might be happily asleep
and there might be better cpus to bother..
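A toy model of that difference (hypothetical names, nothing from the real
workqueue code): immediate work runs on the cpu it was queued from, but a
delayed item runs wherever its timer finally expires, so as soon as timers
are allowed to migrate, the local-queueing behavior silently goes away for
delayed work.

```c
/* Toy model of queue_work() vs. queue_delayed_work(): delayed work
 * runs on whatever CPU its timer ends up expiring on. */
struct model_dwork {
	int queued_cpu;		/* CPU that queued the delayed work */
	int timer_cpu;		/* CPU whose timer base holds the timer */
	int ran_cpu;		/* CPU the work item finally ran on */
};

static void model_queue_delayed_work(struct model_dwork *dw, int cur_cpu)
{
	dw->queued_cpu = cur_cpu;
	dw->timer_cpu = cur_cpu;	/* timer starts on the local base */
}

/* e.g. power-aware timer migration moving the timer off an idle CPU */
static void model_migrate_timer(struct model_dwork *dw, int new_cpu)
{
	dw->timer_cpu = new_cpu;
}

/* the expiry callback queues the work on the CPU the timer fired on */
static void model_timer_fires(struct model_dwork *dw)
{
	dw->ran_cpu = dw->timer_cpu;
}
```

With no migration, ran_cpu always equals queued_cpu; once the timer is
allowed to move, the two diverge, which is the kind of subtle breakage
being discussed in this thread.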

> I do want to get rid of the local queueing guarantee for all work
> items.  That said, I don't think this is the right way to do it.

Hmm. I guess that for being past rc5, taking your patch is the safe
thing. I really don't like it very much, though.

                    Linus

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .


