Re: [PATCH v3 4/4] kmod: throttle kmod thread limit

On Fri 2017-06-23 21:16:37, Luis R. Rodriguez wrote:
> On Fri, Jun 23, 2017 at 07:56:11PM +0200, Luis R. Rodriguez wrote:
> > On Fri, Jun 23, 2017 at 06:16:19PM +0200, Luis R. Rodriguez wrote:
> > > On Thu, Jun 22, 2017 at 05:19:36PM +0200, Petr Mladek wrote:
> > > > On Fri 2017-05-26 14:12:28, Luis R. Rodriguez wrote:
> > > > > --- a/kernel/kmod.c
> > > > > +++ b/kernel/kmod.c
> > > > > @@ -178,6 +175,7 @@ int __request_module(bool wait, const char *fmt, ...)
> > > > >  	ret = call_modprobe(module_name, wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);
> > > > >  
> > > > >  	atomic_inc(&kmod_concurrent_max);
> > > > > +	wake_up_all(&kmod_wq);
> > > > 
> > > > Does it make sense to wake up all waiters when we released the resource
> > > > only for one? IMHO, a simple wake_up() should be here.
> > > 
> > > Then we should also wake_up() on failure, otherwise we risk not waking
> > > some waiters in time.
> > 
> > I checked and it turns out we have no error paths after we consume a kmod
> > ticket, if you will. Once we bump with atomic_dec_if_positive() we assume
> > we're moving forward with an attempt, and the only failure path is already
> > bundled with a wake at the end of the __request_module() call.
> > 
> > Then the next question would be *who* exactly gets woken up next if we just
> > use wake_up() ? The common core wake up code varies depending on use and
> > all this reminded me of the complexity we just don't need, so I have now
> > converted to use swait. swait uses list_add() if empty and then iterates
> > with list_first_entry() on wakeup, so that should get the first item added
> > to the wait list.
> > 
> > Works for me. Will run a test before v4 is sent, but since only 2 patches
> > are modified I will only send a respective update for these 2 patches.
> 
> Alright, this worked out well! It's just a tiny bit slower on test cases 0008
> and 0009 (a few seconds), but that's fine; it's natural given the lack of
> swake_up_all().
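
For anyone skimming the thread, a rough, untested sketch of the swait-based
throttle described above could look like the following. The kmod_wq,
kmod_concurrent_max and call_modprobe() names come from the quoted diff;
MAX_KMOD_CONCURRENT and the rest of the glue are only illustrative:

#include <linux/atomic.h>
#include <linux/kmod.h>
#include <linux/swait.h>

#define MAX_KMOD_CONCURRENT 50	/* illustrative limit */

static atomic_t kmod_concurrent_max = ATOMIC_INIT(MAX_KMOD_CONCURRENT);
static DECLARE_SWAIT_QUEUE_HEAD(kmod_wq);

int __request_module(bool wait, const char *fmt, ...)
{
	...
	/*
	 * Grab a "ticket". If none are left, sleep until another caller
	 * returns one; the condition re-checks the counter, so a woken
	 * task only proceeds once it really got a ticket.
	 */
	if (atomic_dec_if_positive(&kmod_concurrent_max) < 0) {
		ret = swait_event_interruptible(kmod_wq,
			atomic_dec_if_positive(&kmod_concurrent_max) >= 0);
		if (ret)
			return ret;
	}

	ret = call_modprobe(module_name, wait ? UMH_WAIT_PROC : UMH_WAIT_EXEC);

	/* Return the ticket and wake a single waiter. */
	atomic_inc(&kmod_concurrent_max);
	swake_up(&kmod_wq);

	return ret;
}

The key point is that a woken task still has to win the
atomic_dec_if_positive() check before it may proceed.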

The timing difference on test cases 0008 and 0009 is interesting. I guess it
was faster with swake_up_all() because it acted as a speculative pre-wake:
there is some delay between adding a process to the run-queue and it actually
running. IMHO, swake_up_all() meant that __request_module() callers were more
often already running and retrying
atomic_dec_if_positive(&kmod_concurrent_max) >= 0.
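
To make that trade-off concrete, the release path in the sketch above could
use either wake-up flavor; this fragment is purely illustrative:

	/* Wake exactly one waiter: one ticket returned, one task woken. */
	atomic_inc(&kmod_concurrent_max);
	swake_up(&kmod_wq);

	/*
	 * ... or wake everyone: all sleepers become runnable and race for
	 * the ticket via atomic_dec_if_positive(); the losers go back to
	 * sleep, but the speculative wake-ups can hide the run-queue
	 * latency mentioned above.
	 */
	atomic_inc(&kmod_concurrent_max);
	swake_up_all(&kmod_wq);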

Best Regards,
Petr