Re: [PATCH 2/2] rpc: Fix segmentation fault when no worker pool is available

On Tue, Jun 19, 2018 at 05:03 PM +0200, "Daniel P. Berrangé" <berrange@xxxxxxxxxx> wrote:
> On Tue, Jun 19, 2018 at 03:55:09PM +0100, Daniel P. Berrangé wrote:
>> On Tue, Jun 19, 2018 at 04:51:13PM +0200, Erik Skultety wrote:
>> > On Mon, Jun 18, 2018 at 12:38:06PM +0200, Marc Hartmayer wrote:
>> > > On Mon, Jun 18, 2018 at 09:47 AM +0200, Erik Skultety <eskultet@xxxxxxxxxx> wrote:
>> > > > On Fri, Jun 15, 2018 at 03:31:34PM +0200, Marc Hartmayer wrote:
>> > > >> On Fri, Jun 15, 2018 at 01:39 PM +0200, Marc Hartmayer <mhartmay@xxxxxxxxxxxxx> wrote:
>> > > >> > If srv->workers is a NULL pointer, as it is the case if there are no
>> > > >> > workers, then don't try to dereference it.
>> > > >> >
>> > > >> > Signed-off-by: Marc Hartmayer <mhartmay@xxxxxxxxxxxxx>
>> > > >> > Reviewed-by: Boris Fiuczynski <fiuczy@xxxxxxxxxxxxx>
>> > > >> > ---
>> > > >> >  src/rpc/virnetserver.c | 22 +++++++++++++++-------
>> > > >> >  1 file changed, 15 insertions(+), 7 deletions(-)
>> > > >> >
>> > > >> > diff --git a/src/rpc/virnetserver.c b/src/rpc/virnetserver.c
>> > > >> > index 5ae809e372..be6f610880 100644
>> > > >> > --- a/src/rpc/virnetserver.c
>> > > >> > +++ b/src/rpc/virnetserver.c
>> > > >> > @@ -933,13 +933,21 @@ virNetServerGetThreadPoolParameters(virNetServerPtr srv,
>> > > >> >                                      size_t *jobQueueDepth)
>> > > >> >  {
>> > > >> >      virObjectLock(srv);
>> > > >> > -
>> > > >> > -    *minWorkers = virThreadPoolGetMinWorkers(srv->workers);
>> > > >> > -    *maxWorkers = virThreadPoolGetMaxWorkers(srv->workers);
>> > > >> > -    *freeWorkers = virThreadPoolGetFreeWorkers(srv->workers);
>> > > >> > -    *nWorkers = virThreadPoolGetCurrentWorkers(srv->workers);
>> > > >> > -    *nPrioWorkers = virThreadPoolGetPriorityWorkers(srv->workers);
>> > > >> > -    *jobQueueDepth = virThreadPoolGetJobQueueDepth(srv->workers);
>> > > >> > +    if (srv->workers) {
>> > > >> > +        *minWorkers = virThreadPoolGetMinWorkers(srv->workers);
>> > > >> > +        *maxWorkers = virThreadPoolGetMaxWorkers(srv->workers);
>> > > >> > +        *freeWorkers = virThreadPoolGetFreeWorkers(srv->workers);
>> > > >> > +        *nWorkers = virThreadPoolGetCurrentWorkers(srv->workers);
>> > > >> > +        *nPrioWorkers = virThreadPoolGetPriorityWorkers(srv->workers);
>> > > >> > +        *jobQueueDepth = virThreadPoolGetJobQueueDepth(srv->workers);
>> > > >> > +    } else {
>> > > >> > +        *minWorkers = 0;
>> > > >> > +        *maxWorkers = 0;
>> > > >> > +        *freeWorkers = 0;
>> > > >> > +        *nWorkers = 0;
>> > > >> > +        *nPrioWorkers = 0;
>> > > >> > +        *jobQueueDepth = 0;
>> > > >> > +    }
>> > > >> >
>> > > >> >      virObjectUnlock(srv);
>> > > >> >      return 0;
>> > > >> > --
>> > > >> > 2.13.6
>> > > >>
>> > > >> After thinking about it again, it probably makes more sense (and keeps the
>> > > >> code cleaner) to initialize the worker pool even for maxworkers=0 (within
>> > > >
>> > > > I don't understand why we should do that.
>> > >
>> > > Because right now several pieces of functionality are broken: segmentation
>> > > faults in virNetServerGet/SetThreadPoolParameters, and it's not possible to
>> > > start with maxworkers=0 and then change it at runtime via
>> >
>> > Naturally, since no workers means no one to process the request, that is IMHO
>> > the expected behaviour.
>>
>> Yes, a daemon should either run with no workers, or run with
>> 1 or more workers. It is not valid to change between these two modes.
>>
>> So if there's a codepath that lets you change from 0 -> 1 workers,
>> or the reverse, we should make sure it reports an error.
>>
>> Essentially workers=0 is only intended for things like virtlockd
>> or virtlogd, which don't need to be multithreaded, or indeed must
>> *never* be multithreaded to avoid tickling kernel bugs like
>> virtlockd did in the past.
>
> Also note that workers=0 will cause libvirtd to deadlock, because
> the QEMU driver (and others too) assumes that it runs in a separate
> thread from the main event loop.

Yep, I know. What is your preferred way to avoid this?
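
One possibility might be an explicit guard at the top of
virNetServerSetThreadPoolParameters() rejecting the change when the server
was started without a worker pool, so that a 0 -> N switch reports an error
rather than crashing or deadlocking. Rough, untested sketch (whether
VIR_ERR_INVALID_ARG is the right error code, and where exactly the check
belongs relative to the locking, would need checking):

    /* Sketch only: reject switching between the "no worker pool" and
     * "with worker pool" modes at runtime. */
    if (!srv->workers) {
        virReportError(VIR_ERR_INVALID_ARG, "%s",
                       _("cannot change worker pool parameters: the server "
                         "was started without a worker pool"));
        return -1;
    }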

>
> Regards,
> Daniel
> --
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
>
--
Beste Grüße / Kind regards
   Marc Hartmayer

IBM Deutschland Research & Development GmbH
Chairwoman of the Supervisory Board: Martina Koederitz
Managing Director: Dirk Wittkopp
Registered office: Böblingen
Commercial register: Amtsgericht Stuttgart, HRB 243294

