Re: [PATCH 2/2] rpc: Fix segmentation fault when no worker pool is available

On Mon, Jun 18, 2018 at 12:38:06PM +0200, Marc Hartmayer wrote:
> On Mon, Jun 18, 2018 at 09:47 AM +0200, Erik Skultety <eskultet@xxxxxxxxxx> wrote:
> > On Fri, Jun 15, 2018 at 03:31:34PM +0200, Marc Hartmayer wrote:
> >> On Fri, Jun 15, 2018 at 01:39 PM +0200, Marc Hartmayer <mhartmay@xxxxxxxxxxxxx> wrote:
> >> > If srv->workers is a NULL pointer, as it is the case if there are no
> >> > workers, then don't try to dereference it.
> >> >
> >> > Signed-off-by: Marc Hartmayer <mhartmay@xxxxxxxxxxxxx>
> >> > Reviewed-by: Boris Fiuczynski <fiuczy@xxxxxxxxxxxxx>
> >> > ---
> >> >  src/rpc/virnetserver.c | 22 +++++++++++++++-------
> >> >  1 file changed, 15 insertions(+), 7 deletions(-)
> >> >
> >> > diff --git a/src/rpc/virnetserver.c b/src/rpc/virnetserver.c
> >> > index 5ae809e372..be6f610880 100644
> >> > --- a/src/rpc/virnetserver.c
> >> > +++ b/src/rpc/virnetserver.c
> >> > @@ -933,13 +933,21 @@ virNetServerGetThreadPoolParameters(virNetServerPtr srv,
> >> >                                      size_t *jobQueueDepth)
> >> >  {
> >> >      virObjectLock(srv);
> >> > -
> >> > -    *minWorkers = virThreadPoolGetMinWorkers(srv->workers);
> >> > -    *maxWorkers = virThreadPoolGetMaxWorkers(srv->workers);
> >> > -    *freeWorkers = virThreadPoolGetFreeWorkers(srv->workers);
> >> > -    *nWorkers = virThreadPoolGetCurrentWorkers(srv->workers);
> >> > -    *nPrioWorkers = virThreadPoolGetPriorityWorkers(srv->workers);
> >> > -    *jobQueueDepth = virThreadPoolGetJobQueueDepth(srv->workers);
> >> > +    if (srv->workers) {
> >> > +        *minWorkers = virThreadPoolGetMinWorkers(srv->workers);
> >> > +        *maxWorkers = virThreadPoolGetMaxWorkers(srv->workers);
> >> > +        *freeWorkers = virThreadPoolGetFreeWorkers(srv->workers);
> >> > +        *nWorkers = virThreadPoolGetCurrentWorkers(srv->workers);
> >> > +        *nPrioWorkers = virThreadPoolGetPriorityWorkers(srv->workers);
> >> > +        *jobQueueDepth = virThreadPoolGetJobQueueDepth(srv->workers);
> >> > +    } else {
> >> > +        *minWorkers = 0;
> >> > +        *maxWorkers = 0;
> >> > +        *freeWorkers = 0;
> >> > +        *nWorkers = 0;
> >> > +        *nPrioWorkers = 0;
> >> > +        *jobQueueDepth = 0;
> >> > +    }
> >> >
> >> >      virObjectUnlock(srv);
> >> >      return 0;
> >> > --
> >> > 2.13.6
> >>
> >> After thinking about it again, it probably makes more sense (and makes the
> >> code more beautiful) to initialize the worker pool even for maxworkers=0 (within
> >
> > I don't understand why should we do that.
>
> Because right now several pieces of functionality are broken: segmentation
> faults in virNetServerGet/SetThreadPoolParameters, and it’s not possible to
> start with maxworkers=0 and then change it at runtime via

Naturally, since having no workers means there is no one to process the request,
that is IMHO the expected behaviour.

> virNetServerSetThreadPoolParameters. Sure, we can fix all these problems,
> but then we’d have to introduce new code paths. Why can’t we just call
> virThreadPoolNew(0, 0, …)? This will only initialize the memory
> structure of the pool and _no_ threads. The only change we’d have to make then
I got the picture from the previous message, it's just that I wasn't convinced.

> is to change the condition 'if (srv->workers)' in
> 'virNetServerDispatchNewMessage' to something like 'if
> (virThreadPoolGetMaxWorkers(srv->workers) > 0)'.
>
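Just to make sure I'm reading the proposal correctly, here's a rough, untested
sketch of what I think you mean - with the pool always allocated via
virThreadPoolNew(0, 0, ...) in virNetServerNew, the dispatch path would key off
the configured size rather than the pointer:

    /* Untested sketch, not actual tree code: assumes srv->workers is now
     * always non-NULL and that the existing job/priority handling in
     * virNetServerDispatchNewMessage() stays as it is today. */
    if (virThreadPoolGetMaxWorkers(srv->workers) > 0) {
        /* workers configured: queue the message for the pool, as before */
        if (virThreadPoolSendJob(srv->workers, priority, job) < 0)
            goto error;
    } else {
        /* max_workers == 0: stay single threaded and process the
         * request in-line */
        if (virNetServerProcessMsg(srv, client, prog, msg) < 0)
            goto error;
    }

If that's the idea, then srv->workers could indeed never be NULL and the extra
checks in Get/SetThreadPoolParameters wouldn't be needed at all.
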
> > We don't even initialize it for the
> > libvirtd server - the implications are clear: you don't have workers, you
> > don't get to process a job.
>
> Yep. I don’t want to change that behavior either.
>
> >
> >> virNetServerNew) (=> we'll have to adapt virNetServerDispatchNewMessage
> >> as well). BTW, there is also a segmentation fault in
> >> virThreadPoolSetParameters… And currently it’s not possible to start
> >> with maxworkers set to 0 and then increase it via
> >
> > Do I assume correctly that virNetServerDispatchNewMessage would allocate a new
> > worker if there was a request to process but the threadpool was empty?
>
> Do you mean with my proposed changes? Or without?
>
> > If so, I
> > don't see a reason to do that; why would anyone want to run with no
> > workers?
>
> Commit dff6d809fb6c851244ea07afd07f580d7320cc7a, which actually
> introduced this feature, says:
>
> “Allow RPC server to run single threaded
>
> Refactor the RPC server dispatcher code so that if 'max_workers==0' the
> entire server will run single threaded. This is useful for use cases
> where there will only ever be 1 client connected which serializes its
> requests”.

Having said all of the above, even though I'm surprised we have something like
^this, because to me max_workers == 0 means something different (mind the
risks), we should remain consistent, so thanks for pointing that out.
Besides, the more I think about it, it does make sense to be able to both
expand and shrink the worker pool as the user pleases, even down to a pool
size of 0, although it's true that setting it wrong just once (for whatever
reason) essentially locks you out.

Now, provided that no one has been using admin_max_workers == 0 in the config to
turn the admin interface off (/o\ why?), I think we might want to make a clear
statement here that max_workers must be/will always be at least 1, so that
there's always a single thread running and able to process admin requests;
without making this clear it might look confusing that GetThreadPoolParameters
would return max_workers=1 even though the caller tried to set it to 0. With
that said, would you need to make any changes to the RPC layer at all if you
adjusted SetThreadPoolParameters to always keep at least one worker in the pool?
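
Something along these lines is what I have in mind for the Set path (an
untested sketch; the parameter names just mirror the existing
virNetServerSetThreadPoolParameters code):

    /* Untested sketch: clamp the requested value before it reaches the
     * thread pool so that at least one worker is always left to serve
     * admin requests. */
    if (maxWorkers < 1) {
        VIR_WARN("forcing max_workers to 1 so that admin requests can "
                 "still be processed");
        maxWorkers = 1;
    }

    ret = virThreadPoolSetParameters(srv->workers, minWorkers,
                                     maxWorkers, prioWorkers);

Documenting that clamp should be enough to avoid the max_workers=1 vs 0
confusion mentioned above.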

Erik

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list



