Re: [PATCH 4/5] Introcude VIR_CONNECT_GET_ALL_DOMAINS_STATS_BEST_EFFORT

On Fri, Jun 15, 2018 at 09:19:44AM +0200, Michal Privoznik wrote:
> On 06/14/2018 05:35 PM, Daniel P. Berrangé wrote:
> > On Thu, Jun 14, 2018 at 11:07:43AM +0200, Michal Privoznik wrote:
> >> On 06/13/2018 05:34 PM, John Ferlan wrote:
> >>>
> >>> $SUBJ: "Introduce" and "NO_WAIT"
> >>>
> >>>
> >>> On 06/07/2018 07:59 AM, Michal Privoznik wrote:
> >>>> https://bugzilla.redhat.com/show_bug.cgi?id=1552092
> >>>>
> >>>> If there's a long running job it might cause us to wait 30
> >>>> seconds before we give up acquiring job. This may cause trouble
> >>>
> >>> s/job/the job/
> >>>
> >>> s/may cause trouble/is problematic/
> >>>
> >>>> to interactive applications that fetch stats repeatedly every few
> >>>> seconds.
> >>>>
> >>>> Solution is to introduce
> >>>
> >>> The solution is...
> >>>
> >>>> VIR_CONNECT_GET_ALL_DOMAINS_STATS_BEST_EFFORT flag which tries to
> >>>> acquire job but does not wait if acquiring failed.
> >>>>
> >>>> Signed-off-by: Michal Privoznik <mprivozn@xxxxxxxxxx>
> >>>> ---
> >>>>  include/libvirt/libvirt-domain.h |  1 +
> >>>>  src/libvirt-domain.c             | 10 ++++++++++
> >>>>  src/qemu/qemu_driver.c           | 15 ++++++++++++---
> >>>>  3 files changed, 23 insertions(+), 3 deletions(-)
> >>>>
> >>>> diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
> >>>> index da773b76cb..1a1d34620d 100644
> >>>> --- a/include/libvirt/libvirt-domain.h
> >>>> +++ b/include/libvirt/libvirt-domain.h
> >>>> @@ -2055,6 +2055,7 @@ typedef enum {
> >>>>      VIR_CONNECT_GET_ALL_DOMAINS_STATS_SHUTOFF = VIR_CONNECT_LIST_DOMAINS_SHUTOFF,
> >>>>      VIR_CONNECT_GET_ALL_DOMAINS_STATS_OTHER = VIR_CONNECT_LIST_DOMAINS_OTHER,
> >>>>  
> >>>> +    VIR_CONNECT_GET_ALL_DOMAINS_STATS_BEST_EFFORT = 1 << 29, /* ignore stalled domains */
> >>>
> >>> "stalled"?  How about "don't wait on other jobs"
> >>
> >> Well, my hidden idea was also that we could "misuse" this flag to not
> >> wait in other places too. For instance, if we found out (somehow) that
> >> a domain is in D state, we would consider it stale even without any
> >> job running on it. Okay, we have no way of detecting whether qemu is
> >> in D state right now, but you get my point. If we don't lock this flag
> >> down to just domain jobs (which not all drivers have, btw), we can use
> >> it more widely.
> > 
> > I would suggest we call it "NOWAIT", with an explanation that we will
> > only report statistics that can be obtained immediately without any
> > blocking, whatever the cause may be.
> 
> Okay, works for me. I'll post v2 shortly.
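
The no-wait behaviour under discussion boils down to a try-acquire on the
per-domain job. A minimal sketch of that idea (the bare mutex and the names
here are illustrative only; libvirt's real mechanism is the qemu driver's
condition-based job acquisition with a 30-second timeout, and the stats value
below is a stand-in for the real QMP query):

```c
#include <pthread.h>

/* Hypothetical per-domain state; libvirt's actual job machinery is a
 * condition variable with a timeout, not a bare mutex. */
typedef struct {
    pthread_mutex_t job;
} domain;

/* NOWAIT-style stats fetch: if the job cannot be taken immediately,
 * return -1 and let the caller skip this domain instead of blocking
 * for up to 30 seconds. */
static int domain_get_stats_nowait(domain *dom, unsigned long long *stats)
{
    if (pthread_mutex_trylock(&dom->job) != 0)
        return -1;                  /* busy: report nothing, don't wait */
    *stats = 42;                    /* placeholder for the real stats query */
    pthread_mutex_unlock(&dom->job);
    return 0;
}
```

Per the diffstat above, the actual patch threads the flag down into
src/qemu/qemu_driver.c, where the job-acquisition call sits.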
> 
> > 
> > 
> > On a tangent, I think this problem really calls for a significantly
> > different design approach, medium term.
> > 
> > The bulk stats query APIs were a good step forward on what we had
> > before where users must call many libvirt APIs, but it is still
> > not very scalable. With huge numbers of guests, we're still having
> > to serialize stats query calls into 1000's of QEMU processes.
> > 
> > I think we must work with QEMU to define a better interface, taking
> > advantage of the fact that we're colocated on the same host, i.e. we
> > tell QEMU we want stats exported in memory page <blah>, and QEMU will keep
> > that updated at all times.
> > 
> > When libvirt needs the info it can then just read it straight out of
> > the shared memory page, no blocking on any jobs, no QMP serialization,
> > etc.
> > 
> > For that matter, we can do a similar thing in libvirt API too. We can
> > export a shared memory region for applications to use, which we keep
> > updated on some regular interval that app requests. They can then always
> > access updated stats without calling libvirt APIs at all.
> 
> This is a clever idea. But qemu is not the only source of stats we
> gather. We also fetch some data from CGroups, /proc, the ovs bridge,
> etc. So libvirt would need to add its own stats for clients to see.
> This means there will be a function that updates the shared mem every
> so often (at whatever rate the client requests via some new API?). The
> same goes for the qemu impl. Now imagine two clients wanting two
> different refresh rates with GCD 1 :-)

Yes, the shared mem areas used by qemu would have to be separate from
the one exposed by libvirt. Each QEMU would expose a separate area to
libvirt, and libvirt would aggregate stats from every QEMU, and also
the other places like cgroups, and then put them into its own single
shared mem area.
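
For the reader to pull a consistent snapshot out of such a page without ever
blocking the writer, a seqcount-style version field is one obvious approach.
A sketch of that idea (the struct layout and field names are made up for
illustration, and the memory ordering is simplified; a production seqlock
needs explicit fences around the plain loads):

```c
#include <stdatomic.h>

/* Hypothetical layout of a per-QEMU stats page. The version counter is
 * odd while the writer is mid-update, even when the page is stable. */
typedef struct {
    _Atomic unsigned version;
    unsigned long long rx_bytes;
    unsigned long long tx_bytes;
} stats_page;

typedef struct {
    unsigned long long rx_bytes;
    unsigned long long tx_bytes;
} stats_snap;

/* Writer side (QEMU): bump to odd, update, bump back to even. */
static void page_update(stats_page *p, unsigned long long rx,
                        unsigned long long tx)
{
    atomic_fetch_add_explicit(&p->version, 1, memory_order_acq_rel);
    p->rx_bytes = rx;
    p->tx_bytes = tx;
    atomic_fetch_add_explicit(&p->version, 1, memory_order_release);
}

/* Reader side (libvirt): retry until the same even version is seen
 * before and after copying the fields, i.e. no torn read. */
static stats_snap page_snapshot(stats_page *p)
{
    stats_snap s;
    unsigned v1, v2;
    do {
        v1 = atomic_load_explicit(&p->version, memory_order_acquire);
        s.rx_bytes = p->rx_bytes;
        s.tx_bytes = p->tx_bytes;
        v2 = atomic_load_explicit(&p->version, memory_order_acquire);
    } while (v1 != v2 || (v1 & 1));
    return s;
}
```

libvirt's aggregator would run the reader side against each QEMU's page and
the writer side against its own client-facing page.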

I would imagine we could be restrictive with the refresh rate, e.g.
require 5 seconds or a multiple of 5 seconds, so we can easily
reconcile multiple client rates.
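
The multiple-of-5 restriction makes the reconciliation trivial: round each
client's requested rate up to a multiple of 5 and take the GCD, so every
client's deadline lands on an update tick. A sketch (these helper names are
hypothetical, not a proposed libvirt API):

```c
#include <stddef.h>

/* Round a requested refresh rate up to the nearest multiple of 5s. */
static unsigned round_up_to_5(unsigned secs)
{
    return secs < 5 ? 5 : ((secs + 4) / 5) * 5;
}

static unsigned gcd(unsigned a, unsigned b)
{
    while (b) {
        unsigned t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Pick the daemon's single update period from all client rates: the
 * GCD of the rounded rates divides every rate, so each client sees a
 * fresh page at its own interval. */
static unsigned effective_period(const unsigned *rates, size_t n)
{
    unsigned g = 0;
    for (size_t i = 0; i < n; i++)
        g = gcd(g, round_up_to_5(rates[i]));
    return g ? g : 5;
}
```

E.g. clients asking for 10s and 15s give a single 5s update loop, instead of
the GCD-of-1 pathology with unconstrained rates.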

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list



