Re: libvirt and accessing remote systems

On Thu, Jan 25, 2007 at 04:56:23AM -0500, Daniel Veillard wrote:
> On Wed, Jan 24, 2007 at 11:48:47PM +0000, Daniel P. Berrange wrote:
> > On Wed, Jan 24, 2007 at 02:17:31PM +0000, Richard W.M. Jones wrote:
> > >  * Another proposal was to make all libvirt calls remote
> > >    (http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-3.png)
> > >    but I don't think this is a going concern because (1) it requires
> > >    a daemon always be run, which is another installation problem and
> > >    another chance for sysadmins to give up, and (2) the perception will
> > >    be that this is slow, whether or not that is actually true.
> > 
> > I'd never compared performance of direct hypercalls vs libvirt_proxy
> > before, so I did a little test. The most commonly called method in
> > virt-manager is virDomainGetInfo, for fetching current status of a
> > running domain - we call that once a second per guest.
> > 
> > So I wrote a simple program in C which calls virDomainGetInfo 100,000
> > times for 3 active guest VMs. I ran the test under a couple of different
> > libvirt backends. The results were:
> > 
> >  1. As root, direct hypercalls                ->  1.4 seconds
> >  2. As non-root, hypercalls via libvirt_proxy -> 9 seconds
> >  3. As non-root, via XenD                     -> 45 minutes [1]
> > 
> > So although it is 10x slower than hypercalls, the libvirt_proxy is
> > actually pretty damn fast - 9 seconds for 300,000 calls.
> 
>   Interesting figures. I had expected the proxy inter-process communication
> to slow things down more; I guess it works well because scheduling follows
> the message passing exactly, so there is little latency in the RPC. Was that
> on a uniprocessor machine?

It was a dual core machine, so there wasn't so much process-contention as
you'd get on UP.
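
For anyone who wants to reproduce the numbers, the test program was
essentially a tight loop of this shape (a minimal sketch rather than the
exact code I ran; the connection URI, error handling and domain count
here are illustrative):

#include <stdio.h>
#include <libvirt/libvirt.h>

#define ITERATIONS 100000
#define MAX_DOMS   3

int main(void)
{
    virConnectPtr conn;
    virDomainPtr doms[MAX_DOMS];
    int ids[MAX_DOMS];
    int ndoms, i, j;

    /* NULL URI picks the default (Xen) driver; as root this goes via
       direct hypercalls, as non-root via the proxy or XenD */
    conn = virConnectOpenReadOnly(NULL);
    if (!conn) {
        fprintf(stderr, "unable to open connection\n");
        return 1;
    }

    ndoms = virConnectListDomains(conn, ids, MAX_DOMS);
    for (i = 0; i < ndoms; i++)
        doms[i] = virDomainLookupByID(conn, ids[i]);

    /* Poll the status of each running guest, ITERATIONS times over */
    for (j = 0; j < ITERATIONS; j++) {
        for (i = 0; i < ndoms; i++) {
            virDomainInfo info;
            if (virDomainGetInfo(doms[i], &info) < 0) {
                fprintf(stderr, "virDomainGetInfo failed\n");
                return 1;
            }
        }
    }

    for (i = 0; i < ndoms; i++)
        virDomainFree(doms[i]);
    virConnectClose(conn);
    return 0;
}

Build it with something along the lines of 'gcc -o getinfo getinfo.c -lvirt'
and run it under 'time' as root and as non-root to get the two figures above;
the only thing that changes between the runs is which backend the connection
ends up using.
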

> 
> > [1] It didn't actually finish after 45 seconds. I just got bored of waiting.
> 
> s/seconds/minutes/ I guess, and you checked the CPU was at 100% and not a
> bad deadlock somewhere, right ;-) ?

Of course I meant minutes here :-) At least 60% of the CPU time was the
usual problem of XenStoreD doing stupid amounts of I/O.

Dan.
-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=| 

