[RFC, PATCH 0/24] VMI i386 Linux virtualization interface proposal

>-----Original Message-----
>From: Zachary Amsden [mailto:zach@xxxxxxxxxx]
>Sent: Monday, March 13, 2006 9:58 AM
>To: Linus Torvalds; Linux Kernel Mailing List; Virtualization Mailing
>List; Xen-devel; Andrew Morton; Zach Amsden; Daniel Hecht; Daniel Arai;
>Anne Holler; Pratap Subrahmanyam; Christopher Li; Joshua LeVasseur;
>Chris Wright; Rik Van Riel; Jyothy Reddy; Jack Lo; Kip Macy; Jan
>Beulich; Ky Srinivasan; Wim Coekaerts; Leendert van Doorn; Zach Amsden
>Subject: [RFC, PATCH 0/24] VMI i386 Linux virtualization interface
>proposal

>In OLS 2005, we described the work that we have been doing in VMware
>with respect to a common interface for paravirtualization of Linux. We
>shared the general vision in Rik's virtualization BoF.

>This note is an update on our further work on the Virtual Machine
>Interface, VMI.  The patches provided have been tested on 2.6.16-rc6.
>We are currently recollecting performance information for the new -rc6
>kernel, but expect our numbers to match previous results, which showed
>no impact whatsoever on macro benchmarks, and nearly negligible impact
>on microbenchmarks.

Folks,

I'm a member of the performance team at VMware & I recently did a
round of testing to measure the performance of a set of benchmarks
on the following two Linux variants, both running natively:
 1) 2.6.16-rc6 including VMI + 64MB hole
 2) 2.6.16-rc6 not including VMI + no 64MB hole
The intent was to measure the overhead of VMI calls on native runs.
Data was collected on both P4 & Opteron boxes.  The workloads used
were dbench/1client, netperf/receive+send, UP+SMP kernel compile,
lmbench, & some VMware in-house kernel microbenchmarks.  The CPU(s)
were pegged for all workloads except netperf, for which I include
CPU utilization measurements.

Attached please find an HTML file presenting the benchmark results
collected, expressed as the ratio of 1) to 2), along with the raw
scores given in brackets.  System configurations & benchmark
descriptions are given at the end of the webpage; more details are
available on request.  Also attached for reference is an HTML file
giving the width of the 95% confidence interval around the mean of the
scores reported for each benchmark, expressed as a percentage of the mean.
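
For anyone who wants to reproduce the statistics, here is a minimal
sketch (Python, using made-up sample scores, not our measured data) of
how a per-benchmark VMI/native ratio & a 95% confidence-interval
half-width, expressed as a percentage of the mean, can be computed
from repeated runs:

# Hypothetical illustration of the reported statistics: VMI/native
# score ratio & 95% CI half-width as a percentage of the mean.
# The sample values below are invented, not our measured data.
import statistics
from math import sqrt

def ci_halfwidth_pct(samples, t_crit=2.776):
    """95% CI half-width around the mean, as a percentage of the mean.
    t_crit defaults to the two-tailed t value for 4 d.o.f. (5 runs)."""
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / sqrt(len(samples))
    return 100.0 * t_crit * sem / mean

vmi_native = [498.2, 501.1, 499.7, 500.4, 499.0]    # e.g. dbench MB/s, VMI kernel
plain_native = [500.1, 502.3, 500.8, 501.5, 500.9]  # same benchmark, non-VMI kernel

ratio = statistics.mean(vmi_native) / statistics.mean(plain_native)
print("VMI/native ratio: %.3f" % ratio)
print("CI half-width (VMI): +/-%.2f%% of mean" % ci_halfwidth_pct(vmi_native))
print("CI half-width (native): +/-%.2f%% of mean" % ci_halfwidth_pct(plain_native))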

As you can see on the benchmark results webpage, the VMI-Native
& Native scores for almost all workloads match within the 95%
confidence interval.  On the P4, only 4 workloads, all lmbench
microbenchmarks (forkproc, shproc, mmap, pagefault), were outside the
interval, & the overheads (2%, 1%, 2%, 1%, respectively) are low.
The Opteron microbenchmark data was a little more ragged than the
P4 data in terms of variance, but it appears that only a few
lmbench microbenchmarks (forkproc, execproc, shproc) were outside
their confidence intervals, & they show low overheads (4%, 3%, 2%,
respectively); our in-house segv & divzero microbenchmarks seemed to
show measurable overheads as well (8% & 9%).

-Regards, Anne Holler (anne@xxxxxxxxxx)
Attachments (benchmark scores & confidence-interval widths):
http://lists.osdl.org/pipermail/virtualization/attachments/20060320/e908cda3/score.2.6.16-rc6.html
http://lists.osdl.org/pipermail/virtualization/attachments/20060320/e908cda3/confid.2.6.16-rc6.html
