On Mon, Mar 15, 2010 at 02:03:11PM +0100, Joerg Roedel wrote:
> On Mon, Mar 15, 2010 at 05:53:13AM -0700, Muli Ben-Yehuda wrote:
> > On Mon, Mar 15, 2010 at 02:25:41PM +0200, Avi Kivity wrote:
> > > On 03/10/2010 11:30 PM, Luiz Capitulino wrote:
> > > >
> > > > Hi there,
> > > >
> > > > Our wiki page for the Summer of Code 2010 is doing quite well:
> > > >
> > > > http://wiki.qemu.org/Google_Summer_of_Code_2010
> > >
> > > I will add another project - iommu emulation. Could be very
> > > useful for doing device assignment to nested guests, which could
> > > make testing a lot easier.
> >
> > Our experiments show that nested device assignment is pretty much
> > required for I/O performance in nested scenarios.
>
> Really? I did a small test with virtio-blk in a nested guest (disk
> read with dd, so not a real benchmark) and got a reasonable
> read-performance of around 25MB/s from the disk in the l2-guest.

Netperf running in L1 with direct access: ~950 Mbps throughput with 25%
CPU utilization.

Netperf running in L2 with virtio between L2 and L1 and direct
assignment between L1 and L0: roughly the same throughput, but over 90%
CPU utilization!

Now extrapolate to 10GbE.

Cheers,
Muli
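
For what the "extrapolate to 10GbE" step implies, here is a back-of-envelope
sketch (my numbers, not from a real measurement, and it assumes CPU cost
scales roughly linearly with throughput, which is only a first-order
approximation):

```python
# Naive linear extrapolation of the netperf figures quoted above.
# Assumption (illustrative only): CPU cost grows linearly with throughput.
l2_throughput_gbps = 0.95   # ~950 Mbps measured in L2 via virtio + assignment
l2_cpu_util = 0.90          # >90% of one CPU at that rate

target_gbps = 10.0          # a 10GbE link
projected_cpu = l2_cpu_util * (target_gbps / l2_throughput_gbps)

print(f"Projected CPU to saturate 10GbE: ~{projected_cpu:.1f} CPUs' worth")
```

Under that (admittedly crude) linear model, the virtio-in-L2 path would need
on the order of nine to ten CPUs' worth of cycles to drive a 10GbE link,
which is the point of the extrapolation.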