On Tue, Aug 19, 2014 at 11:36:31AM +0300, Razya Ladelsky wrote:
> > That was just one example. There are many other possibilities: either
> > actually make the systems load all host CPUs equally, or divide
> > throughput by host CPU.
> 
> The polling patch adds this capability to vhost, reducing costly exit
> overhead when the VM is loaded.
> 
> To load the VM, I ran netperf with a message size of 256 bytes:
> 
> Without polling: 2480 Mbits/sec, utilization: vm 100%, vhost 64%
> With polling:    4160 Mbits/sec, utilization: vm 100%, vhost 100%
> 
> Therefore, throughput/cpu without polling is 2480/(100+64) = 15.1,
> and with polling 4160/(100+100) = 20.8.

Can you please present the results in a form that makes it possible to
see the effect on various configurations and workloads? Here's one
example where this was done:
https://lkml.org/lkml/2014/8/14/495
You really should also provide data about your host configuration
(missing in the above link).

> My intention was to load vhost as close as possible to 100%
> utilization without polling, in order to compare it to the polling
> case (where vhost is always at 100%).
> The best use case, of course, will be when the shared vhost thread
> work (TBD) is integrated: vhost will then actually use its polling
> cycles to handle requests from multiple devices (even from multiple
> VMs).
> 
> Thanks,
> Razya
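
For concreteness, the mechanism under discussion is roughly the loop
below. This is only a sketch: the helpers (vq_has_work, vq_process,
vq_enable_kick, vq_disable_kick, wait_for_kick) and the poll_ns budget
are illustrative stand-ins, not the actual vhost API.

/*
 * Sketch only -- not the real vhost implementation. Instead of
 * sleeping until the guest kicks the host (each kick costing a VM
 * exit), the worker busy-polls the virtqueue for a bounded budget,
 * trading host CPU for fewer exits.
 */
#include <stdbool.h>
#include <time.h>

struct virtqueue;                           /* opaque for the sketch */

/* Hypothetical helpers assumed by this sketch: */
bool vq_has_work(struct virtqueue *vq);     /* descriptors pending?  */
void vq_process(struct virtqueue *vq);      /* handle pending work   */
void vq_enable_kick(struct virtqueue *vq);  /* re-arm notifications  */
void vq_disable_kick(struct virtqueue *vq); /* suppress notifications */
void wait_for_kick(struct virtqueue *vq);   /* block until a kick    */

static long long now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void worker(struct virtqueue *vq, long long poll_ns)
{
	for (;;) {
		long long deadline = now_ns() + poll_ns;

		vq_disable_kick(vq);	/* no exits while we poll */

		/* Poll phase: spin on the queue for up to poll_ns,
		 * resetting the budget whenever work shows up. */
		while (now_ns() < deadline) {
			if (vq_has_work(vq)) {
				vq_process(vq);
				deadline = now_ns() + poll_ns;
			}
		}

		/* Queue stayed idle for a full budget: stop burning
		 * CPU and fall back to exit-driven notification. */
		vq_enable_kick(vq);
		if (!vq_has_work(vq))	/* recheck: avoid a lost wakeup */
			wait_for_kick(vq);
	}
}

The trade-off shows up directly in the numbers above: the poll phase
burns host CPU even while the queue is briefly empty (vhost goes from
64% to 100%), but it avoids an exit per notification, so throughput
per CPU still improves (15.1 -> 20.8).

-- 
MST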