>> > Results:
>> >
>> > Netperf, 1 vm:
>> > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
>> > Number of exits/sec decreased 6x.
>> > The same improvement was shown when I tested with 3 vms running netperf
>> > (4086 MB/sec -> 5545 MB/sec).
>> >
>> > filebench, 1 vm:
>> > ops/sec improved by 13% with the polling patch. Number of exits was
>> > reduced by 31%.
>> > The same experiment with 3 vms running filebench showed similar numbers.
>> >
>> > Signed-off-by: Razya Ladelsky <razya@xxxxxxxxxx>
>>
>> Gave it a quick try on s390/kvm. As expected, it makes no difference
>> for a big streaming workload like iperf.
>> uperf with a 1-1 round robin did indeed get faster, by about 30%.
>> The high CPU consumption is something that bothers me, though, as
>> virtualized systems tend to be full.
>>
>
> Thanks for confirming the results!
> The best way to use this patch would be along with a shared vhost thread
> for multiple devices/vms, as described in:
> http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/479e3578ed05bfac85257b4200427735!OpenDocument
> This work assumes having a dedicated I/O core where the vhost thread
> serves multiple vms, which makes the high CPU utilization less of a
> concern.

Hi Razya, Shirley,

I am going to test the combination of "several vhost threads (depending on
the total number of CPUs on the host, e.g., total_number * 1/3) serving all
VMs" and "vhost: add polling mode". I now have the patch
"http://thread.gmane.org/gmane.comp.emulators.kvm.devel/88682/focus=88723"
posted by Shirley; is there any update to this patch?

Also, I want to make a small change to this patch: create
total_cpu_number * 1/N (N={3,4}) vhost threads, instead of a per-cpu vhost
thread, to serve all VMs. Any ideas?
Thanks,
Zhang Haoyu

>
>> > +static int poll_start_rate = 0;
>> > +module_param(poll_start_rate, int, S_IRUGO|S_IWUSR);
>> > +MODULE_PARM_DESC(poll_start_rate, "Start continuous polling of
>> virtqueue when rate of events is at least this number per jiffy. If
>> 0, never start polling.");
>> > +
>> > +static int poll_stop_idle = 3*HZ; /* 3 seconds */
>> > +module_param(poll_stop_idle, int, S_IRUGO|S_IWUSR);
>> > +MODULE_PARM_DESC(poll_stop_idle, "Stop continuous polling of
>> virtqueue after this many jiffies of no work.");
>>
>> This seems ridiculously high. Even one jiffy is an eternity, so
>> setting it to 1 as a default would reduce the CPU overhead for most cases.
>> If we don't have a packet in one millisecond, we can surely go back
>> to the kick approach, I think.
>>
>> Christian
>>
>
> Good point, will reduce it and recheck.
> Thank you,
> Razya
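For readers following the poll_start_rate/poll_stop_idle discussion above, the start/stop heuristic can be sketched as a user-space model. This is only an illustration of the mechanism being debated, not the patch's actual code: the function names, the struct, and the explicit `now` parameter standing in for the kernel's jiffies counter are all invented here.

```c
#include <assert.h>

/* User-space model of the polling heuristic discussed above.
 * All names and the simulated jiffies parameter are illustrative. */

static int poll_start_rate = 10; /* kicks per jiffy needed to start polling */
static int poll_stop_idle  = 1;  /* jiffies of no work before stopping
                                  * (Christian's suggested default) */

struct vq_poll_state {
	int polling;             /* nonzero while in continuous-polling mode */
	unsigned long last_work; /* jiffy when the virtqueue last had work */
};

/* Called once per jiffy with the number of guest kicks seen in that jiffy. */
static void rate_tick(struct vq_poll_state *s, unsigned long now, int kicks)
{
	if (!s->polling && poll_start_rate && kicks >= poll_start_rate) {
		s->polling = 1;  /* event rate is high: switch to polling */
		s->last_work = now;
	}
}

/* Called on each poll attempt; found_work says whether the ring was non-empty. */
static void poll_tick(struct vq_poll_state *s, unsigned long now, int found_work)
{
	if (!s->polling)
		return;
	if (found_work)
		s->last_work = now;
	else if (now - s->last_work > (unsigned long)poll_stop_idle)
		s->polling = 0;  /* idle too long: fall back to guest kicks */
}
```

With poll_stop_idle set to 1, as suggested in the thread, the thread stops spinning after roughly one jiffy of idleness; the patch's 3*HZ default would keep it busy-polling for three seconds of silence.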