On 06/29/2011 03:28 PM, Xiao Guangrong wrote:
> On 06/29/2011 08:18 PM, Avi Kivity wrote:
>> On 06/29/2011 02:50 PM, Xiao Guangrong wrote:
>>>>>> I think we should do this unconditionally.  The cost of
>>>>>> ping-ponging the shared cache line containing reader_counter will
>>>>>> increase with large smp counts.  On the other hand, zap_page is
>>>>>> very rare, so it can be a little slower.  Also, less code paths =
>>>>>> easier to understand.
>>>>>
>>>>> On soft mmu, zap_page is very frequently, it can cause performance
>>>>> regression in my test.
>>>>
>>>> Any idea what the cause of the regression is?  It seems to me that
>>>> simply deferring freeing shouldn't have a large impact.
>>>
>>> I guess it is because the page is freed too frequently, i have done
>>> the test, it shows about 3219 pages is freed per second
>>>
>>> Kernbench performance comparing:
>>>
>>> the origin way: 3m27.723
>>> free all shadow page in rcu context: 3m30.519
>>
>> I don't recall seeing such a high free rate.  Who is doing all this
>> zapping?
>>
>> You may be able to find out with the function tracer + call graph.
>>
> I looked into it before, it is caused by "write flood" detected, i also
> noticed some pages are zapped and allocation again and again, maybe we
> need to improve the algorithm of detecting "write flood".
Ok.  Let's drop the two paths, and put this improvement on the TODO instead.

-- 
error compiling committee.c: too many arguments to function