On Wed, 2010-06-16 at 11:38 +0300, Avi Kivity wrote:
> On 06/15/2010 04:55 PM, Dave Hansen wrote:
> > These seem to boot and run fine.  I'm running about 40 VMs at
> > once, while doing "echo 3 > /proc/sys/vm/drop_caches", and
> > killing/restarting VMs constantly.
>
> Will drop_caches actually shrink the kvm caches too?  If so we probably
> need to add that to autotest since it's a really good stress test for
> the mmu.

I'm completely sure it does.  I crashed my machines several times this
way during testing.

> > Seems to be relatively stable, and seems to keep the numbers
> > of kvm_mmu_page_header objects down.
>
> That's not necessarily a good thing, those things are expensive to
> recreate.  Of course, when we do need to reclaim them, that should be
> efficient.

Oh, I meant that I didn't break the shrinker completely.

> We also do a very bad job of selecting which page to reclaim.  We need
> to start using the accessed bit on sptes that point to shadow page
> tables, and then look those up and reclaim unreferenced pages sooner.
> With shadow paging there can be tons of unsync pages that are basically
> unused and can be reclaimed at no cost to future runtime.

Sounds like a good next step.

-- Dave