On Wed, Oct 09, 2013 at 10:47:10PM -0300, Marcelo Tosatti wrote:
> > >> Gleb has an idea that uses SLAB_DESTROY_BY_RCU to protect the shadow page
> > >> table and encodes the page level into the spte (since we need to check
> > >> whether the spte is the last spte). How about this?
> > >
> > > Pointer please? Why is SLAB_DESTROY_BY_RCU any safer than call_rcu with
> > > regard to the limitation? (Maybe it is.)
> >
> > In my experience, freeing shadow pages and allocating shadow pages are
> > balanced; we can check it with a make -j12 on a guest with 4 vcpus:
> >
> > # echo > trace
> > [root@eric-desktop tracing]# cat trace > ~/log | sleep 3
> > [root@eric-desktop tracing]# cat ~/log | grep new | wc -l
> > 10816
> > [root@eric-desktop tracing]# cat ~/log | grep prepare | wc -l
> > 10656
> > [root@eric-desktop tracing]# cat set_event
> > kvmmmu:kvm_mmu_get_page
> > kvmmmu:kvm_mmu_prepare_zap_page
> >
> > alloc vs. free = 10816 : 10656
> >
> > So nearly all allocation and freeing is satisfied from the slab's cache,
> > and the slab frees shadow pages very slowly, so there is no RCU issue.
>
> A more detailed test case would be:
>
> - cpu0-vcpu0 releasing pages as fast as possible
> - cpu1 executing get_dirty_log
>
> Think of a very large guest.

The number of shadow pages allocated from the slab will be bounded by
n_max_mmu_pages, and, in addition, a page released to the slab is
immediately available for reallocation; there is no need to wait for a
grace period. RCU comes into play only when the slab is shrunk, which
should be almost never. If a SLAB_DESTROY_BY_RCU slab does not rate limit
how it frees its pages, that is for the slab to fix, not for its users.

--
			Gleb.
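For reference, a minimal sketch of the slab semantics being relied on above,
using a hypothetical "foo" cache rather than the real kvm_mmu_page code (the
object name and the generation check are illustrative assumptions, not KVM's
actual scheme): with SLAB_DESTROY_BY_RCU, a freed object may be handed out
again immediately for another object of the same type, and only the backing
slab pages are held back until an RCU grace period has elapsed, so a lockless
reader may safely dereference a possibly stale pointer but must then
revalidate that it still refers to the object it expected.

#include <linux/slab.h>
#include <linux/rcupdate.h>

/* Hypothetical object type; the "gen" revalidation field is an
 * illustrative assumption, not part of any existing KVM structure. */
struct foo {
	unsigned long gen;	/* bumped each time the object is reused */
	/* ... payload ... */
};

static struct kmem_cache *foo_cache;

static int foo_cache_init(void)
{
	/* Freed objects can be reallocated at once; only the underlying
	 * slab pages are returned to the page allocator after an RCU
	 * grace period. */
	foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
				      0, SLAB_DESTROY_BY_RCU, NULL);
	return foo_cache ? 0 : -ENOMEM;
}

/* Lockless reader: the pointer stays type-safe across a concurrent free,
 * but the object may have been recycled, so recheck its identity after
 * the dereference. */
static bool foo_still_valid(struct foo *f, unsigned long expected_gen)
{
	bool ok;

	rcu_read_lock();
	ok = (ACCESS_ONCE(f->gen) == expected_gen);
	rcu_read_unlock();
	return ok;
}

Presumably the "encode the page level into the spte" idea quoted at the top
of the thread plays this same revalidation role for the lockless shadow page
table walk, i.e. it lets the walker notice that an spte no longer belongs to
the page it started from.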