On Tue, Jun 01, 2010 at 09:05:38PM +0900, Takuya Yoshikawa wrote:
> (2010/06/01 19:55), Marcelo Tosatti wrote:
>
> >>>Sorry, but I have to say that the mmu_lock spin_lock problem had
> >>>completely slipped my mind. Although I looked through the code, it
> >>>does not seem easy to move the set_bit_user to outside of the
> >>>spinlock section without breaking the semantics of its protection.
> >>>
> >>>So this may take some time to solve.
> >>>
> >>>But personally, I want to do something for x86's "vmalloc() every
> >>>time" problem even though moving dirty bitmaps to user space cannot
> >>>be achieved soon.
> >>>
> >>>In that sense, do you mind if we do double buffering without moving
> >>>dirty bitmaps to user space?
> >>
> >>So I would be happy if you could give me any comments about this kind
> >>of other option.
> >
> >What if you pin the bitmaps?
>
> Yes, pinning the bitmaps works. The small problem is that we would need
> to hold a dirty_bitmap_pages[] array for every slot, the size of this
> array depends on the slot length, and of course there is the pinning
> itself.
>
> From the performance point of view, having a double-sized vmalloc'ed
> area may be better.
>
> >
> >The alternative to that is to move mark_page_dirty(gfn) before the
> >acquisition of mmu_lock, in the page fault paths. The downside of that
> >is a potentially (large?) number of false positives in the dirty
> >bitmap.
> >
>
> Interesting, but probably dangerous.
>
> From my experience, though this includes my personal view, removing the
> vmalloc() currently done by x86 every time is the simplest and most
> effective change.
>
> So if you don't mind, I want to double the size of the vmalloc'ed area
> for x86 without changing other parts.
>
> ==> If this one extra bitmap is problematic, dirty logging itself would
> already be in danger of failure: we need to allocate the same size at
> the time of the switch anyway.
>
> Make sense?

That seems the most sensible approach.

>
> We can consider moving dirty bitmaps to user space later.
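
For reference, a minimal sketch of the doubled-bitmap idea being
discussed. This is not the actual KVM code; all names here (demo_memslot,
demo_create_dirty_bitmap, demo_switch_dirty_bitmap, dirty_bitmap_head)
are hypothetical. The point it illustrates is allocating both halves up
front and flipping between them at dirty-log time, so no vmalloc() is
needed in that path.

/*
 * Hedged sketch only -- not the real KVM code.  All identifiers are
 * made up for illustration.
 */
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

struct demo_memslot {
	unsigned long npages;
	unsigned long *dirty_bitmap;		/* half currently being set */
	unsigned long *dirty_bitmap_head;	/* start of the doubled area */
};

static unsigned long demo_dirty_bytes(struct demo_memslot *slot)
{
	return ALIGN(slot->npages, BITS_PER_LONG) / 8;
}

/* Allocate both halves up front, when the slot is created. */
static int demo_create_dirty_bitmap(struct demo_memslot *slot)
{
	unsigned long bytes = demo_dirty_bytes(slot);

	slot->dirty_bitmap_head = vmalloc(bytes * 2);
	if (!slot->dirty_bitmap_head)
		return -ENOMEM;
	memset(slot->dirty_bitmap_head, 0, bytes * 2);
	slot->dirty_bitmap = slot->dirty_bitmap_head;
	return 0;
}

/*
 * At dirty-log time: point the slot at the other (already cleared)
 * half and return the old half, which the caller copies to user space
 * and then clears for the next round.  No vmalloc() in this path.
 */
static unsigned long *demo_switch_dirty_bitmap(struct demo_memslot *slot)
{
	unsigned long bytes = demo_dirty_bytes(slot);
	unsigned long *old = slot->dirty_bitmap;

	if (old == slot->dirty_bitmap_head)
		slot->dirty_bitmap = slot->dirty_bitmap_head +
				     bytes / sizeof(unsigned long);
	else
		slot->dirty_bitmap = slot->dirty_bitmap_head;

	return old;
}

Whatever locking the real code uses around dirty_bitmap would still be
needed around the switch; the sketch only shows where the allocation
cost moves.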