On Thu, Apr 22, 2010 at 04:40:03PM +0100, Mel Gorman wrote:
> On Thu, Apr 22, 2010 at 11:18:14PM +0900, Minchan Kim wrote:
> > On Thu, Apr 22, 2010 at 11:14 PM, Mel Gorman <mel@xxxxxxxxx> wrote:
> > > On Thu, Apr 22, 2010 at 07:51:53PM +0900, KAMEZAWA Hiroyuki wrote:
> > >> On Thu, 22 Apr 2010 19:31:06 +0900
> > >> KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > >>
> > >> > On Thu, 22 Apr 2010 19:13:12 +0900
> > >> > Minchan Kim <minchan.kim@xxxxxxxxx> wrote:
> > >> >
> > >> > > On Thu, Apr 22, 2010 at 6:46 PM, KAMEZAWA Hiroyuki
> > >> > > <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:
> > >> > >
> > >> > > > Hmm.. in my test, the case was:
> > >> > > >
> > >> > > > Before try_to_unmap:
> > >> > > >   mapcount=1, SwapCache, remap_swapcache=1
> > >> > > > After remap:
> > >> > > >   mapcount=0, SwapCache, rc=0
> > >> > > >
> > >> > > > So I think there may be some race in rmap_walk() and vma handling or
> > >> > > > anon_vma handling; the migration_entry isn't found by rmap_walk().
> > >> > > >
> > >> > > > Hmm.. it seems this kind of patch will be required for debugging.
> > >> > >
> > >>
> > >> Ok, here is my patch for a _fix_. But still testing...
> > >> It has been running well for at least 30 minutes, whereas before I could
> > >> see the bug within 10 minutes. But this patch is too naive; please think
> > >> about a better fix.
> > >>
> > >> ==
> > >> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> > >>
> > >> At vma_adjust(), the vma's start address and pgoff are updated under the
> > >> write lock of mmap_sem. This means the update of the vma's rmap
> > >> information is atomic only with respect to holders of the read lock of
> > >> mmap_sem.
> > >>
> > >> Even when it's not atomic, in the usual case try_to_unmap() etc. merely
> > >> fails to decrease the mapcount to 0 -- no problem.
> > >>
> > >> But page migration's rmap_walk() is required to find every
> > >> migration_entry in the page tables and recover the mapcount.
> > >>
> > >> So this race on the vma's address is critical. When rmap_walk() meets
> > >> the race, it will mistakenly get -EFAULT and not call rmap_one().
> > >> This patch adds a lock for the vma's rmap information.
> > >> But this is _very slow_.
> > >
> > > Ok wow. That is exceptionally well-spotted. This looks like a proper bug
> > > that compaction exposes as opposed to a bug that compaction introduces.
> > >
> > >> We need something sophisticated, a light-weight update for this..
> > >>
> > >
> > > In the event the VMA is backed by a file, the mapping's i_mmap_lock is
> > > taken for the duration of the update and is also taken elsewhere where
> > > the VMA information is read, such as rmap_walk_file().
> > >
> > > In the event the VMA is anon, vma_adjust currently takes no locks and
> > > your patch introduces a new one, but why not use the anon_vma lock here?
> > > Am I missing something that requires the new lock?
> >
> > rmap_walk_anon doesn't hold the vma's anon_vma->lock.
> > It holds page->anon_vma->lock.
> >
>
> Of course, thank you for pointing out my error. With multiple
> anon_vmas, the locking is a bit of a mess. We cannot hold spinlocks on
> two vmas in the same list at the same time without potentially causing
> a livelock.

Incidentally, I now belatedly see why Kamezawa introduced a new lock. I
assume it was to get around this mess.

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
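
[Editorial note: for readers following along, below is a minimal stand-alone
sketch of the race being described, written as user-space C rather than kernel
code. The idea is taken from the thread: vma_adjust() has to move vm_start and
vm_pgoff together, while the rmap walk recomputes a page's virtual address from
those fields (the same formula as vma_address() in mm/rmap.c) without holding
mmap_sem, so a torn view makes the address fall outside the vma and the walk
skips it. The struct name, addresses, and offsets are made up for illustration
and are not from the original mail.]

/*
 * Stand-alone, user-space sketch of the race (NOT kernel code).
 * A vma is reduced to the three fields the rmap walk's address
 * calculation depends on.  All names and numbers are illustrative.
 */
#include <stdio.h>

#define PAGE_SHIFT 12

struct vma_sketch {
	unsigned long vm_start;  /* first address covered by the vma */
	unsigned long vm_end;    /* one past the last covered address */
	unsigned long vm_pgoff;  /* page offset of vm_start within the object */
};

/* Same formula as vma_address() in mm/rmap.c: page index -> virtual address. */
static long sketch_vma_address(unsigned long page_index,
			       const struct vma_sketch *vma)
{
	unsigned long address;

	address = vma->vm_start + ((page_index - vma->vm_pgoff) << PAGE_SHIFT);
	if (address < vma->vm_start || address >= vma->vm_end)
		return -1;  /* the walk treats this as -EFAULT and skips the vma */
	return (long)address;
}

int main(void)
{
	/* A vma mapping pages 0x100..0x1ff of an anon object at 0x400000. */
	struct vma_sketch vma = {
		.vm_start = 0x400000UL,
		.vm_end   = 0x500000UL,
		.vm_pgoff = 0x100UL,
	};
	unsigned long page_index = 0x180;  /* a page mapped in the middle */

	printf("consistent view : 0x%lx\n",
	       (unsigned long)sketch_vma_address(page_index, &vma));

	/*
	 * "vma_adjust" trims the front of the vma: both vm_start and
	 * vm_pgoff must move forward by 0x80 pages.  Pretend the rmap
	 * walker runs after the first store but before the second.
	 */
	vma.vm_start = 0x480000UL;                    /* store #1 visible */
	printf("torn view       : %ld (walk skips the vma)\n",
	       sketch_vma_address(page_index, &vma)); /* reader runs here */
	vma.vm_pgoff = 0x180UL;                       /* store #2 visible */

	printf("consistent again: 0x%lx\n",
	       (unsigned long)sketch_vma_address(page_index, &vma));
	return 0;
}

In both consistent views the computed address lands inside the vma; in the torn
view it lands exactly on vm_end, the lookup gives up, and the migration entry
for that mapping is never found -- which matches the -EFAULT symptom described
in the patch text above.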