Re: [PATCH 6/10] mm/memcg: take care over pc->mem_cgroup

On Tue, 21 Feb 2012, Konstantin Khlebnikov wrote:
> > 
> > Be particularly careful in compaction's isolate_migratepages() and
> > vmscan's lumpy handling in isolate_lru_pages(): those approach the
> > page by its physical location, and so can encounter pages which
> > would not be found by any logical lookup.  For those cases we have
> > to change __isolate_lru_page() slightly: it must leave ClearPageLRU
> > to the caller, because compaction and lumpy cannot safely interfere
> > with a page until they have first isolated it and then locked lruvec.
> 
> Yeah, this is the most complicated part.

Yes, I found myself leaving this patch until last when commenting.

And I was not entirely convinced by what I then said of move_account().
Indeed, I wondered whether it might be improved and simplified by having
it take the lruvec locks itself, in the manner that commit_charge
lrucare ends up doing.

> I found one race here, see below.

Thank you!

> > @@ -386,10 +364,24 @@ static isolate_migrate_t isolate_migrate
> >                          continue;
> > 
> >                  page_relock_lruvec(page,&lruvec);
> 
> Here there is a race with mem_cgroup_move_account(): we hold the lock
> for the old lruvec, while move_account() recharges the page and puts it
> back onto another lruvec.  Thus we see PageLRU(), but below we isolate
> the page from the wrong lruvec.

I expect you'll prove right on that, but I'm going to think about it
more later.
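
But to restate my understanding of the window, against the hunk quoted
above (helper names approximate, not the exact code):

		/* inside the isolate_migratepages() scan loop */
		page_relock_lruvec(page, &lruvec);
		/*
		 * We now hold the lock of the lruvec which pc->mem_cgroup
		 * pointed to a moment ago; but mem_cgroup_move_account()
		 * on another CPU can recharge the page right here: it
		 * switches pc->mem_cgroup, and its putback places the page
		 * on the NEW memcg's lruvec, while we still hold the OLD
		 * lruvec's lock.
		 */
		if (!PageLRU(page))
			continue;
		/*
		 * PageLRU was set again by that putback, so we carry on:
		 * the caller now does what __isolate_lru_page() used to,
		 * and pulls the page off what is now the wrong lruvec.
		 */
		ClearPageLRU(page);
		del_page_from_lru_list(lruvec, page, page_lru(page));

If that reading is right, we end up adjusting the counts of a lruvec
which the page no longer belongs to.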

> 
> In my patch-set this is fixed with __wait_lru_unlock() [ spin_unlock_wait() ]
> in mem_cgroup_move_account()
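
For anyone following along: spin_unlock_wait() spins until the lock is
observed unlocked, without acquiring it, so a holder who took the old
lruvec's lock earlier is known to have dropped it by the time it
returns.  A minimal illustration of that primitive only; the lru_lock
field name and the helper are my invention, not Konstantin's code:

/*
 * Illustration of spin_unlock_wait() semantics only, not of
 * Konstantin's __wait_lru_unlock(): the lru_lock field name and this
 * helper are made up for the example.
 */
static inline void wait_for_lruvec_holders(struct lruvec *lruvec)
{
	/* Spin until nobody holds the lock; do not take it ourselves. */
	spin_unlock_wait(&lruvec->lru_lock);
}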

Right now, after finishing this mail, I want to concentrate on getting
your series working under my swapping load.

It's possible that I screwed up rediffing it to my slightly later
base (though the only parts that appeared to need fixing up were as
expected, near update_isolated_counts and move_active_pages_to_lru);
but if so I'd expect that to show up differently.

At present, although it runs fine with cgroup_disable=memory, with memcg
enabled two machines very soon hit that BUG at include/linux/mm_inline.h:41!
when the lru_size or pages_count wraps around; a third machine hit that
BUG precisely when I stopped the test.
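
By "wraps around" I mean the unsigned per-lru counter going below zero
and so becoming huge.  Schematically the check that fires is of this
shape, though update_lru_size() is a made-up name and the exact test at
mm_inline.h:41 may well differ:

/*
 * Schematic only: update_lru_size() is a made-up name and the exact
 * check at include/linux/mm_inline.h:41 may differ.  The counter is
 * unsigned, so removing more pages than were accounted makes it wrap
 * to a huge value, and a sanity check of this kind blows up.
 */
static inline void update_lru_size(struct lruvec *lruvec,
				   enum lru_list lru, int nr_pages)
{
	VM_BUG_ON(nr_pages < 0 &&
		  lruvec->pages_count[lru] < (unsigned long)-nr_pages);
	lruvec->pages_count[lru] += nr_pages;
}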

In all cases it's in release_pages from free_pages_and_swap_cache from
tlb_flush_mmu from tlb_finish_mmu from exit_mmap from mmput from exit_mm
from do_exit (but different processes exiting).

Hugh
