On Wed, Jun 18, 2014 at 04:40:45PM -0400, Johannes Weiner wrote:
...
> diff --git a/mm/swap.c b/mm/swap.c
> index a98f48626359..3074210f245d 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -62,6 +62,7 @@ static void __page_cache_release(struct page *page)
>  		del_page_from_lru_list(page, lruvec, page_off_lru(page));
>  		spin_unlock_irqrestore(&zone->lru_lock, flags);
>  	}
> +	mem_cgroup_uncharge(page);
>  }
>  
>  static void __put_single_page(struct page *page)

This seems to cause list corruption in hstate->hugepage_activelist when
freeing a hugetlbfs page. For hugetlbfs, we uncharge in free_huge_page(),
which is called after __page_cache_release(), so I think we don't have to
uncharge here.

In my testing, moving mem_cgroup_uncharge() inside the if (PageLRU) block
fixed the problem, so if that works for you, could you fold that change
into your patch?
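
For reference, here is a sketch of the placement I have in mind. Only the
position of mem_cgroup_uncharge() is the point; the lines above the quoted
hunk are reconstructed from the surrounding mm/swap.c of that time, so
please treat this as illustrative rather than an exact diff against your
tree:

	static void __page_cache_release(struct page *page)
	{
		if (PageLRU(page)) {
			struct zone *zone = page_zone(page);
			struct lruvec *lruvec;
			unsigned long flags;

			spin_lock_irqsave(&zone->lru_lock, flags);
			lruvec = mem_cgroup_page_lruvec(page, zone);
			VM_BUG_ON_PAGE(!PageLRU(page), page);
			__ClearPageLRU(page);
			del_page_from_lru_list(page, lruvec, page_off_lru(page));
			spin_unlock_irqrestore(&zone->lru_lock, flags);
			/*
			 * Only uncharge pages that were actually on an LRU
			 * list.  Hugetlbfs pages never are, so they keep
			 * being uncharged from free_huge_page() as before.
			 */
			mem_cgroup_uncharge(page);
		}
	}

Thanks,
Naoya Horiguchi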