This series is on top of linux-next +
memcg-add-mem_cgroup_replace_page_cache-to-fix-lru-issue.patch.

The first purpose of this series is to reduce the overhead of
mem_cgroup_add/del_lru, which currently use several atomic ops.
After this patch, the lru handling routine will be:

==
struct lruvec *mem_cgroup_lru_add_list(struct zone *zone, struct page *page,
				       enum lru_list lru)
{
	struct mem_cgroup_per_zone *mz;
	struct mem_cgroup *memcg;
	struct page_cgroup *pc;

	if (mem_cgroup_disabled())
		return &zone->lruvec;

	pc = lookup_page_cgroup(page);
	memcg = pc->mem_cgroup;
	VM_BUG_ON(!memcg);
	mz = page_cgroup_zoneinfo(memcg, page);
	/* compound_order() is stabilized through lru_lock */
	MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
	return &mz->lruvec;
}
==

This is simple and uses no atomic ops. Because of Johannes's work in
linux-next, this can be achieved in a very straightforward way.
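For symmetry, the del path just decrements the same per-zone counter
under lru_lock. A minimal sketch of what that counterpart might look
like, built only from the names already shown above (an illustration,
not the actual patch):

==
/*
 * Sketch only, not taken from the patch: the del path mirrors the
 * add path, decrementing the per-zone stat while lru_lock keeps
 * compound_order() stable. All helpers (lookup_page_cgroup,
 * page_cgroup_zoneinfo, MEM_CGROUP_ZSTAT) are the ones used in the
 * add path above.
 */
void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
{
	struct mem_cgroup_per_zone *mz;
	struct mem_cgroup *memcg;
	struct page_cgroup *pc;

	if (mem_cgroup_disabled())
		return;

	pc = lookup_page_cgroup(page);
	memcg = pc->mem_cgroup;
	VM_BUG_ON(!memcg);
	mz = page_cgroup_zoneinfo(memcg, page);
	MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
}
==

Thanks,
-Kame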