Re: [PATCH RFC 00/15] mm: memory book keeping and lru_lock splitting

KAMEZAWA Hiroyuki wrote:
On Thu, 16 Feb 2012 02:57:04 +0400
Konstantin Khlebnikov <khlebnikov@xxxxxxxxxx> wrote:

There should be no logic changes in this patchset; it is only tossing bits around.
[ This patchset is on top of some memcg cleanup/rework patches,
   which I sent to linux-mm@ today/yesterday ]

Most things in this patchset are self-descriptive, so here is a brief plan:


AFAIK, Hugh Dickins said he has a per-zone per-lru-lock series and is testing it.
So, please CC him and Johannes, at least.


Ok


* Transmute struct lruvec into struct book. Like a real book, this struct will
   store a set of pages for one zone. It will be the working unit for the reclaimer code.
[ If memcg is disabled in the config, there will be only one book, embedded into struct zone ]
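For illustration, a minimal sketch of what this could look like (field names
here are my own guesses, not necessarily what the patches use):

struct book {
	struct list_head	pages_lru[NR_LRU_LISTS];	/* one list per lru type */
	struct zone		*zone;				/* owning zone */
};

struct zone {
	/* ... existing fields ... */
	struct book		book;	/* the single book for the !memcg case */
};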


Why do you need to add a new structure rather than enhancing lruvec?
"book" means a binder of pages?


I responded to this in my reply to Hugh Dickins.


* move page-lru counters to struct book
[ this adds extra overhead in add_page_to_lru_list()/del_page_from_lru_list() for
   the non-memcg case, but I believe it will be invisible: only one non-atomic add/sub
   in the same cacheline as the lru list ]
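A rough sketch of the kind of add/sub meant here, assuming a pages_count[]
array in struct book next to the lists (illustrative, not the exact helper
from the patches):

static inline void add_page_to_lru_list(struct zone *zone,
					struct page *page, enum lru_list lru)
{
	struct book *book = page_book(page);

	list_add(&page->lru, &book->pages_lru[lru]);
	/* non-atomic: serialized by the same lru_lock as the list itself */
	book->pages_count[lru] += hpage_nr_pages(page);
}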


This seems straightforward.

* unify inactive_list_is_low_global() and clean up the reclaimer code
* replace struct mem_cgroup_zone with a single pointer to struct book

Hm, ok.

* optimize page-to-book translations: move them up the call stack and
   replace some struct zone arguments with a struct book pointer.
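As a hypothetical before/after of such a signature change (function and field
names here are illustrative only):

/* before: every helper re-derives the book from each page */
static void del_from_lru_old(struct zone *zone, struct page *page,
			     enum lru_list lru)
{
	struct book *book = page_book(page);	/* repeated translation */

	list_del(&page->lru);
	book->pages_count[lru] -= hpage_nr_pages(page);
}

/* after: the caller translates once, higher in the call stack */
static void del_from_lru_new(struct book *book, struct page *page,
			     enum lru_list lru)
{
	list_del(&page->lru);
	book->pages_count[lru] -= hpage_nr_pages(page);
}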


a page->book translator from patch 2/15:

+struct book *page_book(struct page *page)
+{
+	struct mem_cgroup_per_zone *mz;
+	struct page_cgroup *pc;
+
+	if (mem_cgroup_disabled())
+		return &page_zone(page)->book;
+
+	pc = lookup_page_cgroup(page);
+	if (!PageCgroupUsed(pc))
+		return &page_zone(page)->book;
+	/* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
+	smp_rmb();
+	mz = mem_cgroup_zoneinfo(pc->mem_cgroup,
+			page_to_nid(page), page_zonenum(page));
+	return &mz->book;
+}

What happens when pc->mem_cgroup is rewritten by move_account()?
Where is the guard for lockless access here?

Initially this is supposed to be protected by lru_lock; in the final patch it is protected by RCU.
After the final patch, all page_book() calls are collected in the [__re]lock_page_book[_irq]() functions.
They pick some book reference, lock its lru, and recheck the page -> book reference in a loop until they succeed.
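In other words, roughly this pattern (my paraphrase of the idea, assuming each
book carries its own lru_lock once the lock is split):

struct book *lock_page_book(struct page *page)
{
	struct book *book;

	rcu_read_lock();
	for (;;) {
		book = page_book(page);
		spin_lock(&book->lru_lock);
		/* the page may have been moved to another book meanwhile */
		if (likely(book == page_book(page)))
			break;
		spin_unlock(&book->lru_lock);
	}
	rcu_read_unlock();
	return book;
}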

Currently I have found only one potential problem: free_mem_cgroup_per_zone_info() in "mm: memory bookkeeping core"
maybe should call spin_unlock_wait(&zone->lru_lock), because somebody can pick up page_book(pfn_to_page(pfn))
and try to isolate this page. But I am not sure how this is possible. In the final patch it is completely fixed with RCU.
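I.e. something like this in the teardown path (a sketch only; mz/zone naming
as in the existing memcg code):

	/*
	 * Make sure nobody who picked up page_book() and is now spinning
	 * on / holding zone->lru_lock is still inside the critical
	 * section before the per-zone info behind the book is freed.
	 */
	spin_unlock_wait(&zone->lru_lock);
	kfree(mz);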


Thanks,
-Kame




[Index of Archives]     [Linux ARM Kernel]     [Linux ARM]     [Linux Omap]     [Fedora ARM]     [IETF Annouce]     [Bugtraq]     [Linux]     [Linux OMAP]     [Linux MIPS]     [ECOS]     [Asterisk Internet PBX]     [Linux API]