memcg: save 20% of per-page memcg memory overhead

This patch series removes the direct page pointer from struct
page_cgroup, which saves 20% of per-page memcg memory overhead (Fedora
and Ubuntu enable memcg by default, and openSUSE apparently does too).

The node id or section number is encoded in the remaining free bits of
pc->flags, which allows the corresponding page to be calculated without
the extra pointer.
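
To illustrate the idea, here is a minimal userspace sketch, not the
actual patch: the number of spare flag bits, the per-node arrays, and
the helper names other than lookup_cgroup_page() are made up for
illustration.  The page_cgroup's offset within its node's array,
combined with the node id recovered from the flag bits, is enough to
find the page again.

/*
 * Sketch only: PC_FLAG_BITS, MAX_NODES, PAGES_PER_NODE and the helper
 * names (except lookup_cgroup_page) are illustrative, not kernel code.
 */
#include <assert.h>
#include <stdio.h>

#define PC_FLAG_BITS	16		/* low bits reserved for status flags */
#define MAX_NODES	4
#define PAGES_PER_NODE	1024

struct page { int dummy; };

/* Before: struct page_cgroup carried a direct back-pointer to its page. */
struct page_cgroup {
	unsigned long flags;		/* status flags + node id in high bits */
	/* struct page *page;	   <-- removed, saving one pointer per page */
};

/* Fake per-node memory map and matching per-node page_cgroup arrays. */
static struct page        node_mem_map[MAX_NODES][PAGES_PER_NODE];
static struct page_cgroup node_page_cgroup[MAX_NODES][PAGES_PER_NODE];

static void set_pc_node(struct page_cgroup *pc, unsigned long nid)
{
	pc->flags |= nid << PC_FLAG_BITS;
}

static unsigned long pc_node(struct page_cgroup *pc)
{
	return pc->flags >> PC_FLAG_BITS;
}

/*
 * Recover the page from the page_cgroup: the node id comes from the
 * spare flag bits, the offset from the pc's position in its node array.
 */
static struct page *lookup_cgroup_page(struct page_cgroup *pc)
{
	unsigned long nid = pc_node(pc);
	unsigned long offset = pc - node_page_cgroup[nid];

	return &node_mem_map[nid][offset];
}

int main(void)
{
	unsigned long nid = 2, offset = 137;
	struct page_cgroup *pc = &node_page_cgroup[nid][offset];

	set_pc_node(pc, nid);
	assert(lookup_cgroup_page(pc) == &node_mem_map[nid][offset]);
	printf("page recovered without a stored pointer\n");
	return 0;
}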

I ran what I think is a worst-case microbenchmark: cat a large sparse
file to /dev/null, so that walking the LRU list on behalf of
per-cgroup reclaim and looking up pages from page_cgroups happens
constantly and at a high rate.  It made no measurable difference.  A
profile attributed a 0.11% share to the new lookup_cgroup_page()
function in this benchmark.

	Hannes
