Shorten mem_cgroup_reclaim_iter.last_dead_count from unsigned long to int:
it's assigned from an int and compared with an int, and adjacent to an
unsigned int: so there's no point to it being unsigned long, which wasted
104 bytes in every mem_cgroup_per_zone.

Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
---

Putting this one first as it should be nicely uncontroversial.
I'm assuming much too late for v3.13, so all 3 diffed against mmotm.

 mm/memcontrol.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- mmotm/mm/memcontrol.c	2014-01-10 18:25:02.236448954 -0800
+++ linux/mm/memcontrol.c	2014-01-12 22:21:10.700570471 -0800
@@ -149,7 +149,7 @@ struct mem_cgroup_reclaim_iter {
 	 * matches memcg->dead_count of the hierarchy root group.
 	 */
 	struct mem_cgroup *last_visited;
-	unsigned long last_dead_count;
+	int last_dead_count;
 
 	/* scan generation, increased every round-trip */
 	unsigned int generation;
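
For anyone wondering where the 104 comes from: a minimal userspace sketch
of the arithmetic, assuming a 64-bit (LP64) build and DEF_PRIORITY of 12
(13 per-priority iterators in mem_cgroup_per_zone).  The structs below
only mirror mem_cgroup_reclaim_iter for illustration; they are not taken
from the kernel headers.

/*
 * Illustrative only, not part of the patch: on LP64 the unsigned long
 * costs 8 bytes itself and forces 4 bytes of tail padding after
 * generation, so each iterator shrinks from 24 to 16 bytes.
 */
#include <stdio.h>

#define DEF_PRIORITY 12	/* assumed, as in the kernel of this era */

struct iter_before {
	void *last_visited;		/* 8 bytes */
	unsigned long last_dead_count;	/* 8 bytes on 64-bit */
	unsigned int generation;	/* 4 bytes + 4 bytes tail padding */
};

struct iter_after {
	void *last_visited;		/* 8 bytes */
	int last_dead_count;		/* 4 bytes */
	unsigned int generation;	/* 4 bytes, no padding needed */
};

int main(void)
{
	/* one iterator per scan priority, as in mem_cgroup_per_zone */
	size_t before = sizeof(struct iter_before) * (DEF_PRIORITY + 1);
	size_t after  = sizeof(struct iter_after)  * (DEF_PRIORITY + 1);

	/* prints 312, 208, 104 on an LP64 build */
	printf("before: %zu bytes, after: %zu bytes, saved: %zu\n",
	       before, after, before - after);
	return 0;
}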