On Sat 03-08-13 17:25:01, Sha Zhengju wrote:
> On Thu, Aug 1, 2013 at 10:53 PM, Michal Hocko <mhocko@xxxxxxx> wrote:
> > On Thu 01-08-13 19:54:11, Sha Zhengju wrote:
> >> From: Sha Zhengju <handai.szj@xxxxxxxxxx>
> >>
> >> Similar to dirty pages, we add per cgroup writeback pages accounting. The lock
> >> rule is still:
> >>         mem_cgroup_begin_update_page_stat()
> >>         modify page WRITEBACK stat
> >>         mem_cgroup_update_page_stat()
> >>         mem_cgroup_end_update_page_stat()
> >>
> >> There are two writeback interfaces to modify: test_{clear/set}_page_writeback().
> >> Lock order:
> >>         --> memcg->move_lock
> >>         --> mapping->tree_lock
> >>
> >> Signed-off-by: Sha Zhengju <handai.szj@xxxxxxxxxx>
> >
> > Looks good to me. I would suggest moving this patch up the stack so that
> > it might get merged earlier, as it is simpler than the dirty pages
> > accounting. Unless you insist on having the full series merged at once.
>
> I think the following three patches can be merged earlier:
> 1/8 memcg: remove MEMCG_NR_FILE_MAPPED
> 3/8 memcg: check for proper lock held in mem_cgroup_update_page_stat
> 5/8 memcg: add per cgroup writeback pages accounting
>
> Do I need to resend them again for you, or are they enough?

This is a question for Andrew. I would go with them as they are.

> One more word: since dirty accounting is essential to future memcg
> dirty page throttling and is no longer an optional feature, I wonder
> whether we can merge the following two as well and leave the overhead
> optimization to a separate series. :p

I wouldn't hurry it. We need numbers from serious testing to see the
overhead. It is still just a small step towards dirty throttling.

> 4/5 memcg: add per cgroup dirty pages accounting
> 8/8 memcg: Document cgroup dirty/writeback memory statistics
>
> The 2/8 ceph one still needs more improvement; I'll separate it out in
> the next version.
>
> >
> > Acked-by: Michal Hocko <mhocko@xxxxxxx>
>
> Thank you.

[...]
-- 
Michal Hocko
SUSE Labs
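
[Editorial note: for readers following the thread, here is a minimal sketch of
how the locking rule quoted above could wrap test_set_page_writeback(). It is
not taken from the patch itself; the stat index name MEM_CGROUP_STAT_WRITEBACK
is assumed from the proposed series, and bdi/per-zone accounting is omitted
for brevity. Taking mem_cgroup_begin_update_page_stat() first preserves the
documented lock order: memcg->move_lock before mapping->tree_lock.]

#include <linux/memcontrol.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/radix-tree.h>

int test_set_page_writeback(struct page *page)
{
	struct address_space *mapping = page_mapping(page);
	bool memcg_locked;
	unsigned long memcg_flags;
	int ret;

	/* memcg->move_lock (if taken) nests outside mapping->tree_lock */
	mem_cgroup_begin_update_page_stat(page, &memcg_locked, &memcg_flags);

	if (mapping) {
		unsigned long flags;

		spin_lock_irqsave(&mapping->tree_lock, flags);
		ret = TestSetPageWriteback(page);
		if (!ret)
			radix_tree_tag_set(&mapping->page_tree,
					   page_index(page),
					   PAGECACHE_TAG_WRITEBACK);
		spin_unlock_irqrestore(&mapping->tree_lock, flags);
	} else {
		ret = TestSetPageWriteback(page);
	}

	/* account only when the page actually transitions to writeback */
	if (!ret)
		mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_WRITEBACK, 1);

	mem_cgroup_end_update_page_stat(page, &memcg_locked, &memcg_flags);
	return ret;
}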