On Tue, 27 Sep 2011 17:49:00 -0700 Michel Lespinasse <walken@xxxxxxxxxx> wrote:

> Extend memory cgroup documentation to describe the optional idle page
> tracking features, and add the corresponding configuration option.
>
> Signed-off-by: Michel Lespinasse <walken@xxxxxxxxxx>

> +* idle_2_clean, idle_2_dirty_file, idle_2_dirty_swap: same definitions as
> +  above, but for pages that have been untouched for at least two scan cycles.
> +* these fields repeat up to idle_240_clean, idle_240_dirty_file and
> +  idle_240_dirty_swap, allowing one to observe idle pages over a variety
> +  of idle interval lengths. Note that the accounting is cumulative:
> +  pages counted as idle for a given interval length are also counted
> +  as idle for smaller interval lengths.

I'm sorry if you've answered this already, but why 240? And does "above"
mean we have idle_xxx_clean/dirty fields where xxx is 'seq 2 240'? Isn't
that messy? In any case, idle_1_clean etc. should be provided.

Hmm, I don't like the idea very much... IIUC, there is no kernel interface
that exposes a histogram other than load_avg[]. Is there any other such
interface, and what histogram does it provide? And why is a histogram
computed by the kernel required?

BTW, can't this information be exported via /proc/<pid>/smaps or
somewhere similar? I guess per-process data will be wanted eventually.

Hm, do you use parameters other than idle_clean for your scheduling?

Thanks,
-Kame
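For reference, here is a minimal sketch of how userspace might consume the
proposed per-cgroup idle statistics. The field layout follows the quoted
documentation hunk; the file name and the exact output format are my
assumptions, not a confirmed kernel interface. It also shows how the
cumulative accounting lets one derive a per-interval histogram by
subtracting adjacent buckets.

```python
# Sketch only: the memory.idle_page_stats file name and its exact line
# format ("idle_<cycles>_<kind> <pagecount>") are assumptions based on
# the quoted documentation hunk, not a confirmed kernel interface.

def parse_idle_stats(text):
    """Parse lines like 'idle_2_clean 500' into {(2, 'clean'): 500}."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(" ")
        if not name.startswith("idle_"):
            continue
        # e.g. 'idle_2_dirty_file' -> ('idle', '2', 'dirty_file')
        _, cycles, kind = name.split("_", 2)
        stats[(int(cycles), kind)] = int(value)
    return stats

def idle_between(stats, n, m, kind="clean"):
    """Pages idle for at least n but fewer than m scan cycles.

    Because the counters are cumulative (pages idle for a long interval
    are also counted in every shorter interval), a non-cumulative
    histogram bucket is just the difference between two counters.
    """
    return stats[(n, kind)] - stats[(m, kind)]

sample = """idle_2_clean 500
idle_2_dirty_file 40
idle_2_dirty_swap 10
idle_240_clean 120
idle_240_dirty_file 8
idle_240_dirty_swap 2"""

s = parse_idle_stats(sample)
print(idle_between(s, 2, 240))  # clean pages idle >= 2 but < 240 cycles
```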