Re: [PATCH 3/4] writeback: nr_dirtied and nr_entered_writeback in /proc/vmstat

On Sat, Aug 21, 2010 at 07:51:38AM +0800, Michael Rubin wrote:
> On Fri, Aug 20, 2010 at 3:08 AM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
> > How about the names nr_dirty_accumulated and nr_writeback_accumulated?
> > It seems more consistent, for both the interface and code (see below).
> > I'm not really sure though.
> 
> Those names don't seem right to me.
> I admit I like "nr_dirtied" and "nr_cleaned"; those seem the most
> easily understood. These numbers also get very big pretty fast, so I
> don't think their meaning is hard to infer.

That's fine. I like "nr_cleaned".
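
For illustration, here is a minimal userspace sketch of how the counters
could be read once they show up in /proc/vmstat. The names "nr_dirtied"
and "nr_cleaned" below are taken from the naming discussion above and are
an assumption, not a settled interface:

/* vmstat_wb.c - print the proposed writeback counters from /proc/vmstat.
 * "nr_dirtied"/"nr_cleaned" are assumed names from this thread, not a
 * committed kernel ABI.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char key[64];
	unsigned long long val;

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	/* /proc/vmstat is simple "name value" pairs, one per line */
	while (fscanf(f, "%63s %llu", key, &val) == 2) {
		if (!strcmp(key, "nr_dirtied") || !strcmp(key, "nr_cleaned"))
			printf("%s %llu\n", key, val);
	}
	fclose(f);
	return 0;
}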

> >> In order to track the "cleaned" and "dirtied" counts we added two
> >> vm_stat_items.  Per-memory-node stats have also been added, so we can
> >> see per-node granularity:
> >>
> >>    # cat /sys/devices/system/node/node20/writebackstat
> >>    Node 20 pages_writeback: 0 times
> >>    Node 20 pages_dirtied: 0 times
> >
> > I'd prefer the name "vmstat" over "writebackstat", and propose to
> > migrate items from /proc/zoneinfo over time. zoneinfo is a terrible
> > interface for scripting.
> 
> I like vmstat also. I can do that.

Thank you.

> > Also, are there meaningful usage of per-node writeback stats?
> 
> For us yes. We use fake numa nodes to implement cgroup memory isolation.
> This allows us to see what the writeback behaviour is like per cgroup.

That's certainly convenient for you, for now. But it's a special use case.

I wonder if you'll still stick to the fake NUMA scenario two years from
now -- when memcg grows powerful enough. What do we do then? "Hey,
let's rip out these counters, their major consumer has dumped them..."

For per-job nr_dirtied, I suspect the per-process write_bytes and
cancelled_write_bytes in /proc/self/io will serve you well.
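
As a concrete sketch (reading the existing /proc/<pid>/io interface, which
needs CONFIG_TASK_IO_ACCOUNTING; not part of this patch), the two fields
can be pulled out like this:

/* taskio.c - print write_bytes and cancelled_write_bytes for a task.
 * Usage: ./taskio [pid]   (defaults to "self")
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char path[64], key[64];
	unsigned long long val;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%s/io",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	/* each line is "key: value" */
	while (fscanf(f, "%63[^:]: %llu\n", key, &val) == 2) {
		if (!strcmp(key, "write_bytes") ||
		    !strcmp(key, "cancelled_write_bytes"))
			printf("%s: %llu\n", key, val);
	}
	fclose(f);
	return 0;
}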

For per-job nr_cleaned, I suspect the per-zone nr_writeback will be
sufficient for debug purposes (despite being a bit different).
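
And for the per-zone nr_writeback route, a sketch of scraping it out of
/proc/zoneinfo (whose layout, as noted above, is not script-friendly):

/* zoneinfo_wb.c - print nr_writeback for each zone from /proc/zoneinfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/zoneinfo", "r");
	char line[256], zone[128] = "?";
	unsigned long long val;

	if (!f) {
		perror("/proc/zoneinfo");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "Node", 4)) {
			/* e.g. "Node 0, zone   Normal" starts a new section */
			line[strcspn(line, "\n")] = '\0';
			snprintf(zone, sizeof(zone), "%s", line);
		} else if (sscanf(line, " nr_writeback %llu", &val) == 1) {
			printf("%s: nr_writeback %llu\n", zone, val);
		}
	}
	fclose(f);
	return 0;
}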

> > The numbers are naturally per-bdi ones instead. But if we plan to
> > expose them for each bdi, this patch will need to be implemented
> > vastly differently.
> 
> Currently I have no plans to do that.

Peter? :)

Thanks,
Fengguang