Re: [PATCH v2 4/6] fs: report per-mount io stats

On Mon, Feb 28, 2022 at 6:31 PM Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
>
> On Mon, 28 Feb 2022 at 17:19, Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> >
> > On Mon, Feb 28, 2022 at 5:06 PM Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
> > >
> > > On Mon, 28 Feb 2022 at 12:39, Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> > > >
> > > > Show optional collected per-mount io stats in /proc/<pid>/mountstats
> > > > for filesystems that do not implement their own show_stats() method
> > > > and opted-in to generic per-mount stats with FS_MOUNT_STATS flag.
> > >
> > > This would allow some filesystems to report per-mount I/O stats, while
> > > leaving CIFS and NFS reporting a different set of per-sb stats.  This
> > > doesn't sound very clean.
> > >
> > > There was an effort to create saner and more efficient interfaces for
> > > per-mount info.  IMO this should be part of that effort instead of
> > > overloading the old interface.
> > >
> >
> > That's fair, but actually, I don't have much need for per-mount I/O stats
> > in the overlayfs/fuse use cases, so I could amend the patches to collect
> > and show per-sb I/O stats.
> >
> > Then the generic show_stats() will not be "overloading the old interface".
> > Instead, it will create a common implementation shared among different
> > filesystems, using an existing vfs interface as it was intended.
> >
> > Would you be willing to accept adding per-sb I/O stats to overlayfs
> > and/or fuse via /proc/<pid>/mountstats?
>
> Yes, that would certainly be more sane. But I'm also wondering if
> there could be some commonality with the stats provided by NFS...
>

hmm, maybe, but look:

device genesis:/shares/media mounted on /mnt/nfs with fstype nfs4 statvers=1.1
opts: rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.122.1,local_lock=none
age: 38
impl_id: name='',domain='',date='0,0'
caps: caps=0x3ffbffff,wtmult=512,dtsize=32768,bsize=0,namlen=255
nfsv4: bm0=0xfdffbfff,bm1=0xf9be3e,bm2=0x68800,acl=0x3,sessions,pnfs=not configured,lease_time=90,lease_expired=0
sec: flavor=1,pseudoflavor=1
events: 0 0 0 0 0 2 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
bytes: 0 0 0 0 0 0 0 0
RPC iostats version: 1.1  p/v: 100003/4 (nfs)
xprt: tcp 0 0 2 0 38 27 27 0 27 0 2 0 0
per-op statistics
       NULL: 1 1 0 44 24 0 0 0 0
       READ: 0 0 0 0 0 0 0 0 0
       WRITE: 0 0 0 0 0 0 0 0 0
       COMMIT: 0 0 0 0 0 0 0 0 0
       OPEN: 0 0 0 0 0 0 0 0 0
       OPEN_CONFIRM: 0 0 0 0 0 0 0 0 0
       OPEN_NOATTR: 0 0 0 0 0 0 0 0 0
       OPEN_DOWNGRADE: 0 0 0 0 0 0 0 0 0
       CLOSE: 0 0 0 0 0 0 0 0 0
       ...

Those stats are pretty specific to NFS.
Most of them are RPC protocol stats and do not include cached I/O stats,
so in fact the generic collected I/O stats could be added to
nfs_show_stats() without breaking the <tag>: <value> format.
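
To make it concrete, here is a minimal sketch of what such a generic
->show_stats() could look like; struct sb_iostats, the s_iostats
super_block field and the counter names are my assumptions for
illustration, not the actual patch:

#include <linux/fs.h>
#include <linux/seq_file.h>

/* Hypothetical per-sb counters, mirroring the /proc/<pid>/io fields */
struct sb_iostats {
	u64 rchar, wchar;
	u64 syscr, syscw;
	u64 read_bytes, write_bytes;
};

/*
 * Hypothetical generic ->show_stats() for filesystems that opt in:
 * emit the per-sb counters into /proc/<pid>/mountstats as
 * "<tag>: <value>" lines.
 */
static int generic_show_stats(struct seq_file *m, struct dentry *root)
{
	struct sb_iostats *io = root->d_sb->s_iostats; /* hypothetical field */

	seq_printf(m, "rchar: %llu\n", io->rchar);
	seq_printf(m, "wchar: %llu\n", io->wchar);
	seq_printf(m, "syscr: %llu\n", io->syscr);
	seq_printf(m, "syscw: %llu\n", io->syscw);
	seq_printf(m, "read_bytes: %llu\n", io->read_bytes);
	seq_printf(m, "write_bytes: %llu\n", io->write_bytes);
	return 0;
}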

I used the same output format as /proc/<pid>/io for a reason:
the iotop utility parses /proc/<pid>/io to display I/O per task, and
it also displays total I/O stats for block devices.
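
For reference, this is the /proc/<pid>/io layout being mimicked
(the values here are just an example):

rchar: 3002045
wchar: 281772
syscr: 1220
syscw: 309
read_bytes: 45056
write_bytes: 28672
cancelled_write_bytes: 0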

I am envisioning an extended version of iotop (-m ?) that also
displays total I/O stats per mount/sb, when available.
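
As a rough userspace sketch of the kind of parsing such an iotop
extension would need to do (assuming the per-sb counters appear as
"<tag>: <value>" lines under each device header in
/proc/self/mountstats):

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/self/mountstats", "r");
	char line[1024], mnt[512] = "";
	unsigned long long val;

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		/* "device X mounted on Y with fstype Z" section header */
		if (sscanf(line, "device %*s mounted on %511s", mnt) == 1)
			continue;
		if (sscanf(line, " read_bytes: %llu", &val) == 1)
			printf("%-30s read_bytes  %llu\n", mnt, val);
		else if (sscanf(line, " write_bytes: %llu", &val) == 1)
			printf("%-30s write_bytes %llu\n", mnt, val);
	}
	fclose(f);
	return 0;
}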

Thanks,
Amir.



