On Fri, Aug 8, 2014 at 1:32 PM, Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx> wrote:
> On Fri, Aug 08, 2014 at 01:06:15AM -0400, Oleg Drokin wrote:
>>
>> On Aug 8, 2014, at 12:42 AM, Greg Kroah-Hartman wrote:
>>
>> > On Fri, Aug 08, 2014 at 12:03:20AM -0400, Oleg Drokin wrote:
>> >> Hello!
>> >>
>> >> On Aug 7, 2014, at 11:49 PM, Greg Kroah-Hartman wrote:
>> >>>>
>> >>>> This is not a critical bug, and in the worst case the code here may
>> >>>> cause a missed statistics counter increase.
>> >>>> That is why I think it is not worth backporting the patch at all.
>> >>> You are right, and if this is just for some random "statistics" file,
>> >>> can we just delete the whole function?
>> >>
>> >> I hope not!
>> >> This is used all around the client to tally up counts of the various operations executed.
>> > Why would you do that? Why would they care?
>>
>> We would do that to provide information on the client operations performed.
>> They would care because they are interested in what particular clients might be doing.
>>
>> >> The statistics are then used by various userspace monitoring tools.
>> > Why not use the in-kernel monitoring tools instead of creating your own?
>> > What does userspace do with that information?
>>
>> We don't really control the userspace tools. People write tools to suit their needs:
>> to monitor loads, to see odd things the end users are doing, or possibly even for
>> debugging.
>> Correlating these numbers with what the server sees also proves useful at times
>> (write combining, for example).
>>
>> Here's a sample of output from a recently mounted client that I poked at a bit
>> (the lines starting with # are my comments):
>>
>> # cat /proc/fs/lustre/llite/lustre-ffff88008dde27f0/stats
>> snapshot_time       1407473168.466102 secs.usecs
>> read_bytes          1 samples [bytes] 0 0 0
>> write_bytes         4 samples [bytes] 2 7 19
>> osc_write           4 samples [bytes] 2 7 19
>> # The [bytes] counters show you the minimum and maximum sizes of the writes seen,
>> # and the total number of bytes read/written.
>> # Lustre (and many other network filesystems) is very sensitive to small I/O,
>> # especially reads, so it's good to know if you have a lot of it.
>> open                6 samples [regs]
>> # The [regs] type just shows you how many operations of the given type were
>> # performed since the last statistics reset.
>> # Frequently that lets people guess where high load comes from on a particular
>> # client when it's otherwise not obvious because not a lot of CPU is used.
>> # Some operations are heavier than others, too.
>> close               6 samples [regs]
>> readdir             4 samples [regs]
>> setattr             1 samples [regs]
>> truncate            4 samples [regs]
>> getattr             7 samples [regs]
>> create              1 samples [regs]
>> alloc_inode         1 samples [regs]
>> getxattr            8 samples [regs]
>> inode_permission    28 samples [regs]
>>
>> As more operation types are seen, the list grows.
>> There are also specific stats for readahead (data and metadata) so that interested
>> people can make informed decisions on tuning there, should they be unsatisfied
>> with the default settings.
>>
>> I am not sure there's a similar mechanism in the kernel already that would allow
>> us to get this sort of data easily, all in one place?
>
> perf should show you this; if not, please add the functionality there.
> A filesystem is not the place to have performance monitoring code; this
> needs to be removed before it can be moved out of staging. Please work
> with the trace/perf developers on this if there is something lacking
> there.
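For illustration, here is a minimal sketch of what a userspace consumer of this file
might look like. It assumes only the line layout visible in Oleg's sample above
([bytes] counters carry min/max/sum after the sample count, [regs] counters carry
just the count); the glob pattern is generalized from his sample path and is not
taken from any Lustre documentation:

    #!/usr/bin/env python3
    # Sketch: parse a Lustre llite stats file as shown in the sample
    # output above. Layout assumptions: "[bytes]" lines carry min/max/sum
    # after the sample count; "[regs]" lines carry only the sample count.
    import glob

    def parse_stats(path):
        counters = {}
        with open(path) as f:
            for line in f:
                fields = line.split()
                # Skip snapshot_time and anything not in "N samples [unit]" form.
                if len(fields) < 4 or fields[2] != 'samples':
                    continue
                entry = {'samples': int(fields[1]), 'unit': fields[3]}
                if fields[3] == '[bytes]' and len(fields) >= 7:
                    entry['min'] = int(fields[4])
                    entry['max'] = int(fields[5])
                    entry['sum'] = int(fields[6])
                counters[fields[0]] = entry
        return counters

    for path in glob.glob('/proc/fs/lustre/llite/*/stats'):
        print(path)
        for name, entry in sorted(parse_stats(path).items()):
            print('  %-18s %r' % (name, entry))

Run against the sample above, this would report open/close/readdir counts and the
min/max/total bytes for read_bytes and write_bytes, per mounted client.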
nfs and nfsd track rpc op statistics and export them via /proc/self/mountstats, e.g.:

device 192.168.214.141:/d9691564-432b-11e2-8e5d-8b7acf882df3 mounted on /mnt/pnfsd with fstype nfs4 statvers=1.1
        opts:   rw,vers=4.1,rsize=262144,wsize=262144,namlen=255,acregmin=3,acregmax=60,acdirmin=30,acdirmax=60,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.214.128,local_lock=none
        age:    15426
        impl_id:        name='',domain='',date='0,0'
        caps:   caps=0x3ffff,wtmult=512,dtsize=32768,bsize=0,namlen=255
        nfsv4:  bm0=0xfdffbfff,bm1=0x40f9be3e,bm2=0x803,acl=0x3,sessions
        sec:    flavor=1,pseudoflavor=1
        events: 82474 12159573 9527 109202 7574 10119 16289648 3634869 10938 108551 2084272 182492 13646 7700 52594 60832 8829 48985 0 6564 1459053 66 0 0 0 289315 376376
        bytes:  11526471786 9942294760 3280371712 3278274560 14578366831 11710126268 2782400 2084272
        RPC iostats version: 1.0  p/v: 100003/4 (nfs)
        xprt:   tcp 859 0 2 0 12 408031 407999 29 2169734 0 32 2496 310753
        per-op statistics
                NULL: 0 0 0 0 0 0 0 0
                READ: 289327 289326 0 35877640 14615129136 63609 1800007 1893161
               WRITE: 376352 376360 0 11759732976 51184768 6698277 2246445 8978314
              COMMIT: 3076 3076 0 381424 393728 1827 15450 17329
                OPEN: 24926 24926 0 7329252 8968144 1373312 1794621 3169378
<snip...>

Why can't Lustre do similar things?

Thanks,
Tao

> thanks,
>
> greg k-h
>
>>
>> Bye,
>> Oleg
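For comparison with the Lustre sketch above, a minimal sketch of pulling the
per-op block Tao quotes out of /proc/self/mountstats. The interpretation of the
eight per-op fields (ops, transmissions, major timeouts, bytes sent, bytes
received, cumulative queue/RTT/execute times in ms) follows the nfs iostats
format; treat the exact field meanings here as an assumption, the parser only
needs the "OP: n n n ..." shape:

    #!/usr/bin/env python3
    # Sketch: collect the "per-op statistics" blocks from
    # /proc/self/mountstats, one dict of op -> counter list per mount.
    def mountstats(path='/proc/self/mountstats'):
        mounts, device, in_ops = {}, None, False
        with open(path) as f:
            for raw in f:
                line = raw.strip()
                if line.startswith('device '):
                    # A new mount entry begins; per-op parsing stops here.
                    device, in_ops = line.split()[1], False
                    mounts[device] = {}
                elif line == 'per-op statistics':
                    in_ops = True
                elif in_ops and ':' in line:
                    op, values = line.split(':', 1)
                    mounts[device][op.strip()] = [int(v) for v in values.split()]
        return mounts

    if __name__ == '__main__':
        for device, ops in mountstats().items():
            # First per-op field is the operation count.
            busy = {op: v[0] for op, v in ops.items() if v and v[0]}
            if busy:
                print(device, busy)

Against Tao's sample this would report READ, WRITE, COMMIT and OPEN counts for
the pnfs mount, which is essentially the per-client operation tally the Lustre
stats file provides.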