Re: [PATCH] staging/lustre: use rcu_dereference to access rcu protected current->real_parent field

On Aug 8, 2014, at 12:42 AM, Greg Kroah-Hartman wrote:

> On Fri, Aug 08, 2014 at 12:03:20AM -0400, Oleg Drokin wrote:
>> Hello!
>> 
>> On Aug 7, 2014, at 11:49 PM, Greg Kroah-Hartman wrote:
>>>> 
>>>> This is not a critical bug; in the worst case the code here may
>>>> cause a missed statistics counter increment.
>>>> This is why I think it is not worth backporting the patch at all.
>>> You are right, and if this is just for some random "statistics" file,
>>> can we just delete the whole function?
>> 
>> I hope not!
>> This is used throughout the client to tally up counts of the various operations executed.
> Why would you do that?  Why would they care?

We would do that to provide information about the operations the client performs.
They would care because they are interested in what particular clients might be doing.

>> The statistic is then used by various userspace monitoring tools.
> Why not use the in-kernel monitoring tools instead of creating your own?
> What does userspace do with that information?

We don't really control the userspace tools. People write tools to suit their needs:
to monitor loads, to see odd things the end users are doing, or even for debugging.
Correlating these numbers with what the server sees also proves useful at times
(write combining, for example).

Here's sample output from a recently mounted client that I poked at a bit (the lines starting with # are my comments):
# cat /proc/fs/lustre/llite/lustre-ffff88008dde27f0/stats
snapshot_time             1407473168.466102 secs.usecs
read_bytes                1 samples [bytes] 0 0 0
write_bytes               4 samples [bytes] 2 7 19
osc_write                 4 samples [bytes] 2 7 19
# The [bytes] counters show you the minimum and maximum sizes of the reads/writes seen and the total number
# of bytes read or written (a small sketch of this bookkeeping follows the listing).
# Lustre (and many other network filesystems) is very sensitive to small I/O, especially reads, so it's good
# to know if you have a lot of it.
open                      6 samples [regs]
# The "regs" type just shows you how many of given type operations were performed since last statistic reset.
# Frequently that allows people to guess where does high load come from on a particular client when
# it's otherwise not obvious because not a lot of cpu is used.
# Some operations are heavier than others too.
close                     6 samples [regs]
readdir                   4 samples [regs]
setattr                   1 samples [regs]
truncate                  4 samples [regs]
getattr                   7 samples [regs]
create                    1 samples [regs]
alloc_inode               1 samples [regs]
getxattr                  8 samples [regs]
inode_permission          28 samples [regs]
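
In case the counter layout is not obvious from the listing, here is a minimal, self-contained C sketch of the
bookkeeping behind a "N samples [bytes] min max sum" line. This is my own illustration, not the actual Lustre
lprocfs code; the op_counter/tally names are made up:

/*
 * Each operation calls tally() with its byte count; the stats file then
 * prints the sample count, minimum, maximum, and sum for that counter.
 */
#include <limits.h>
#include <stdio.h>

struct op_counter {
	const char *name;
	unsigned long count;	/* number of samples */
	unsigned long min;	/* smallest sample seen */
	unsigned long max;	/* largest sample seen */
	unsigned long sum;	/* total bytes */
};

static void tally(struct op_counter *c, unsigned long bytes)
{
	c->count++;
	c->sum += bytes;
	if (bytes < c->min)
		c->min = bytes;
	if (bytes > c->max)
		c->max = bytes;
}

int main(void)
{
	struct op_counter wr = { "write_bytes", 0, ULONG_MAX, 0, 0 };
	unsigned long writes[] = { 7, 2, 4, 6 };	/* illustrative write sizes */

	for (unsigned int i = 0; i < sizeof(writes) / sizeof(writes[0]); i++)
		tally(&wr, writes[i]);

	/* Prints a line in the same "name N samples [bytes] min max sum" format,
	 * here: 4 samples, min 2, max 7, sum 19, matching the listing above. */
	printf("%-25s %lu samples [bytes] %lu %lu %lu\n",
	       wr.name, wr.count, wr.min, wr.max, wr.sum);
	return 0;
}

A real implementation also has to cope with concurrent updaters, but the per-sample bookkeeping amounts to
the above.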

As more operation types are seen, the list grows.
There are also specific stats for readahead (data and metadata) so that interested people can make informed
decisions on tuning it, should they be unsatisfied with the default settings.

I am not sure there's already a similar mechanism in the kernel that would allow us to get this sort of data
easily, all in one place. Is there?
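
For anyone who only sees this part of the thread: the patch in the subject line simply switches the access to
the RCU-protected current->real_parent pointer over to rcu_dereference(). A minimal sketch of that pattern
(written for illustration, not copied from the Lustre tree; the function name is made up) looks like this:

#include <linux/rcupdate.h>
#include <linux/sched.h>

/*
 * real_parent is RCU-protected, so the reader takes rcu_read_lock()
 * and uses rcu_dereference() instead of a plain pointer load.
 */
static pid_t example_parent_pid(void)
{
	struct task_struct *parent;
	pid_t ppid;

	rcu_read_lock();
	parent = rcu_dereference(current->real_parent);
	ppid = parent->pid;
	rcu_read_unlock();

	return ppid;
}

As quoted above, the worst that happens without it is a missed counter increment, which is why it is not
considered backport material.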

Bye,
    Oleg

_______________________________________________
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxx
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel



