Re: [PATCH 2/2 v2] writeback: Add writeback stats for pages written

Hi Jan:

On Mon, Aug 15, 2011 at 11:40 AM, Jan Kara <jack@xxxxxxx> wrote:
> On Mon 15-08-11 10:16:38, Curt Wohlgemuth wrote:
>> On Mon, Aug 15, 2011 at 6:48 AM, Wu Fengguang <fengguang.wu@xxxxxxxxx> wrote:
>> > Curt,
>> >
>> > Some thoughts about the interface..before dipping into the code.
>> >
>> > On Sat, Aug 13, 2011 at 06:47:25AM +0800, Curt Wohlgemuth wrote:
>> >> Add a new file, /proc/writeback/stats, which displays
>> >
>> > That's creating a new top directory in /proc. Do you have plans for
>> > adding more files under it?
>>
>> Good question.  We have several files under /proc/writeback in our
>> kernels that we created at various times, some of which are probably
>> no longer useful, but others seem to be.  For example:
>>   - congestion: prints # of calls, # of jiffies slept in
>> congestion_wait() / io_schedule_timeout() from various call points
>>   - threshold_dirty : prints the current global FG threshold
>>   - threshold_bg : prints the current global BG threshold
>>   - pages_cleaned : prints the # pages sent to writeback -- same as
>> 'nr_written' in /proc/vmstat (ours was earlier :-( )
>>   - pages_dirtied (same as nr_dirtied in /proc/vmstat)
>>   - prop_vm_XXX : print shift/events from vm_completions and vm_dirties
>>
>> I'm not sure right now if global FG/BG thresholds appear anywhere in a
>> 3.1 kernel; if so, the two threshold files above are superfluous.  So
>> are the pages_cleaned/dirtied.  The prop_vm files have not proven
>> useful to me.  I think the congestion file has a lot of value,
>> especially in an IO-less throttling world...
>  /sys/kernel/debug/bdi/<dev>/stats has BdiDirtyThresh, DirtyThresh, and
> BackgroundThresh. So we should already expose all you have in the threshold
> files.

Ah, right, I knew that and overlooked it.  I get confused looking at
lots of kernel versions and patches at the same time :-) .

> Regarding congestion_wait() statistics - am I right that the numbers
> gathered actually depend on the number of threads using the congested
> device? They are something like
>  \sum_{over threads} time_waited_for_bdi
> How do you interpret the resulting numbers then?

I don't have it by thread; just stupidly as totals, like this:

calls: ttfp                                  11290
time : ttfp                                 558191
calls: shrink_inactive_list isolated           xxx
time : shrink_inactive_list isolated           xxx
calls: shrink_inactive_list lumpy reclaim      xxx
time : shrink_inactive_list lumpy reclaim      xxx
calls: balance_pgdat                           xxx
time : balance_pgdat                           xxx
calls: alloc_pages_high_priority               xxx
time : alloc_pages_high_priority               xxx
calls: alloc_pages_slowpath                    xxx
time : alloc_pages_slowpath                    xxx
calls: throttle_vm_writeout                    xxx
time : throttle_vm_writeout                    xxx
calls: balance_dirty_pages                     xxx
time : balance_dirty_pages                     xxx

Note that the "call" points above are from a very old (2.6.34 +
backports) kernel, but you get the idea.  We just wrap
congestion_wait() with a routine that takes a 'type' parameter, calls
congestion_wait(), increments the appropriate 'call' stat, and adds
the return value from congestion_wait() to the appropriate 'time'
stat.
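
In rough form it amounts to something like the sketch below (this is
from memory and simplified -- the enum tags, the counter arrays, and
the congestion_wait_stat() name are illustrative, not the exact code
in our tree):

#include <linux/backing-dev.h>	/* congestion_wait() */
#include <linux/atomic.h>

/* Illustrative call-site tags; one per wrapped call point. */
enum cw_type {
	CW_TTFP,
	CW_SHRINK_ISOLATED,
	CW_SHRINK_LUMPY,
	CW_BALANCE_PGDAT,
	CW_ALLOC_HIGH_PRIO,
	CW_ALLOC_SLOWPATH,
	CW_THROTTLE_VM_WRITEOUT,
	CW_BALANCE_DIRTY_PAGES,
	NR_CW_TYPES,
};

static atomic_long_t cw_calls[NR_CW_TYPES];
static atomic_long_t cw_jiffies[NR_CW_TYPES];

static long congestion_wait_stat(int sync, long timeout, enum cw_type type)
{
	long ret = congestion_wait(sync, timeout);

	/* Bump the per-call-point counters: one more call, and the
	 * jiffies value that congestion_wait() handed back goes into
	 * the 'time' stat. */
	atomic_long_inc(&cw_calls[type]);
	atomic_long_add(ret, &cw_jiffies[type]);
	return ret;
}

Each call site then just switches from, say,
congestion_wait(BLK_RW_ASYNC, HZ/10) to
congestion_wait_stat(BLK_RW_ASYNC, HZ/10, CW_TTFP), and the /proc file
walks the two arrays to produce the listing above.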

For a given workload, you can get an idea of where congestion is
adding to delays.  I really think that for IO-less
balance_dirty_pages(), we need some insight into how long writer
threads are being throttled.  And tracepoints are great, but not
sufficient, IMHO.

Thanks,
Curt

>
>                                                                Honza
>
>> >> machine global data for how many pages were cleaned for
>> >> which reasons.  It also displays some additional counts for
>> >> various writeback events.
>> >>
>> >> These data are also available for each BDI, in
>> >> /sys/block/<device>/bdi/writeback_stats .
>> >
>> >> Sample output:
>> >>
>> >>    page: balance_dirty_pages           2561544
>> >>    page: background_writeout              5153
>> >>    page: try_to_free_pages                   0
>> >>    page: sync                                0
>> >>    page: kupdate                        102723
>> >>    page: fdatawrite                    1228779
>> >>    page: laptop_periodic                     0
>> >>    page: free_more_memory                    0
>> >>    page: fs_free_space                       0
>> >>    periodic writeback                      377
>> >>    single inode wait                         0
>> >>    writeback_wb wait                         1
>> >
>> > That's already useful data, and could be further extended (in
>> > future patches) to answer questions like "what's the writeback
>> > efficiency in terms of effective chunk size?"
>> >
>> > So in future there could be lines like
>> >
>> >    pages: balance_dirty_pages           2561544
>> >    chunks: balance_dirty_pages          XXXXXXX
>> >    works: balance_dirty_pages           XXXXXXX
>> >
>> > or even derived lines like
>> >
>> >    pages_per_chunk: balance_dirty_pages         XXXXXXX
>> >    pages_per_work: balance_dirty_pages          XXXXXXX
>> >
>> > Another question is, how can the display format be script friendly?
>> > The current form isn't easily parseable, at least not with "cut"..
>>
>> I suppose you mean because of the variable number of tokens.  Yeah,
>> this can be hard.  Of course, I always just use "awk '{print $NF}'"
>> and it works for me :-) .  But I'd be happy to change these to use a
>> consistent # of args.
>>
>> Thanks,
>> Curt
>>
>>
>> > Thanks,
>> > Fengguang
>> >
>

