Re: [PATCH 2/2 v2] writeback: Add writeback stats for pages written

  Hi Curt,

On Mon 15-08-11 11:56:08, Curt Wohlgemuth wrote:
> On Mon, Aug 15, 2011 at 11:40 AM, Jan Kara <jack@xxxxxxx> wrote:
> > Regarding congestion_wait() statistics - do I get right that the numbers
> > gathered actually depend on the number of threads using the congested
> > device? They are something like
> >  \sum_{over threads} time_waited_for_bdi
> > How do you interpret the resulting numbers then?
> 
> I don't have it by thread; just stupidly as totals, like this:
> 
> calls: ttfp                                         11290
> time : ttfp                                        558191
> calls: shrink_inactive_list isolated       xxx
> time : shrink_inactive_list isolated            xxx
> calls: shrink_inactive_list lumpy reclaim       xxx
> time : shrink_inactive_list lumpy reclaim          xxx
> calls: balance_pgdat                                xxx
> time : balance_pgdat                                xxx
> calls: alloc_pages_high_priority                    xxx
> time : alloc_pages_high_priority                    xxx
> calls: alloc_pages_slowpath                         xxx
> time : alloc_pages_slowpath                         xxx
> calls: throttle_vm_writeout                         xxx
> time : throttle_vm_writeout                         xxx
> calls: balance_dirty_pages                          xxx
> time : balance_dirty_pages                         xxx
  Yes, that's what I was expecting.

> Note that the "call" points above are from a very old (2.6.34 +
> backports) kernel, but you get the idea.  We just wrap
> congestion_wait() with a routine that takes a 'type' parameter; does
> the congestion_wait(); and increments the appropriate 'call' stat, and
> adds to the appropriate 'time' stat the return value from
> congestion_wait().
  OK, I see. I imagine that could be useful when you are monitoring your
systems or doing some long-term observations.
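The accounting scheme Curt describes can be sketched roughly as below. This is a self-contained userspace model, not kernel code: `congestion_wait_stub()` stands in for the kernel's `congestion_wait()` (which returns how long it actually slept), and the `cw_type` names are illustrative, not the ones from the patch.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical congestion-wait call sites; the real patch had its own set
 * (ttfp, shrink_inactive_list, balance_pgdat, balance_dirty_pages, ...). */
enum cw_type {
	CW_TTFP,			/* try_to_free_pages */
	CW_BALANCE_DIRTY_PAGES,
	CW_NR_TYPES,
};

/* Per-type totals: number of calls and accumulated wait time. */
static unsigned long cw_calls[CW_NR_TYPES];
static unsigned long cw_time[CW_NR_TYPES];

/* Stand-in for congestion_wait(): the kernel version sleeps and returns
 * the time actually waited; here we just pretend we slept the full timeout. */
static unsigned long congestion_wait_stub(unsigned long timeout)
{
	return timeout;
}

/* The wrapper: do the wait, then bump the 'call' counter and add the
 * returned wait time to the 'time' total for the given type. */
static unsigned long congestion_wait_stat(enum cw_type type,
					  unsigned long timeout)
{
	unsigned long waited = congestion_wait_stub(timeout);

	cw_calls[type]++;
	cw_time[type] += waited;
	return waited;
}
```

Dumping `cw_calls[]` / `cw_time[]` then yields exactly the "calls:" / "time :" pairs shown above.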

> For a given workload, you can get an idea for where congestion is
> adding to delays.  I really think that for IO-less
> balance_dirty_pages(), we need some insight into how long writer
> threads are being throttled.  And tracepoints are great, but not
> sufficient, IMHO.
  Well, we are going to report computed delays via tracepoints, which are
going to be the prime interface for debugging, but I agree that some
statistics could be useful as well and more lightweight (no need to pass
lots of trace data to userspace). OTOH I wonder if we shouldn't write a
userspace tool processing trace information from balance_dirty_pages() and
generating exactly those statistics you want in the kernel - something like
a writeback tracer...
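Such a userspace tool would amount to little more than folding tracepoint output back into call/time totals. A minimal sketch of the aggregation step, assuming a hypothetical trace line format that carries a `pause=<ms>` field (the actual field names of any balance_dirty_pages tracepoint would differ):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Running totals recovered from trace output. */
struct bdp_stats {
	unsigned long events;		/* number of throttling events seen */
	unsigned long total_pause;	/* sum of pause= values, in ms */
};

/* Fold one line of (hypothetical) balance_dirty_pages trace output,
 * e.g. "dd-1234 [001] ....: balance_dirty_pages: bdi 8:0 pause=12",
 * into the running totals.  Returns 1 if the line matched, 0 otherwise. */
static int bdp_account_line(const char *line, struct bdp_stats *st)
{
	const char *p = strstr(line, "pause=");
	unsigned long pause;

	if (!p || sscanf(p, "pause=%lu", &pause) != 1)
		return 0;
	st->events++;
	st->total_pause += pause;
	return 1;
}
```

A real tool would read these lines from the tracing interface (e.g. trace_pipe) and periodically print the same calls/time summary that the in-kernel counters would provide.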

								Honza
-- 
Jan Kara <jack@xxxxxxx>
SUSE Labs, CR

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .

