Re: gather write metrics on multiple files

On Thu, Oct 09, 2014 at 12:24:20AM -0500, Stan Hoeppner wrote:
> On 10/08/2014 11:49 PM, Joe Landman wrote:
> > On 10/09/2014 12:40 AM, Stan Hoeppner wrote:
> >> Does anyone know of a utility that can track writes to files in
> >> an XFS directory tree, or filesystem wide for that matter, and
> >> gather filesystem blocks written per second data, or simply
> >> KiB/s, etc?  I need to analyze an application's actual IO
> >> behavior to see if it matches what I'm being told the
> >> application is supposed to be doing.
> >>
> > 
> > We've written a few for this purpose (local IO probing).
> > 
> > Start with collectl (which reads /proc/diskstats), among others.
> > Our tools also read /proc/diskstats and use it to compute BW and
> > IOPS per device.
> > 
> > If you need to log it for a long time, set up a time series
> > database (we use influxdb and the graphite plugin).  Then grab
> > your favorite metrics tool that talks to graphite/influxdb (I
> > like https://github.com/joelandman/sios-metrics for obvious
> > reasons), and start collecting data.
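
For anyone wanting a starting point, here is a minimal sketch of
that /proc/diskstats approach, assuming Python 3 (field offsets are
per Documentation/iostats.txt; sectors in diskstats are always
512 bytes).  Like collectl, this is per-device, not per-file:

  #!/usr/bin/env python3
  # Sample /proc/diskstats twice; report per-device write IOPS and KiB/s.
  import time

  def snap():
      stats = {}
      with open("/proc/diskstats") as f:
          for line in f:
              p = line.split()
              # p[2] = device name, p[7] = writes completed,
              # p[9] = sectors written (always 512-byte units)
              stats[p[2]] = (int(p[7]), int(p[9]))
      return stats

  INTERVAL = 2.0   # seconds between samples

  before = snap()
  time.sleep(INTERVAL)
  after = snap()

  for dev, (w1, s1) in sorted(after.items()):
      w0, s0 = before.get(dev, (w1, s1))
      iops = (w1 - w0) / INTERVAL
      kib = (s1 - s0) * 512 / 1024 / INTERVAL
      if iops:
          print("%-10s %8.1f w/s %10.1f KiB/s" % (dev, iops, kib))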
> 
> I'm told we have 800 threads writing to nearly as many files
> concurrently on a single XFS on a 12+2 spindle RAID6 LUN.
> Achieved data rate is currently ~300 MiB/s.  Some of these
> files are supposedly being written at a rate of only 32 KiB
> every 2-3 seconds, while two are at ~50 MiB/s.  I need to
> determine how many bytes we're writing to each of the low rate
> files, and how many such files there are, to figure out RMW
> mitigation strategies.  Of the apparent 800 streams, 700 are
> these low data rate suckers, one stream writing per file.
> 
> Nary a stock RAID controller is going to be able to assemble full
> stripes out of these small slow writes.  With a 768 KiB stripe
> that's 24 IOs per stripe, so roughly 48 seconds to fill one at 2
> seconds per 32 KiB write.

RAID controllers typically don't have the resources to track
hundreds of separate write streams at a time.  Most don't have the
memory to cache that many active streams, and those that do
probably can't prioritise writeback sanely given how slowly most of
the cached data would be touched.  The fast writers would simply
turn over the slow writers' cache contents way too quickly.

Perhaps you need to change the application so the slow writers
buffer stripe-sized writes in memory and flush them 768 KiB at a
time...
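
Something along these lines, as a rough sketch assuming Python 3;
the class name is illustrative, and STRIPE assumes the 12 data
spindle geometry above with 64 KiB chunks:

  import os

  STRIPE = 768 * 1024   # full stripe: 12 data spindles, assumed 64 KiB chunk

  class StripeBufferedWriter:
      """Accumulate small appends in memory; issue only full-stripe writes."""
      def __init__(self, path):
          self.fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
          self.buf = bytearray()

      def write(self, data):
          self.buf += data
          while len(self.buf) >= STRIPE:
              chunk = bytes(self.buf[:STRIPE])   # one full 768 KiB stripe
              del self.buf[:STRIPE]
              self._write_all(chunk)

      def _write_all(self, data):
          mv = memoryview(data)
          while mv:                              # cope with short writes
              n = os.write(self.fd, mv)
              mv = mv[n:]

      def close(self):
          if self.buf:
              self._write_all(bytes(self.buf))   # trailing partial stripe
          os.close(self.fd)

Across 700 slow streams that's ~525 MiB of application buffer, and
at 32 KiB every 2-3 seconds each file holds up to ~48-72 seconds of
unflushed data if the application dies, so it's a durability
trade-off.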

> I've been playing with bcache for a few days but it actually drops
> throughput by about 30% no matter how I turn its knobs.  Unless I
> can get Kent to respond to some of my questions bcache will be a
> dead end.  I had high hopes for it, thinking it would turn these
> small random IOs into larger sequential writes.  It may actually
> be doing so, but it's doing something else too, and badly.  IO
> times go through the roof once bcache starts gobbling IOs, and
> throughput to the LUNs drops significantly even though bcache is
> writing 50-100 MiB/s to the SSD.  Not sure what's causing that.

Have you tried dm-cache?

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
