Re: mkfs.xfs states log stripe unit is too large

On Mon, 2 Jul 2012 02:18:27 -0400 Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:

> Ping to Neil / the raid list.

Thanks for the reminder.

> 
> On Tue, Jun 26, 2012 at 04:02:17AM -0400, Christoph Hellwig wrote:
> > On Tue, Jun 26, 2012 at 12:30:59PM +1000, Dave Chinner wrote:
> > > You can't, simple as that. The maximum supported is 256k. As it is,
> > > a default chunk size of 512k is probably harmful to most workloads -
> > > large chunk sizes mean that just about every write will trigger a
> > > RMW cycle in the RAID because it is pretty much impossible to issue
> > > full stripe writes. Writeback doesn't do any alignment of IO (the
> > > generic page cache writeback path is the problem here), so we will
> > > almost always be doing unaligned IO to the RAID, and there will be
> > > little opportunity for sequential IOs to merge and form full stripe
> > > writes (24 disks @ 512k each on RAID6 is an 11MB full stripe write).
> > > 
> > > IOWs, every time you do a small isolated write, the MD RAID volume
> > > will do a RMW cycle, reading 11MB and writing 12MB of data to disk.
> > > Given that most workloads are not doing lots and lots of large
> > > sequential writes this is, IMO, a pretty bad default given typical
> > > RAID5/6 volume configurations we see....
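
To make the arithmetic above concrete, here is a rough back-of-the-envelope
sketch of the geometry Dave describes (24 disks, RAID6, 512k chunks). It is
purely illustrative Python; none of the names come from md or XFS.

  # Illustrative numbers for the RAID6 geometry described above.
  CHUNK = 512 * 1024            # md chunk size in bytes (512 KiB)
  NDISKS = 24                   # total member disks
  NPARITY = 2                   # RAID6 stores two parity chunks per stripe

  data_disks = NDISKS - NPARITY           # 22 data-bearing chunks per stripe
  data_stripe = data_disks * CHUNK        # 22 * 512 KiB = 11 MiB of data
  full_stripe = NDISKS * CHUNK            # 24 * 512 KiB = 12 MiB on disk

  print(f"data per full stripe : {data_stripe // 2**20} MiB")   # -> 11 MiB
  print(f"written per stripe   : {full_stripe // 2**20} MiB")   # -> 12 MiB

  # A small write that covers less than the 11 MiB of stripe data forces
  # the array to read the rest of the stripe back in before it can
  # recompute parity: roughly an 11 MiB read plus a 12 MiB write for one
  # small, isolated IO.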
> > 
> > Not too long ago I benchmarked mdraid stripe sizes, and at least
> > for XFS 32kb was a clear winner; anything larger decreased performance.
> > 
> > ext4 didn't get hit that badly with larger stripe sizes, probably
> > because they still internally bump the writeback size like crazy, but
> > they did not actually get faster with larger stripes either.
> > 
> > This was with streaming, data-heavy workloads; anything more
> > metadata-heavy will probably suffer from larger stripes even more.
> > 
> > Ccing the linux-raid list to ask whether there actually is any reason for
> > these defaults, something I have wanted to ask for a long time but never
> > got around to.
> > 
> > Also I'm pretty sure that back then the md default was 256kb, not 512kb,
> > so it seems the defaults have increased further.

"originally" the default chunksize was 64K.
It was changed in late 2009 to 512K - this first appeared in mdadm 3.1.1

I don't recall the details of why it was changed, but I'm fairly sure it was
based on measurements that I and others had made.  I suspect the tests were
largely run on ext3.

I don't think there is anything close to a truly optimal chunk size.  What
works best really depends on your hardware, your filesystem, and your
workload.

If 512K is always suboptimal for XFS then that is unfortunate, but I don't
think it is really possible to choose a default that everyone will be happy
with.  Maybe we just need more documentation and more warnings emitted by
various tools.  Maybe mkfs.xfs could augment the "stripe unit too large"
message with some text about choosing a smaller chunk size?
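
For what it's worth, here is a minimal sketch of what such an augmented
warning could look like, assuming the 256k log stripe unit limit Dave
mentions above.  This is illustrative Python, not the actual mkfs.xfs code;
the helper name, message text, and the mdadm hint are made up.

  MAX_LOG_SUNIT = 256 * 1024   # largest log stripe unit XFS supports (256 KiB)

  def check_log_stripe_unit(chunk_bytes: int) -> None:
      """Warn when the md chunk size exceeds what the XFS log can use."""
      if chunk_bytes > MAX_LOG_SUNIT:
          print(f"log stripe unit ({chunk_bytes} bytes) is too large, "
                f"maximum is {MAX_LOG_SUNIT} bytes")
          print("hint: the array's chunk size is larger than the XFS log "
                "can use; consider recreating the array with a smaller "
                "chunk, e.g. 'mdadm --create ... --chunk=256' or less")

  check_log_stripe_unit(512 * 1024)   # current mdadm default chunk size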

NeilBrown



