Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)

> Instead, a far better solution would be to set aside 4 spares per
> chassis and create 14 four-drive RAID10 arrays.  This would yield ~600
> seeks/sec and ~400MB/s sequential throughput per 2-spindle
> array.  We'd stitch the resulting 56 hardware RAID10 arrays together in
> an mdraid linear (concatenated) array.  Then we'd format this 112
> effective spindle linear array with simply:
>
> $ mkfs.xfs -d agcount=56 /dev/md0
>
> Since each RAID10 is 900GB in capacity, we get 56 AGs of just under
> the 1TB limit, 1 AG per 2 physical spindles.  Due to the 2-stripe-spindle
> nature of the constituent hardware RAID10 arrays, we don't need to worry
> about aligning XFS writes to the RAID stripe width.  The hardware cache
> will take care of filling the small stripes.  Now we're in the opposite
> situation from having too many AGs per spindle: we've put 2 spindles in
> a single AG and turned the seek starvation issue on its head.
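
(For reference, the concatenation step described above would look
something like the following, with a hypothetical device glob standing
in for the 56 hardware RAID10 LUNs, followed by the quoted mkfs.xfs
command:

  $ mdadm --create /dev/md0 --level=linear --raid-devices=56 \
        /dev/disk/by-id/raid10-lun-*
  $ mkfs.xfs -d agcount=56 /dev/md0
)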

So it sounds like, for poor guys like us who can’t afford the hardware
for dozens of spindles, the best option would be to create the XFS
file system with agcount=1?  That seems to be the only reasonable
conclusion to me, since a single RAID device, like a single disk,
cannot write in parallel anyway.
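
In other words, something like this, with /dev/sdb1 standing in for
the single RAID device (at least for a file system under the 1TB AG
size limit mentioned above):

  $ mkfs.xfs -d agcount=1 /dev/sdb1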



