Re: Safe XFS limits (100TB+)

On Thu, 2 Feb 2017 18:48:50 +0000,
fuser ct1 <fuserct1@xxxxxxxxx> wrote:

> >I manage and support several hosts I built and set up, some running
> >for many years, with very large XFS volumes.
> >Recent XFS volumes with XFS v5 seem to promise even more robustness,
> >thanks to metadata checksums.  
> 
> Thanks, this is good to know, although I think the distributions I
> use are at most running xfsprogs 4.3.0+nmu1ubuntu1 on Ubuntu 16.04.
> Might go fishing in backports though.

4.3 should be good. XFS v5 requires at least kernel 3.16.
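
If you're creating a new filesystem, you can ask for the v5 format
explicitly (recent xfsprogs versions enable it by default; /dev/sdb is
just a placeholder for your RAID volume):

  # Create an XFS v5 filesystem with metadata checksums (CRCs)
  mkfs.xfs -m crc=1 /dev/sdb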

> The checksum idea is interesting, I'll have a read - having worked
> with ZFS for some time too, it'll be interesting to see how this
> feature compares.

It's only metadata checksumming in XFS, so it's much faster (but of
course less safe); you can scrub the data using the RAID controller
instead.
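
You can check whether an existing filesystem has them enabled by
looking for crc=1 in the meta-data line (the mount point is just an
example):

  # Print the filesystem geometry, including the crc flag
  xfs_info /mnt/storage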

> >Currently in use under heavy load: machines with the following
> >usable volumes, almost all of them using RAID 60 (21 to 28 drives
> >x 2 or x3):
> >
> >1 x 490 TB volume
> >3 x 390 TB volumes
> >1 x 240 TB volume
> >2 x 180 TB volumes
> >5 x 160 TB volumes
> >11 x 120 TB volumes
> >4 x 90 TB volumes
> >14 x 77 TB volumes
> >many, many 50 and 40 TB volumes.
> 
> The 390TB thing looks tempting. With this LSI one could probably do 1x
> logical volume comprised of two spans of 22x R60, which would yield
> something like 288TB usable.

No, these are USABLE volumes. 390 TB is the usable capacity of a
chassis of 60 x 8 TB drives (480 TB raw), split into 2 x 29 drives
plus 2 spares.

On most systems I use 2 controllers (one per array) for higher
performance (though it doesn't make that much of a difference with the
latest generation).

> >2x22-disk RAID 60 is perfectly OK, as long as you're using good
> >disks. I only use HGST, and have a failure rate so low I don't even
> >bother tracking it precisely anymore (like 2 or 3 failures a year
> >among the couple thousand disks listed above).
> 
> I've planned for 7K6 Ultrastars. The HGSTs never give me much
> trouble. Sometimes I've had dead ones upon init, but that's pretty
> normal I guess.

As the latest Backblaze report shows, not all Seagate drives are bad;
however, all of the terrible hard disk models come from Seagate...

> >Use a recent xfsprogs and kernel, and use XFS v5 if possible. Don't
> >forget proper optimisations (use the noop scheduler, enlarge
> >nr_requests and read_ahead_kb a lot) for high sequential throughput
> >(video is all about sequential throughput) and you should be happy
> >and safe.
> 
> Normally using NOOP, 1024 nr_requests and 8196 read ahead.

Good :)
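
For reference, the whole tuning boils down to a few sysfs writes,
something like this (sdb is a placeholder for your array device, and
the values are the ones discussed above):

  # Let the RAID controller do the request reordering
  echo noop > /sys/block/sdb/queue/scheduler
  # Allow more requests in flight
  echo 1024 > /sys/block/sdb/queue/nr_requests
  # Large readahead for sequential (video) workloads
  echo 8192 > /sys/block/sdb/queue/read_ahead_kb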

> >xfs_repair on a full, fast 100 TB volume only needs 15 minutes or
> >so. And that was after a very, very bad power event (someone
> >connected a studio light to the UPS and brought everything down
> >literally in flames).
> 
> Thanks that's really helpful to have a frame of reference!

It used to be much worse a few years back, when xfs_repair gobbled up
RAM. I remember setting up additional swap space on USB drives just to
be able to run a repair... That was wayyyyy slower back then :)

Provided you have enough memory (32 GB or more), nowadays xfs_repair
on a huge filesystem is a breeze, even with gazillions of files (like
DPX or EXR image sequences...).
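
For what it's worth, I'd always do a dry run first, and you can cap
memory usage if the box is short on RAM (device name is just an
example; the filesystem must be unmounted):

  # Dry run: report problems without modifying anything
  xfs_repair -n /dev/sdb1
  # Real repair, limiting memory usage to roughly 2 GB
  xfs_repair -m 2048 /dev/sdb1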

[I'm cc'ing to the list because the information may help someone else
someday :)]

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |   <eflorac@xxxxxxxxxxxxxx>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------
