On Thu, 2 Feb 2017 16:46:09 +0000, fuser ct1 <fuserct1@xxxxxxxxx> wrote:

> Hello list.
>
> Despite searching I couldn't find guidance, or many use cases,
> regarding XFS beyond 100TB.
>
> Of course the filesystem limits are way beyond this, but I was
> looking for real world experiences...

I manage and support several hosts I built and set up, some running for many
years, with very large XFS volumes. Recent volumes using XFS v5 seem to
promise even more robustness, thanks to metadata checksums.

Currently in use under heavy load are machines with the following usable
volumes, almost all of them using RAID 60 (21 to 28 drives, x2 or x3):

 1 x 490 TB volume
 3 x 390 TB volumes
 1 x 240 TB volume
 2 x 180 TB volumes
 5 x 160 TB volumes
 11 x 120 TB volumes
 4 x 90 TB volumes
 14 x 77 TB volumes
 many, many 50 and 40 TB volumes

A 2x22-disk RAID 60 is perfectly OK, as long as you're using good disks. I
only use HGST, and the failure rate is so low I don't even bother tracking it
precisely anymore (2 or 3 failures a year among the couple of thousand disks
listed above).

Use recent xfsprogs and a recent kernel, and use XFS v5 if possible. Don't
forget the proper optimisations for high sequential throughput (use the noop
scheduler, enlarge nr_requests and read_ahead_kb a lot; video is all about
sequential throughput) and you should be happy and safe. A quick sketch of
those tunables follows at the end of this mail.

xfs_repair on a filled, fast 100 TB volume only needs 15 minutes or so. And
that was after a very, very bad power event (someone connected a studio light
to the UPS and brought everything down, literally in flames).

-- 
------------------------------------------------------------------------
Emmanuel Florac | Direction technique | Intellique
<eflorac@xxxxxxxxxxxxxx> | +33 1 78 94 84 02
------------------------------------------------------------------------
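
For completeness, a minimal sketch of how those queue tunables might be set
through sysfs. The device name (sdb) and the values are only examples, not
what we run in production; adapt them to your own controller and workload:

#!/usr/bin/env python3
# Minimal sketch: apply block-layer tunables for high sequential throughput.
# Device name and values are examples only; adjust for your hardware.
# Must run as root; sysfs settings do not persist across reboots.
from pathlib import Path

DEVICE = "sdb"                      # example device name
QUEUE = Path("/sys/block") / DEVICE / "queue"

tunables = {
    "scheduler": "noop",            # use "none" on newer blk-mq kernels
    "nr_requests": "2048",          # enlarge the request queue
    "read_ahead_kb": "16384",       # large readahead helps sequential streaming
}

for name, value in tunables.items():
    (QUEUE / name).write_text(value)
    print(f"{name} -> {value}")

Since the settings are lost at reboot, put them in a udev rule or an init
script so they are reapplied every time the array comes up.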