Re: Safe XFS limits (100TB+)

Thanks for the fast reply, Eric!

It's good to know I'm not completely off my chop :-)

288T! Wow, OK. Is that still safe and workable when it starts to get
full (say, around 70% utilized)?
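
I'm assuming that once it fills up I could keep an eye on free space
fragmentation with something like the following (just my guess at a
reasonable read-only check, not something anyone has recommended):

# xfs_db -r -c freesp /dev/sdb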

In this case the machine will have 128G of memory and dual E5 Xeons.
The filesystem will hold a fair number of files, but they're all going
to be a fair few gigabytes each.

Regarding xfs_repair/check, thanks for the heads up. I've only had to
use repair once, due to a mess caused by running unsupported firmware
on an Adaptec 71605 (learned my lesson quickly there).

I wonder if my eyes are bigger than my stomach now? Why have two 144TB
filesystems if you can have one big one...

My fstab mount options are as follows:

rw,nobarrier,inode64
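
For completeness, the full fstab line is something like the below (the
mount point here is just a placeholder, not my actual path):

UUID=ccac4134-12a0-4dbd-9365-d2e166d927ed /mnt/storage xfs rw,nobarrier,inode64 0 0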

Also, FWIW, I didn't partition the device last time; I just ended up
with this from blkid:

/dev/sdb: UUID="ccac4134-12a0-4dbd-9365-d2e166d927ed" TYPE="xfs"
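
In other words I just ran mkfs against the whole disk and mounted it,
roughly like this (from memory, so treat the exact commands and mount
point as approximate):

# mkfs.xfs /dev/sdb
# mount -o rw,nobarrier,inode64 /dev/sdb /mnt/storage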


On Thu, Feb 2, 2017 at 5:16 PM, Eric Sandeen <sandeen@xxxxxxxxxxx> wrote:
> On 2/2/17 10:46 AM, fuser ct1 wrote:
>> Hello list.
>>
>> Despite searching I couldn't find guidance, or many use cases, regarding
>> XFS beyond 100TB.
>>
>> Of course the filesystem limits are way beyond this, but I was looking for
>> real world experiences...
>>
>> Specifically I'm wondering about the sanity of using XFS with a couple of
>> 144TB block devices (my system will have two 22x8TB R60 in a 44 bay JBOD).
>> My storage is used for video editing/post production.
>>
>> * Has anybody here tried?
>
> XFS has been used well past 100T, sure.
>
>> * What is the likelihood of xfs_repair/check finishing if I ever needed to
>> run it?
>
> xfs_check no, but it's deprecated anyway because it doesn't scale.
>
> xfs_repair yes, though the amount of resources needed will depend on
> the details of how you populate the filesystem.
>
> On my puny celeron with 8g ram, xfs_repair of an empty 288T image file
> takes 2 seconds.  Filling it with files will change this :)
> But if it's for video editing I presume you will actually be fairly
> light on the metadata, with a not-insane number of inodes, and very
> large files.
>
> But you do want to make sure that the machine administering the filesystem
> is fairly beefy, for xfs_repair purposes.
>
> http://xfs.org/index.php/XFS_FAQ#Q:_Which_factors_influence_the_memory_usage_of_xfs_repair.3F
>
>> * Am I nuts?
>
> Probably not.   :)
>
> -Eric
>
>> I know that beyond a certain point I should be looking at a scale out
>> option, but the level of complexity and cost goes up considerably.
>>
>> More info:
>> =======
>>
>> Previously I had an 80TB usable (96TB raw) with an LSI MegaRAID 9361
>> controller. This worked very nicely and was FAST. I was careful to choose
>> the inode64 fstab mount option. The OS was Debian Jessie, which has
>> XFSPROGS version 3.2.1.
>>
>> Thanks in advance and sorry if this is not the right list.
--
To unsubscribe from this list: send the line "unsubscribe linux-xfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


