Re: memory requirements for a 400TB fs with reflinks


 



Hi Ralf,

On Monday, 2021-03-22 at 17:50 +0100, Ralf Groß wrote:
> No advice or rule of thumb regarding needed memory for xfs_repair?

xfs_repair can be quite a memory hog, but its memory requirements scale
mostly with the amount of metadata in the FS, not with the overall size
of the FS. A small FS with a ton of small files will therefore need
much more RAM for a repair run than a big FS holding only a few large
files.
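As a rough sketch of how to size this in advance (device names below are placeholders, and the exact output depends on your xfsprogs version):

```shell
# Gauge the metadata volume: dump only the metadata of the (unmounted)
# filesystem and look at the size of the resulting file.
xfs_metadump /dev/sdX /tmp/meta.dump
ls -lh /tmp/meta.dump

# Ask xfs_repair itself: with an artificially low memory limit
# (-m, in MB) and a dry run (-n), it reports an estimate of the
# memory a real repair run would need.
xfs_repair -n -vv -m 1 /dev/sdX
```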

However, xfs_repair makes linear passes over its working set, so it
works really well with swap. Our backup servers handle filesystems
with ~400GB of metadata (the size of the metadump) and are equipped
with only 64GB of RAM. For the worst case, where an xfs_repair run
might be needed, they simply have a 1TB SSD to be used as swap during
the repair.
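A minimal sketch of that worst-case procedure, assuming the spare SSD shows up as /dev/nvme1n1 (a placeholder) and holds no data:

```shell
# Turn the spare SSD into swap for the duration of the repair.
mkswap /dev/nvme1n1
swapon -p 100 /dev/nvme1n1   # high priority, so it is used before any other swap

xfs_repair /dev/sdX          # the filesystem to repair (placeholder device)

# Remove the swap again once the repair is done.
swapoff /dev/nvme1n1
```

Using swapon's priority flag keeps the kernel from spilling onto a slower pre-existing swap device while the repair runs.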

Regards,
Lucas

> Ralf
> 
> 
> On Sat, 20 Mar 2021 at 19:01, Ralf Groß <ralf.gross+xfs@xxxxxxxxx> wrote:
> > 
> > Hi,
> > 
> > I plan to deploy a couple of Linux (RHEL 8.x) servers as Veeam backup
> > repositories. The base for this might be a high-density server with
> > 58 x 16TB disks in 2x RAID 60, each with its own RAID controller and
> > 28 disks. So each RAID 6 has 14 disks, plus 2 global spares.
> > 
> > I wonder what memory requirements such a server would have, and
> > whether there is any special requirement regarding reflinks. I
> > remember that xfs_repair has been a problem in the past, but my
> > experience with this is from 10 years ago. Currently I plan to use
> > 192GB of RAM, which would be perfect as it utilizes 6 memory
> > channels and 16GB DIMMs are not so expensive.
> > 
> > Thanks - Ralf




