Re: memory requirements for a 400TB fs with reflinks

On Tue, Mar 23, 2021 at 10:39:27AM +0100, Ralf Groß wrote:
> Hi Dave,
> 
> > People are busy, and you posted on a weekend. Have some patience,
> > please.
> 
> I know, sorry for that.
> 
> > ....
> >
> > So there's no one answer - the amount of RAM xfs_repair might need
> > largely depends on what you are storing in the filesystem.
> 
> I just checked our existing backups repositories. The backup files are
> VMware image backup files, daily ones are smaller, weekly/GFS larger.
> But there are not millions of smaller files. For primary backup there
> are ~25.000 files in 68 TB of a 100 TB share, for a new repository
> with a 400 TB fs this would result in ~150.000 files. For the
> secondary copy repository I see 3000 files in a 100 TB share. This
> would result in ~200.000 files in a 700 TB repository. Is there
> any formula to calculate the memory requirement for a number of files?

Worst-case static data indexing memory usage can be reported by
xfs_repair itself by abusing verbose reporting and memory limiting.
For a 500TB filesystem with 50 million zero-length files in it:

# xfs_repair -n -vvv -m 1 /dev/vdc
Phase 1 - find and verify superblock...
        - reporting progress in intervals of 15 minutes
        - max_mem = 1024, icount = 51221120, imem = 200082, dblock = 134217727500, dmem = 65535999
Required memory for repair is greater that the maximum specified
with the -m option. Please increase it to at least 64244.
#

That says the worst case is going to need "dmem = 65535999" to index
the space usage. That's 64GB of RAM. The inode-based requirement is
"imem = 200082", another ~200MB for indexing 50 million inodes. Of
course, there are the inodes themselves and all the other metadata
that need to be brought into RAM, but those are typically paged in and
out of the buffer cache and are not included in these memory usage
counts.
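
FWIW, from the numbers in that one output you can back out a rough
rule of thumb. This is just inferred from the example above and
assumes imem/dmem are reported in kilobytes (65535999 ~ 64GB), so
treat it as an estimate rather than the actual xfs_repair accounting:

  imem ~= icount / 256   (KB)   i.e. ~4 bytes per inode
  dmem ~= dblock / 2048  (KB)   i.e. ~0.5 bytes per filesystem block

# echo "51221120 / 256" | bc
200082
# echo "134217727500 / 2048" | bc
65535999

With inode counts as low as yours the imem term is basically noise;
the dmem term is what dominates at these filesystem sizes. Running
"xfs_repair -n -vvv -m 1" against the real filesystem remains the
authoritative way to get the numbers for your geometry.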

So for a 500TB filesystem with minimal metadata and large contiguous
files as you describe, you're probably only going to need a few GB of
RAM to repair it. Of course, if things get broken, then you should
plan for the worst-case minimums as described by xfs_repair above...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


