Re: [xfs_check Out of memory: ]

On 12/27/2013 5:20 PM, Arkadiusz Miśkiewicz wrote:
...
> - can't add more RAM easily, machine is at remote location, uses obsolete 
> DDR2, have no more ram slots and so on
...
> So looks like my future backup servers will need to have 64GB, 128GB or maybe 
> even more ram that will be there only for xfs_repair usage. That's gigantic 
> waste of resources. And there are modern processors that don't work with more 
> than 32GB of ram - like "Intel Xeon E3-1220v2" ( http://tnij.org/tkqas9e ). So 
> adding ram means replacing CPU, likely replacing mainboard. Fun :)
..
> IMO ram usage is a real problem for xfs_repair and there has to be some 
> upstream solution other than "buy more" (and waste more) approach.

The problem isn't xfs_repair.  The problem is that you expect this tool
to handle an arbitrarily large number of inodes while staying within
whatever amount of memory you happen to have installed.  We don't see
your problem reported very often, which suggests your situation is a
corner case, or that others simply size their systems appropriately and
don't complain.

If you'd like realistic advice on how to solve this today, rather than
waiting for the devs to recode xfs_repair with the single goal of using
less memory, here are your options:

1.  Rework your workload so it doesn't create so many small files,
    and thus so many inodes, e.g. use a database
2.  Add more RAM to the system
3.  Add an SSD of sufficient size/speed for swap duty to handle
    xfs_repair requirements for filesystems with arbitrarily high
    inode counts

Your quickest, cheapest, and all-encompassing solution to this problem
today is #3.  It removes the need to size the RAM on each machine to
meet the needs of xfs_repair for an arbitrary number of inodes, as
you'll always have more than enough swap.  And it is likely less
expensive than adding/replacing DIMMs.  The fastest random read/write
IOPS SSD on the market is the Samsung 840 Pro, which runs ~$1/GB in the
States, about $130 for a 128GB unit.  It carries a 5-year warranty and
sustains ~90K random 4KB read/write IOPS.

Create a 100GB swap partition and leave the remainder unallocated.  The
unallocated space will automatically be used for GC and wear leveling,
increasing the life of all cells in the drive.
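
As a rough sketch (assuming the new SSD shows up as /dev/sdb; verify
the device name before running any of this), the setup is only a few
commands:

  # one 100GB partition for swap, rest of the drive left unallocated
  parted -s /dev/sdb mklabel gpt
  parted -s /dev/sdb mkpart swap linux-swap 1MiB 100GiB

  # initialize and enable the swap space
  mkswap /dev/sdb1
  swapon /dev/sdb1

  # make it persistent across reboots
  echo '/dev/sdb1  none  swap  sw  0  0' >> /etc/fstab

After that you run xfs_repair as usual; once RAM fills up the kernel
pages out to the SSD instead of the OOM killer shooting the process.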

That the systems are remote and that you have no more DIMM slots are
not good arguments to make in this context.  Every system will
eventually require some type of hardware addition/replacement/maintenance.
And this is not the first software "problem" that requires more hardware
to solve.  If the application that creates these millions of files
needed twice as much RAM, forcing an upgrade, would you be complaining
this way on that project's mailing list?  If so, I'd suggest the problem
lies somewhere other than in xfs_repair or that application.

-- 
Stan

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs




