On Wed, Oct 16, 2013 at 03:32:00PM +0200, Stefanita Rares Dumitrescu wrote:
> Quick update:
>
> The xfsprogs from the centos6 yum are newer and they don't use that
> much memory, however i got 2 segfaults and the process stopped.
>
> I cloned the xfsprogs git and i am running it now with the new 15 gb
> swap that i created, and this is a monster in memory usage.
>
> Pretty big discrepancy.

Not if the centos 6 version is segfaulting before it gets to the
stage that consumes all the memory.

From your subsequent post, you have 76 million inodes in the
filesystem. If xfs_repair has to track all those inodes as part of
the recovery (e.g. you lost the root directory), then it has to
index them all in memory.

Most people have no idea how much disk space this amount of metadata
consumes, and hence why xfs_repair might run out of memory. For
example, a newly created 100TB filesystem with 50 million zero
length files in it consumes 28GB of space in metadata. You've got
50% more inodes than that, so xfs_repair is probably walking in
excess of 40GB of metadata in your filesystem.

If a significant portion of that metadata is corrupt, then repair
needs to hold both the suspicious metadata and a cross reference
index in memory to be able to rebuild it all. Hence when you have
tens of gigabytes of metadata, xfs_repair can need tens of GB of
RAM to be able to repair it.

There's simply no easy way around this.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
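[Editorial note: the scaling argument above can be sketched as a quick
back-of-the-envelope calculation. The per-inode byte figure is derived
from the 28GB-per-50M-files example in the post; it is an assumption
for illustration, not a fixed XFS constant.]

```python
# Back-of-the-envelope estimate of how much metadata xfs_repair may
# have to walk, scaled linearly from the worked example in the post:
# 50 million zero-length files -> ~28 GB of metadata.

def metadata_estimate_gb(inodes, bytes_per_inode=28e9 / 50e6):
    """Scale the 28GB / 50M-inodes data point linearly.

    bytes_per_inode (~560 bytes) bundles the inode itself plus
    directory entries and btree overhead -- an assumption derived
    from the example, not an XFS on-disk constant.
    """
    return inodes * bytes_per_inode / 1e9

# 76 million inodes, as reported for the damaged filesystem:
print(round(metadata_estimate_gb(76e6), 1))  # ~42.6 GB
```

That ~42.6GB is where the "in excess of 40GB of metadata" figure comes
from, and repair may need a comparable amount of RAM on top of that for
its cross-reference index.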