Re: [xfs_check Out of memory: ]

On Friday 27 of December 2013, Dave Chinner wrote:
> On Fri, Dec 27, 2013 at 09:07:22AM +0100, Arkadiusz Miśkiewicz wrote:
> > On Friday 27 of December 2013, Jeff Liu wrote:
> > > On 12/27 2013 14:48 PM, Stor?? wrote:
> > > > Hey:
> > > > 
> > > > 20T xfs file system
> > > > 
> > > > 
> > > > 
> > > > /usr/sbin/xfs_check: line 28: 14447 Killed
> > > > xfs_db$DBOPTS -i -p xfs_check -c "check$OPTS" $1
> > > 
> > > xfs_check is deprecated; please use xfs_repair -n instead.
> > > 
> > > The following back traces show us that your system seems to have run
> > > out of memory while executing xfs_check, thus the snmp daemon/xfs_db
> > > were killed.
> > 
> > This reminds me a question...
> > 
> > Could xfs_repair store its temporary data (some of that data, the biggest
> > part) on disk instead of in memory?
> 
> Where on disk? 

In a directory/file that I'll tell it to use (since I usually have a few xfs 
filesystems on a single server and so far only one breaks at a time).

> We can't write to the disk until we've verified all
> the free space is really free space, and guess what uses all the
> memory? Besides, if the information is not being referenced
> regularly (and it usually isn't), then swap space is about as
> efficient as any database we might come up with...

It's not about efficiency. It's about not killing the system (by not eating 
all memory and triggering OOM). If I can (optionally) trade repair speed for 
not eating RAM, that's sometimes desirable. Better a slow repair than no 
repair 8)

Could xfs_repair perhaps tell the kernel that this data should always end up 
on swap first (allowing other programs/daemons to use regular memory)? (I 
don't know of a kernel interface that would allow that, though.) That would 
be a half-baked solution anyway.
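
As a stop-gap I can confine repair from the outside. A rough sketch, assuming 
the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory and that 
enough swap is configured (the group name, the 8G cap and /dev/sdX are only 
examples):

  mkdir /sys/fs/cgroup/memory/xfsrepair
  echo 8G > /sys/fs/cgroup/memory/xfsrepair/memory.limit_in_bytes
  # put the current shell into the group; children (xfs_repair) inherit it
  echo $$ > /sys/fs/cgroup/memory/xfsrepair/tasks
  # anonymous pages above the cap get reclaimed to swap instead of pushing
  # other services into the OOM killer
  xfs_repair /dev/sdX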

> > I don't know if that would make sense, so I'm asking. Not sure if
> > xfs_repair needs to access that data frequently (so on disk makes no
> > sense) or maybe it needs it only for iteration purposes in some later
> > phase (so on disk should work).
> > 
> > Anyway, memory usage of xfs_repair was always a problem for me (like
> > 16GB not being enough for a 7TB fs due to the huge number of files
> > being stored). With the parallel scan it's even worse, obviously.
> 
> Yes, your problem is that the filesystem you are checking contains
> 40+GB of metadata and a large amount of that needs to be kept in
> memory from phase 3 through to phase 6.

Is that data (or most of it) frequently accessed? Or is it something that's 
iterated over, let's say, once in each phase?


Anyway current "fun" with repair and huge filesystems looks like this:
- 16GB of memory, run xfs_repair, system goes into unusable state because 
whole ram is eaten (ends up with OOM); wait several hours
- reboot, add 20GB of swap, run xfs_repair, the same happens again; wait half 
a day
- reboot, add another 20GB of swap space, run xfs repair - success!; wait 
another day
- in all steps system is simply unusable for other services. Nothing else will 
work since entire ram gets eaten by repair. So doesn't help me to have 4 xfs 
filesystems and only one broken - have to shut down all services only for that 
repair to work
- with parallel git repair it is even worse obviously (OOM happens sooner than 
later)
- can't add more RAM easily, machine is at remote location, uses obsolete 
DDR2, have no more ram slots and so on
- total repair time for all that steps is few times longer than neccessary 
(successful repair took 7.5h while all these steps took 2 days)
- what's worse tools give no estimations of ram needed etc but that's afaik 
unfixable. This means that it is not known how much memory will be needed. You 
need to run repair and see. Also if more files gets stored then next repair in 
few monts could require twice more ram. You never know what to expect.
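
For the record, adding a swap file on the fly looks like this - a minimal 
sketch, the path and size are just examples:

  dd if=/dev/zero of=/var/tmp/repair.swap bs=1M count=20480   # ~20GB
  chmod 600 /var/tmp/repair.swap
  mkswap /var/tmp/repair.swap
  swapon /var/tmp/repair.swap
  # ... run xfs_repair ...
  swapoff /var/tmp/repair.swap
  rm /var/tmp/repair.swap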

Now, how to prevent these problems? Currently I see only one "solution": add 
more RAM.

Unfortunately that's not a real solution - it won't work in many of the cases 
described above.

So it looks like my future backup servers will need 64GB, 128GB or maybe even 
more RAM that will be there only for xfs_repair. That's a gigantic waste of 
resources. And there are modern processors that don't work with more than 
32GB of RAM - like the "Intel Xeon E3-1220v2" ( http://tnij.org/tkqas9e ). So 
adding RAM means replacing the CPU and likely the mainboard too. Fun :)

> If you really want to add
> some kind of database interface to store this information somewhere
> else, then I'll review the patches. ;)

Right. So the only "easy" task left is finding someone who understands the 
code and can write such an interface. Anyone?

IMO RAM usage is a real problem for xfs_repair and there has to be some 
upstream solution other than the "buy more" (and waste more) approach.

> Cheers,
> 
> Dave.

-- 
Arkadiusz Miśkiewicz, arekm / maven.pl

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs




