Re: 2GB memory limit running fsck on a +6TB device

On Mon, Jun 09, 2008 at 07:33:48PM +0200, santi@xxxxxxxxxxxx wrote:
> It's a backup storage server, with more than 113 million files; this is the
> output of "df -i":
> 
> It appears that fsck is trying to use more than 2GB of memory to store the
> inode table relationships.  The system has 4GB of physical RAM and 4GB of
> swap.  Is there any way to limit the memory used by fsck, or any other
> solution to check this filesystem?  Will running fsck with a 64-bit LiveCD
> solve the problem?

Yes, running with a 64-bit Live CD is one way to solve the problem.
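
For instance, once the live environment has booted you can quickly confirm
that it really is 64-bit before kicking off the long check:

# should print x86_64 (or another 64-bit architecture name)
uname -m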

If you are using e2fsprogs 1.40.10, there is another solution that may
help.  Create an /etc/e2fsck.conf file with the following contents:

[scratch_files]
	directory = /var/cache/e2fsck

...and then make sure /var/cache/e2fsck exists by running the command
"mkdir /var/cache/e2fsck".

This will cause e2fsck to store, in /var/cache/e2fsck instead of in
memory, certain data structures that grow very large on backup servers
with a vast number of hard-linked files.  This will slow e2fsck down by
approximately 25%, but for large filesystems where e2fsck could not
otherwise complete because it exhausts the 2GB per-process virtual
memory limit of 32-bit systems, it should allow the check to run
through to completion.
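
Putting the pieces together, the whole procedure might look roughly like
this from a root shell (the device name /dev/sdb1 below is only a
placeholder for your actual backup volume, and note that "cat >" will
overwrite any existing /etc/e2fsck.conf):

# create the directory e2fsck will use for its scratch files
mkdir -p /var/cache/e2fsck

# point e2fsck at it via /etc/e2fsck.conf
cat > /etc/e2fsck.conf <<'EOF'
[scratch_files]
	directory = /var/cache/e2fsck
EOF

# run the check; -f forces a full check even if the filesystem looks
# clean, and -C 0 prints a progress bar so you can tell that the
# (now slower) check is still making headway
e2fsck -f -C 0 /dev/sdb1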

					- Ted

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users
