Re: ext4 scaling limits ?


On 21.03.2017 at 22:48, Andreas Dilger wrote:
While it is true that e2fsck does not free memory during operation, in
practice this is not a problem. Even for large filesystems (say 32-48TB)
it will only use around 8-12GB of RAM so that is very reasonable for a
server today.

No, it's not reasonable, even today, for the whole physical machine to expose its total RAM to one of many virtual machines that runs just a Samba server for a 50 TB "data grave" with a handful of users.

In reality it should not be a problem to attach even 100 TB of storage to a VM with 1-2 GB of RAM.

The rough estimate that I use for e2fsck is 1 byte of RAM per block.

Cheers, Andreas

On Mar 21, 2017, at 16:07, Manish Katiyar <mkatiyar@xxxxxxxxx> wrote:

Hi,

I was looking at the e2fsck code to see if there are any limits on running
e2fsck on large ext4 filesystems. From the code it looks like all the
required metadata is kept only in memory while e2fsck is running, and is
flushed to disk only when the corresponding problems are corrected
(except in the undo-file case).
There doesn't seem to be a case/code path where we have to periodically
flush some tracking metadata while it is running just because we have
too much in-core tracking data and may run out of memory (it looks like
the code will simply return failure if ext2fs_get_mem() returns failure).

I'd appreciate it if someone could confirm that my understanding is correct.



