Re: large fs testing

On May 26, 2009  13:47 -0400, Ric Wheeler wrote:
> These runs were without lazy init, so I would expect to be a little more 
> than twice as slow as your second run (not the three times I saw) 
> assuming that it scales linearly.

Making lazy_itable_init the default formatting option for ext4 is (or
was) dependent on the kernel zeroing the inode table blocks at first
mount time.  I'm not sure whether that has been implemented yet.
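
If mke2fs goes that route, enabling it at format time would look roughly
like the following (a sketch from memory; the exact extended-option
syntax and /dev/sdX are assumptions, so check the mke2fs man page for
your e2fsprogs version):

  # skip zeroing the inode tables at format time; this needs the
  # uninit_bg feature and leaves the zeroing to be done later
  mke2fs -t ext4 -E lazy_itable_init=1 /dev/sdX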

> This run was with limited DRAM on the 
> box (6GB) and only a single HBA, but I am afraid that I did not get any 
> good insight into what was the bottleneck during my runs.

For a very large array (80TB) this could be 1TB or more of inode tables
being zeroed out at format time.  Above 64TB the default mke2fs options
cap out at 4B inodes in the filesystem.  1TB / 90min ~= 200MB/s, so this
is probably your bottleneck.
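
The back-of-the-envelope numbers, assuming the default 256-byte ext4
inode size (the inode size is an assumption about this particular
format run):

  # 2^32 inodes * 256 bytes each works out to 1 TiB of inode tables
  echo '2^32 * 256 / 2^40' | bc
  # zeroing that over the ~90 minute format time averages ~200MB/s
  echo '2^40 / (90 * 60) / 10^6' | bc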

> Do you have any access to even larger storage, say the mythical 100TB :-) 
> ? Any insight on interesting workloads?

I would definitely be most interested in e2fsck performance at this scale
(RAM usage and elapsed time), because that will ultimately be the limit
on how large a usable filesystem can be in practice.
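
The sort of run I have in mind is just a forced read-only check wrapped
in GNU time to capture peak RSS and wall-clock time (a sketch; /dev/sdX
is a placeholder for the real device):

  # -f forces a full check, -n keeps it read-only; GNU time -v reports
  # "Maximum resident set size" and elapsed wall-clock time
  /usr/bin/time -v e2fsck -fn /dev/sdX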

Cheers, Andreas
--
Andreas Dilger
Sr. Staff Engineer, Lustre Group
Sun Microsystems of Canada, Inc.

