Re: bigalloc and max file size

On Tue, Nov 01, 2011 at 01:39:34AM +0800, Coly Li wrote:
> In some applications we allocate one big file that occupies most of the space of a file system, and the file system is
> built on (expensive) SSDs. In such a configuration we want fewer blocks allocated for inode tables and bitmaps. If the
> maximum extent length could be much bigger, there is a chance to have far fewer block groups, which leaves more blocks
> for regular file data. The current bigalloc code already does well, but there is still room to do better. The sysadmin
> team believes cluster-based extents can help ext4 consume as little metadata as a raw disk does, and gain as many
> available data blocks as a raw disk does, too. This is a small number on one single SSD, but in our cluster environment
> this effort can help save a noticeable amount of capex.

OK, but you're not running into the 16TB file size limitation, are
you?  That would be a lot of SSDs.  I assume the issue then is that
you want to minimize the number of extents, which is limited by the
15-bit extent length field?
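
For reference, a quick back-of-the-envelope sketch (assuming 4k
blocks, 32-bit logical block numbers, and the 15-bit initialized
extent length encoding; the numbers are illustrative, not a statement
about your setup):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long blksz = 4096;			/* assumed 4k block size */
		unsigned long long max_file = (1ULL << 32) * blksz;	/* 32-bit logical block numbers */
		unsigned long long max_extent = (1ULL << 15) * blksz;	/* 15-bit extent length */

		printf("max file size: %llu TiB\n", max_file >> 40);	/* 16 */
		printf("max extent:    %llu MiB\n", max_extent >> 20);	/* 128 */
		printf("min extents for a max-size file: %llu\n",
		       max_file / max_extent);				/* 131072 */
		return 0;
	}

So even a perfectly contiguous 16TB file needs on the order of 128k
extents with 4k blocks, which presumably is the number you want to
shrink.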

What cluster size are you thinking about?  And how do you plan to
initialize it?  Via fallocate, or by explicitly writing zeros to the
whole file (so all of the blocks are marked as initialized)?  Is it
going to be a sparse file?
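
For what it's worth, a minimal sketch of the fallocate() route (the
path and size below are made up for illustration); with mode 0 the
blocks are allocated but the extents come back marked unwritten, so
nothing has to be zeroed on disk:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		const char *path = "/mnt/ssd/bigfile";	/* hypothetical path */
		off_t len = (off_t)1 << 40;		/* 1 TiB, assumes 64-bit off_t */

		int fd = open(path, O_CREAT | O_RDWR, 0644);
		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* mode 0: allocate and extend i_size; extents are
		 * left unwritten rather than zeroed. */
		if (fallocate(fd, 0, 0, len) < 0) {
			perror("fallocate");
			return 1;
		}

		close(fd);
		return 0;
	}

Writing zeros to the whole file instead (dd or plain write()) would
leave every block initialized, at the cost of actually touching all of
them on the SSD.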

	   	   	      	       - Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

