On Nov 14, 2006 14:41 +0100, Ihar `Philips` Filipau wrote:
> The more I think about it, the more I'm convinced that some sort of
> compromise is required, e.g. a file system with two or more cluster
> sizes: for example 4k for small/medium files and a 64k+ cluster for
> large files. Files of 100+MB are not that rare anymore (home
> video/audio processing is now more affordable than ever). But on the
> other hand, tiny files like those found in /etc or ~/.kde/* are not
> going to disappear anytime soon.

Well, the current plan is that the new allocator (mballoc + delalloc)
from Alex Tomas will do efficient in-memory allocation of many
contiguous blocks, and the extents format will allow efficient on-disk
storage of many contiguous blocks, so the benefit of a larger cluster
size in the disk format is minimal.

Essentially, delaying the disk allocation until a file is large or
complete (delalloc) and then using a buddy allocator in memory to get
contiguous chunks of disk is better than a hard 64k+ cluster because
it avoids internal fragmentation and allows much more optimal and
efficient placement than a mere factor-of-16 reduction in the block
count.
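To make the internal-fragmentation argument concrete, here is a toy
user-space sketch (my own illustration, not ext4's actual mballoc
code; the BLOCK and CLUSTER constants are assumptions for the
example). It compares the tail waste of a hard 64k cluster against 4k
blocks, and counts how few power-of-two extents a buddy-style
allocator would need once delalloc knows the final file size:

/*
 * Toy illustration only (not ext4's mballoc): compare the internal
 * fragmentation of a hard 64k cluster against 4k blocks handed out
 * as a few large power-of-two runs, the way a buddy-style allocator
 * would once delalloc knows the final file size.
 */
#include <stdio.h>

#define BLOCK   4096UL      /* assumed base block size */
#define CLUSTER 65536UL     /* hypothetical hard cluster size */

/* Round len up to a multiple of unit and report the wasted tail. */
static unsigned long waste(unsigned long len, unsigned long unit)
{
    unsigned long used = ((len + unit - 1) / unit) * unit;
    return used - len;
}

/*
 * Greedy power-of-two decomposition of nblocks, as a buddy allocator
 * would satisfy it: one extent per set bit in the block count.
 */
static int buddy_extents(unsigned long nblocks)
{
    int extents = 0;
    while (nblocks) {
        nblocks &= nblocks - 1;  /* clear the lowest set bit */
        extents++;
    }
    return extents;
}

int main(void)
{
    unsigned long sizes[] = { 100, 5000, 70000, 100UL * 1024 * 1024 + 1 };

    for (unsigned i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
        unsigned long len = sizes[i];
        unsigned long nblk = (len + BLOCK - 1) / BLOCK;

        printf("%10lu bytes: 64k-cluster waste %5lu, "
               "4k-block waste %4lu, buddy extents %d\n",
               len, waste(len, CLUSTER), waste(len, BLOCK),
               buddy_extents(nblk));
    }
    return 0;
}

For a 100+MB file this prints on the order of four buddy extents with
less than one block of tail waste, while a tiny /etc-style file wastes
most of a hard 64k cluster, which is the trade-off described above.

Cheers, Andreas
--
Andreas Dilger
Principal Software Engineer
Cluster File Systems, Inc.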