On Wed, Jan 05, 2011 at 12:26:33PM -0700, Andreas Dilger wrote:
>
> How does this change impact the majority of users that are running
> with a journal?  It is clearly a win for a small percentage of users
> with no-journal mode, but it may be a net increase in memory usage
> for the majority of the users (with journal).  There will now be two
> allocations for every inode, and the extra packing these allocations
> into slabs will increase memory usage for an inode, and would
> definitely result in more allocation/freeing overhead.
>
> The main question is how many files are ever opened for write?

Even if we do two allocations for every inode (not just for inodes
opened for write), it's a win simply because moving the jinode out of
the ext4_inode_info structure shrinks it enough that we can now pack
18 inodes into a 16k slab on x86_64.  It turns out that the slab
allocator is fairly inefficient at packing large data structures,
while it handles small ones (such as the jbd2_inode structure) much
more efficiently in terms of wasted memory.

> It isn't just the number of currently-open files for write, because
> the jinfo isn't released until the inode is cleared from memory.
> While I suspect that most inodes in cache are never opened for
> write, it would be worthwhile to compare the ext4_inode_cache object
> count against the jbd2_inode object count, and see how the total
> memory compares to a before-patch system running different workloads
> (with journal).

Sure.  It should be possible to release the jinfo when the file is
completely closed, in ext4_release_file().  That would reduce the
memory footprint significantly.  I hadn't worried about it much
because the jbd2_inode structure is only 48 bytes, and 85 of them fit
on a 4k page with only 16 bytes wasted.  But it's fair that we should
release the jinode once the inode is no longer in use by any file
descriptors.

I'll make the other changes you suggested; thanks!!

					- Ted
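
P.S.  For anyone who wants to check the arithmetic, here is a small
user-space sketch (not kernel code) of the packing numbers above.
The 48-byte jbd2_inode size, the 4k page, and the figure of 18
objects per 16k slab come from this thread; the 910-byte per-object
footprint for ext4_inode_info is only an assumed upper bound derived
from 16384/18, not a measured structure size.

#include <stdio.h>

/* Report how many objects of obj_size fit into slab_size bytes,
 * and how many bytes of the slab are left over. */
static void show_packing(const char *name, unsigned int obj_size,
			 unsigned int slab_size)
{
	unsigned int objs_per_slab = slab_size / obj_size;
	unsigned int wasted = slab_size - objs_per_slab * obj_size;

	printf("%-16s %4u bytes: %3u per %5u-byte slab, %4u bytes wasted\n",
	       name, obj_size, objs_per_slab, slab_size, wasted);
}

int main(void)
{
	/* jbd2_inode: 48 bytes carved out of a 4k page
	 * => 85 objects, 16 bytes wasted */
	show_packing("jbd2_inode", 48, 4096);

	/* ext4_inode_info after the patch: 18 objects per 16k slab
	 * implies a footprint of at most 16384/18 = 910 bytes per
	 * object (assumed for illustration, not measured) */
	show_packing("ext4_inode_info", 910, 16384);

	return 0;
}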