On Tue 17-01-17 21:49:45, Matthew Wilcox wrote:
> 1. Exploiting multiorder radix tree entries. I believe we would do well
> to attempt to allocate compound pages, insert them into the page cache,
> and expect filesystems to be able to handle filling compound pages with
> ->readpage. It will be more efficient because alloc_pages() can return
> large entries out of the buddy list rather than breaking them down,
> and it'll help reduce fragmentation.

Kirill has patches to do this and I don't like the complexity it adds to
the pagecache handling code and to each filesystem that would like to
support it. I don't have objections to the general idea but the
complexity of the current implementation just looks too big to me...

> 2. Supporting filesystem block sizes > page size. Once we do the above
> for efficiency, I think it then becomes trivial to support, eg 16k block
> size filesystems on x86 machines with 4k pages.

Heh, you wish... :) There's a big difference between opportunistically
allocating a huge page and having to reliably provide a high-order page.
Memory fragmentation issues will be difficult to deal with...

								Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR
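
[For readers following along: the gap between the "opportunistic" allocation
Wilcox describes in point 1 and the "reliable" high-order allocation a
block size > page size filesystem would need in point 2 looks roughly like
the sketch below. This is not code from Kirill's patches or from either
proposal; the helper name and the fallback policy are made up for
illustration, though alloc_pages(), __GFP_COMP and __GFP_NORETRY are real
kernel interfaces.]

	#include <linux/gfp.h>
	#include <linux/mm.h>

	/*
	 * Opportunistic: try for a high-order compound page straight
	 * off the buddy list, but fall back to a plain 4k page if
	 * memory is fragmented.
	 */
	static struct page *try_alloc_pagecache_page(gfp_t gfp,
						     unsigned int order)
	{
		struct page *page;

		if (order) {
			/*
			 * __GFP_NORETRY keeps this best-effort: do not
			 * hammer reclaim/compaction just for a
			 * nice-to-have huge page.
			 */
			page = alloc_pages(gfp | __GFP_COMP | __GFP_NORETRY,
					   order);
			if (page)
				return page;
		}
		/* Fallback that point 2 does not have. */
		return alloc_pages(gfp, 0);
	}

[A 16k-block filesystem on 4k pages has no such order-0 fallback: every
page cache fill needs an order-2 page to succeed, which is exactly where
the fragmentation concern above bites.]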