On Tue, 10 Jul 2007, Nick Piggin wrote:

> > Hmmm.... I did not notice that yet but then I have not done much work
> > there.
>
> Notice what?

The bad code for the buffer heads.

> > > - A real "nobh" mode. nobh was created I think mainly to avoid problems
> > > with buffer_head memory consumption, especially on lowmem machines. It
> > > is basically a hack (sorry), which requires special code in filesystems,
> > > and duplication of quite a bit of tricky buffer layer code (and bugs).
> > > It also doesn't work so well for buffers with non-trivial private data
> > > (like most journalling ones). fsblock implements this with basically a
> > > few lines of code, and it should work in situations like ext3.
> >
> > Hmmm.... That means simply page struct are not working...
>
> I don't understand you. jbd needs to attach private data to each bh, and
> that can stay around for longer than the life of the page in the pagecache.

Right. So just using page structs alone won't work for the filesystems.

> There are no changes to the filesystem API for large pages (although I
> am adding a couple of helpers to do page based bitmap ops). And I don't
> want to rely on contiguous memory. Why do you think handling of large
> pages (presumably you mean larger than page sized blocks) is strange?

We already have a way to handle large pages: compound pages.

> Conglomerating the constituent pages via the pagecache radix-tree seems
> logical to me.

Meaning the overhead of handling each page still exists? This scheme cannot
handle large contiguous blocks as a single entity?