On Tue, Nov 13, 2007 at 12:30:05AM +0000, David Howells wrote:
> Nick Piggin <npiggin@xxxxxxx> wrote:
>
> > PAGE_CACHE_SIZE should be used to address the pagecache.
>
> Perhaps, but the function being called from there takes pages not page cache
> slots. If I have to allow for PAGE_CACHE_SIZE > PAGE_SIZE then I need to
> modify my code, if not then the assertion needs to remain what it is.

It takes a pagecache page, yes. If you follow convention, you use
PAGE_CACHE_SIZE for that guy. You don't have to allow
PAGE_CACHE_SIZE != PAGE_SIZE, and if all the rest of your code is in units
of PAGE_SIZE, then obviously my changing of just the one unit is even more
confusing than the current arrangement ;)

> > > I notice you removed the stuff that clears holes in the page to be
> > > written. Is this is now done by the caller?
> >
> > It is supposed to bring the page uptodate first. So, no need to clear
> > AFAIKS?
>
> Hmmm... I suppose. However, it is wasteful in the common case as it is then
> bringing the page up to date by filling/clearing the whole of it and not just
> the bits that are not going to be written.

Yes, that's where you come in. You are free (and encouraged) to optimise
this. Let's see, for a network filesystem this is what you could do:

- if the page is not uptodate at write_begin time, do not bring it
  uptodate (at least, not the region that is going to be written to)

- if the page is not uptodate at write_end time, but the copy was fully
  completed, just mark it uptodate (provided you brought the regions
  outside the copy uptodate).

- if the page is not uptodate and you have a short copy, simply do not
  mark the page uptodate or dirty, and return 0 from write_end,
  indicating that you have committed 0 bytes. The generic code should
  DTRT.
Or you could: pass back a temporary (not pagecache) page in *pagep, and
copy that yourself into the _real_ pagecache page at write_end time, so
you know exactly how big the copy will be (this is basically what the
2copy method does now... it is probably not as good as the first method
I described, but for a high latency filesystem it may be preferable to
always bringing the page uptodate).

Or: keep track of sub-page dirty / uptodate status eg. with a light
weight buffer_head like structure, so you can actually have partially
dirty pages that are not completely uptodate.

Or... if you can think of another way...

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html