On Friday 03 April 2009 04:05:07 David Howells wrote:
> Nick Piggin <nickpiggin@xxxxxxxxxxxx> wrote:
>
> > Presumably: at the point where data is needed.
>
> But the point where the data is needed is where filemap.c is waiting on a
> netfs page.

Maybe the sync_page() aop can deal with it. I was thinking of something more
like just having some threads submit the reads and unlock the waiters.

> There's also the problem of recording and pinning the backing page I'm
> waiting for. Currently I can do that by hooking the monitor block into the
> page unlock watching list. If I don't do that, I have to use up yet more
> memory to track those some other way. It's not impossible, but I'd like to
> keep memory usage down.
>
> > Or do you actually have numbers showing a problem if you just read the
> > pages then copy them?
>
> I did, years ago. It wasn't particularly good, but
> fscache_read_or_alloc_pages() was completely synchronous.

Performance wasn't good? Why?

> > If there is a problem, then why doesn't the fscache_read_or_alloc_pages
> > caller do the work itself? Then you get as many threads as you have
> > indivisible work units, so completing some part of the request before
> > another wouldn't gain you anything anyway...
>
> (1) Trond stipulated FS-Cache had to be asynchronous, and it is, as far as
>     I can make it. I still have to invoke bmap() synchronously, though, to
>     find out whether I have a page in the cache to read :-/

Why does it have to be asynchronous? It seems like incredible complexity.
->readpage is called only in synchronous contexts, and ->readpages is not,
but do you even have the netfs request readahead on the backing filesystem?
(Which then presumably tries to do readahead of its own.)

> (2) You lose the advantage of being able to process what you've got whilst
>     the disk is fetching stuff in the background.

This should happen via readahead on the underlying filesystem, shouldn't it?
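
Purely to illustrate the synchronous bmap() probe being discussed, here is a
minimal sketch: it asks the backing filesystem whether the block that would
hold a given page of the cache file is allocated, treating a hole as "not
cached". It assumes, for brevity, that the cache file's block size equals the
page size; cache_page_is_present() is a made-up helper name, not the actual
CacheFiles code, which has to do considerably more work around this.

	#include <linux/fs.h>
	#include <linux/pagemap.h>

	/*
	 * Illustrative only: probe the backing inode for the block backing
	 * page @index.  bmap returning 0 means a hole, i.e. no cached data.
	 * Assumes one page per block, which real code cannot assume.
	 */
	static bool cache_page_is_present(struct inode *backing_inode,
					  pgoff_t index)
	{
		const struct address_space_operations *aops =
			backing_inode->i_mapping->a_ops;
		sector_t block;

		if (!aops->bmap)
			return false;	/* can't tell; treat as a miss */

		block = aops->bmap(backing_inode->i_mapping, index);

		return block != 0;	/* non-zero => data on disk */
	}

The point of contention above is that this probe, however it is written, is
still a synchronous call into the backing filesystem before any read can be
queued.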