Re: NFSv4/pNFS possible POSIX I/O API standards

Sage Weil wrote:
> On Fri, 1 Dec 2006, Trond Myklebust wrote:
> > I'm quite happy with a proposal for a statlite(). I'm objecting to
> > readdirplus() because I can't see that it offers you anything useful.
> > You haven't provided an example of an application which would clearly
> > benefit from a readdirplus() interface instead of readdir()+statlite()
> > and possibly some tools for managing cache consistency.

> Okay, now I think I understand where you're coming from.

> The difference between readdirplus() and readdir()+statlite() is that (depending on the mask you specify) statlite() either provides the "right" answer (a la stat()), or anything that is vaguely "recent." readdirplus() would provide size/mtime from sometime _after_ the initial opendir() call, establishing a useful ordering. So without readdirplus(), you either get readdir()+stat() and the performance problems I mentioned before, or readdir()+statlite() where "recent" may not be good enough.

> Instead of my previous example of process #1 waiting for process #2 to finish and then checking the results with stat(), imagine instead that #1 is waiting for 100,000 other processes to finish, and then wants to check the results (size/mtime) of all of them. readdir()+statlite() won't work, and readdir()+stat() may be pathologically slow.

> Also, it's a tiring and trivial example, but even the 'ls -al' scenario isn't ideally addressed by readdir()+statlite(), since statlite() might return size/mtime from before 'ls -al' was executed by the user. One can easily imagine modifying a file on one host, then doing 'ls -al' on another host and not seeing the effects. If 'ls -al' can use readdirplus(), its overall application semantics can be preserved without hammering large directories in a distributed filesystem.
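To make the cost concrete, here is a minimal userspace sketch in Python of the readdir()+stat() pattern (illustrative only; the calls here are local, but on NFS each stat() typically becomes an over-the-wire GETATTR):

```python
# Sketch of the readdir()+stat() pattern: one attribute call per entry.
# Locally these are cheap syscalls; on a distributed filesystem each
# stat() is a potential server round trip, so 100,000 entries can mean
# 100,000 round trips after the directory listing itself.
import os
import tempfile

def list_with_attrs(path):
    """Return [(name, size, mtime)] plus the number of stat() calls made."""
    results = []
    stat_calls = 0
    for name in os.listdir(path):                # readdir(): names only
        st = os.stat(os.path.join(path, name))   # one stat() per entry
        stat_calls += 1
        results.append((name, st.st_size, st.st_mtime))
    return results, stat_calls

# Small local demo.
with tempfile.TemporaryDirectory() as d:
    for i in range(5):
        with open(os.path.join(d, "f%d" % i), "w") as f:
            f.write("x" * i)
    entries, calls = list_with_attrs(d)
    assert calls == len(entries) == 5            # N entries -> N stat() calls
```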


I think that there are several points which are missing here.

First, readdirplus(), without any sort of caching, is going to be _very_
expensive, performance-wise, for _any_ size directory.  You can see this
by instrumenting any NFS server which already supports the NFSv3 READDIRPLUS
semantics.

Second, the NFS client side readdirplus() implementation is going to be
_very_ expensive as well.  The NFS client does write-behind and all this
data _must_ be flushed to the server _before_ the over the wire READDIRPLUS
can be issued.  This means that the client will have to step through every
inode which is associated with the directory inode being readdirplus()'d
and ensure that all modified data has been successfully written out.  This
part of the operation, for a sufficiently large directory and a sufficiently
large page cache, could take significant time in itself.

These overheads may make this new operation expensive enough that no
applications will end up using it.
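The flush-before-READDIRPLUS cost can be mimicked in userspace: before size/mtime can be trusted, every file with unwritten data under the directory has to be flushed first. A sketch (dirty_files here is a hypothetical list standing in for the client's set of dirty inodes under the directory):

```python
# Sketch of the flush cost: every file with write-behind (buffered)
# data must be written out before the directory-wide attribute fetch,
# one synchronous flush per dirty file.
import os
import tempfile

def flush_then_stat(dirty_files, path):
    """Flush the given open files, then return {name: size} for the dir."""
    for f in dirty_files:        # the client stepping through every inode
        f.flush()                #  associated with the directory...
        os.fsync(f.fileno())     #  ...and forcing modified data out
    return {name: os.path.getsize(os.path.join(path, name))
            for name in os.listdir(path)}

with tempfile.TemporaryDirectory() as d:
    f = open(os.path.join(d, "out"), "w")
    f.write("hello")             # buffered: not yet visible to stat()
    sizes = flush_then_stat([f], d)
    f.close()
    assert sizes["out"] == 5     # only after the flush is the size accurate
```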

> > I agree that an interface which allows a userland process to offer hints
> > to the kernel as to what kind of cache consistency it requires for file
> > metadata would be useful. We already have stuff like posix_fadvise() etc.
> > for file data, and perhaps it might be worth looking into how you could
> > devise something similar for metadata.
> > If what you really want is for applications to be able to manage network
> > filesystem cache consistency, then why not provide those tools instead?
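For reference, the data-side hint mentioned above is already usable from userspace; in Python on Linux it looks like the sketch below. The metadata analogue being suggested does not exist, so this only shows the shape such an interface might take:

```python
# posix_fadvise() lets an application hint how it will use file *data*;
# the suggestion above is an analogous, as-yet-nonexistent hint for
# *metadata* cache consistency.
import os

def advise_sequential(fd):
    """Issue a data-side access hint; returns False where unsupported."""
    if not hasattr(os, "posix_fadvise"):   # Linux-only in practice
        return False
    # Purely advisory: "this file will be read sequentially".
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    return True
```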

> True, something to manage the attribute cache consistency for statlite() results would also address the issue by letting an application declare how weak its results are allowed to be. That seems a bit more awkward, though, and would only affect statlite()--the only call that allows weak consistency in the first place. In contrast, readdirplus maps nicely onto what filesystems like NFS are already doing over the wire.
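To illustrate the statlite() idea, a toy sketch (all names here are invented for illustration; the real proposal's structures and flag names may differ): the mask declares which fields must be accurate, and anything else may be served from a possibly-stale cache.

```python
# Toy model of statlite(): the litemask says which attributes the
# caller needs to be accurate; unmasked attributes may be stale.
# Flag names are invented for illustration.
import os

SLITE_SIZE  = 1 << 0     # st_size must be accurate
SLITE_MTIME = 1 << 1     # st_mtime must be accurate

def statlite(path, litemask, attr_cache):
    """Return (size, mtime); with litemask == 0 a cached answer suffices."""
    cached = attr_cache.get(path)
    if cached is not None and litemask == 0:
        return cached                      # possibly stale, but allowed
    st = os.stat(path)                     # stand-in for an accurate fetch
    fresh = (st.st_size, st.st_mtime)
    attr_cache[path] = fresh
    return fresh
```

With litemask == 0 this gives statlite()-style weak consistency; any nonzero mask forces the accurate (expensive) path, which is exactly the all-or-nothing choice readdirplus() is meant to sit between.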

Speaking of applications, how many applications are there in the world,
or even being contemplated, which are interested in a directory of
files and whether or not this set of files has changed from the previous
snapshot of the set of files?  Most applications deal with one or two
files on such a basis, not multitudes.  In fact, having worked with
file systems and NFS in particular for more than 20 years now, I have
yet to hear of one.  This is a lot of work and complexity for very
little gain, I think.

Is this not a problem which would be better solved at the application level?
Or perhaps finer granularity than "noac" for the NFS attribute caching?
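For what it's worth, the Linux NFS client already exposes some granularity between full caching and "noac" (a configuration sketch; exact option support depends on the client):

```shell
# NFS mount options controlling attribute-cache staleness:
#   noac                 disable attribute caching (and force synchronous writes)
#   actimeo=N            set all four attribute timeouts to N seconds
#   acregmin=/acregmax=  min/max attribute lifetime, regular files
#   acdirmin=/acdirmax=  min/max attribute lifetime, directories
#
# e.g. cap attribute staleness at one second:
mount -t nfs -o actimeo=1 server:/export /mnt
```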

   Thanx...

      ps
-
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
