Re: NFSv4/pNFS possible POSIX I/O API standards

Matthew Wilcox wrote:
On Tue, Dec 05, 2006 at 10:07:48AM +0000, Christoph Hellwig wrote:
The filehandle idiocy on the other hand is way off into crackpipe land.

Right, and it needs to be discarded.  Of course, there was a real
problem that it addressed, so we need to come up with an acceptable
alternative.

The scenario is a cluster-wide application doing simultaneous opens of
the same file.  So thousands of nodes all hitting the same DLM locks
(for read) all at once.  The openg() non-solution implies that all
nodes in the cluster share the same filehandle space, so I think a
reasonable solution can be implemented entirely within the clusterfs,
with an extra flag to open(), say O_CLUSTER_WIDE.  When the clusterfs
sees this flag set (in ->lookup), it can treat it as a hint that this
pathname component is likely to be opened again on other nodes and
broadcast that fact to the other nodes within the cluster.  Other nodes
on seeing that hint (which could be structured as "The child "bin"
of filehandle e62438630ca37539c8cc1553710bbfaa3cf960a7 has filehandle
ff51a98799931256b555446b2f5675db08de6229") can keep a record of that fact.
When they see their own open, they can populate the path to that file
without asking the server for extra metadata.

There's obviously security issues there (why I say 'hint' rather than
'command'), but there's also security problems with open-by-filehandle.
Note that this solution requires no syscall changes, no application
changes, and also helps a scenario where each node opens a different
file in the same directory.

I've never worked on a clusterfs, so there may be some gotchas (eg, how
do you invalidate the caches of nodes when you do a rename).  But this
has to be preferable to open-by-fh.
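Before responding, let me make sure I'm reading the hint mechanism right. The record being broadcast would presumably carry something like the struct below; the names are invented purely for illustration, since no such interface exists today:

/* Invented names; neither O_CLUSTER_WIDE nor this hint message exists. */
#include <stdint.h>

#define FH_MAX 64                   /* assumed bound on handle size */

struct clusterfs_fh {
    uint32_t      len;
    unsigned char data[FH_MAX];     /* opaque, server-issued file handle */
};

/* What ->lookup() would push to peer nodes on seeing the hint:
 * "child NAME of PARENT has handle CHILD". */
struct clusterfs_lookup_hint {
    struct clusterfs_fh parent;     /* e.g. e62438630ca3... above */
    char                name[256];  /* pathname component, e.g. "bin" */
    struct clusterfs_fh child;      /* e.g. ff51a9879993... */
};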

The openg() solution has the following advantages over what you propose. First, it places the burden of communicating the file handle on the application process, not the file system; that means less work for the file system. Second, it does not require that clients respond to unexpected network traffic. Third, the network traffic is deterministic -- one client interacts with the file system and then explicitly performs the broadcast. Fourth, it does not require that the file system store additional state on clients.
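To make the pattern concrete, here is a minimal sketch of the intended use. The openg()/openfh() prototypes are only my guess at the shape of the calls; what matters is that the application, not the file system, moves the handle around:

/* Sketch only: openg()/openfh() are proposed calls, and these prototypes
 * are an assumed shape for them, not an existing or final interface. */
#include <mpi.h>
#include <fcntl.h>
#include <sys/types.h>

#define FH_MAX 64   /* assumed upper bound on the opaque handle size */

ssize_t openg(const char *path, int oflag, mode_t mode,
              void *handle, size_t handle_len);
int     openfh(const void *handle, size_t handle_len);

int open_on_all_ranks(const char *path, MPI_Comm comm)
{
    char handle[FH_MAX];
    int  rank;

    MPI_Comm_rank(comm, &rank);

    if (rank == 0) {
        /* Exactly one process touches the file system name space. */
        openg(path, O_RDONLY, 0, handle, sizeof(handle));
    }

    /* The application performs the broadcast explicitly. */
    MPI_Bcast(handle, sizeof(handle), MPI_BYTE, 0, comm);

    /* Every process gets its own fd without another path resolution. */
    return openfh(handle, sizeof(handle));
}

Error handling is omitted; the point is that one lookup plus one collective replaces thousands of independent lookups hitting the same DLM locks.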

In the O_CLUSTER_WIDE approach, a naive implementation (everyone passing the flag) would likely cause a storm of network traffic if clients were closely synchronized, which they are likely to be. We could work around this by having one process open early, then barrier, then have everyone else open; but at that point we might as well have sent the handle itself as the barrier operation, and we have made the O_CLUSTER_WIDE open() significantly more complicated for the application to use.
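That workaround would look something like the sketch below (O_CLUSTER_WIDE is of course hypothetical, and the flag value is invented). Note that the barrier is still there; it just no longer carries the one piece of data we actually wanted to ship:

/* Sketch of the workaround, assuming a hypothetical O_CLUSTER_WIDE flag. */
#include <mpi.h>
#include <fcntl.h>

#ifndef O_CLUSTER_WIDE
#define O_CLUSTER_WIDE 0x40000000   /* invented value for illustration */
#endif

int open_with_hint(const char *path, MPI_Comm comm)
{
    int rank, fd = -1;

    MPI_Comm_rank(comm, &rank);

    if (rank == 0) {
        /* One early open plants the hint on the other nodes. */
        fd = open(path, O_RDONLY | O_CLUSTER_WIDE);
    }

    /* Everyone else must wait until the hint has propagated... */
    MPI_Barrier(comm);

    /* ...and then still performs its own open(). */
    if (rank != 0)
        fd = open(path, O_RDONLY);

    return fd;
}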

However, the application change issue is actually moot; we will make whatever changes are needed inside our MPI-IO implementation, and many users will get the benefit for free.

The readdirplus(), readx()/writex(), and openg()/openfh() calls were all designed to let applications state exactly what they want and to allow for explicit communication. I understand that there is a tendency toward solutions where the FS guesses what the application is going to do, or is passed a hint (e.g. fadvise) about what is going to happen, because these things don't require interface changes. But those solutions just aren't as effective as actually spelling out what the application wants.
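Here is the contrast in miniature, using posix_fadvise() as the hint-style interface. The readx() prototype and struct xtvec below are an assumed shape for the proposed call, not its final form:

/* posix_fadvise() is a real interface; readx() and struct xtvec are an
 * assumed shape for the proposed call, used here only for contrast. */
#include <fcntl.h>
#include <sys/uio.h>

struct xtvec {                  /* a file region: offset plus length */
    off_t  xtv_off;
    size_t xtv_len;
};

ssize_t readx(int fd, const struct iovec *iov, size_t iovcnt,
              const struct xtvec *xtv, size_t xtvcnt);

void hint_style(int fd)
{
    /* Hint: tell the FS we will probably want these regions and hope
     * its readahead fills in the rest correctly. */
    posix_fadvise(fd, 0,              1 << 20, POSIX_FADV_WILLNEED);
    posix_fadvise(fd, (off_t)1 << 30, 1 << 20, POSIX_FADV_WILLNEED);
}

ssize_t explicit_style(int fd, char *buf)
{
    /* Explicit: one call that says exactly which file regions go to
     * which memory, leaving nothing for the file system to guess. */
    struct iovec iov[2] = {
        { .iov_base = buf,             .iov_len = 1 << 20 },
        { .iov_base = buf + (1 << 20), .iov_len = 1 << 20 },
    };
    struct xtvec xtv[2] = {
        { .xtv_off = 0,              .xtv_len = 1 << 20 },
        { .xtv_off = (off_t)1 << 30, .xtv_len = 1 << 20 },
    };
    return readx(fd, iov, 2, xtv, 2);
}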

Regards,

Rob
