I want to restart the discussion we had last July (!) about an NFS hard read-only mount option. A common use case of union mounts is a cluster with NFS-mounted read-only root file systems, with a local fs union mounted on top. Here's the last discussion we had:

http://kerneltrap.org/mailarchive/linux-fsdevel/2009/7/16/6211043/thread

We can assume a local mechanism that lets the server enforce the read-only-ness of the file system on the local machine (the server can increment sb->s_hard_readonly_users on the local fs and the VFS will take care of the rest). The main question is what to do on the client side when the server changes its mind and wants to write to that file system.

On the server side, there's a clear synchronization point: sb->s_hard_readonly_users needs to be decremented, so we don't have to worry about a hard read-only exported file system going read-write willy-nilly. But the client has to cope with the sudden withdrawal of the read-only guarantee.

A lowest-common-denominator starting point is to treat it as though the mount went away entirely, and force the client to remount and/or reboot. I also have vague ideas about doing something smart with stale file handles and generation numbers to avoid a remount. This looks a little bit like the forced umount patches too, where we could EIO any open file descriptors on the old file system.

How long would it take to implement the dumb "NFS server not responding" version?

-VAL
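P.S. To make the server-side synchronization point concrete, here is a rough sketch of how I imagine the counter working. It compiles standalone as a toy model: the struct layout, the helper names, and the remount check are all made up for illustration (and locking is omitted); only the s_hard_readonly_users field itself is the thing under discussion.

	#include <stdbool.h>

	/* Toy model, not the real VFS struct. */
	struct super_block {
		unsigned long	s_flags;		/* mount flags, e.g. MS_RDONLY */
		int		s_hard_readonly_users;	/* proposed: # of hard r/o holders */
	};

	#define MS_RDONLY 1UL

	/* Server side, e.g. at export time: pin the fs hard read-only. */
	static void sb_get_hard_readonly(struct super_block *sb)
	{
		sb->s_hard_readonly_users++;
		sb->s_flags |= MS_RDONLY;
	}

	/* The synchronization point: a holder drops its pin.  Only after
	 * the count hits zero may anyone flip the fs read-write. */
	static void sb_put_hard_readonly(struct super_block *sb)
	{
		sb->s_hard_readonly_users--;
	}

	/* What the remount path would check before honoring -o remount,rw. */
	static bool sb_may_go_readwrite(struct super_block *sb)
	{
		return sb->s_hard_readonly_users == 0;
	}

The client side is the open question, as above; the analogous sketch there would look like the forced-umount patches, returning -EIO from file operations once the read-only guarantee is withdrawn.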