Bruce,

----- Original Message -----
> From: "bfields" <bfields@xxxxxxxxxxxx>
>> 1. auto-fsidnum
>> In this mode mountd/exportd will create a new numerical fsid
>> for an NFS volume or subvolume. The numbers are stored in a database
>> such that the server will always use the same fsid.
>> The fsid= entry in the exports file can be omitted, but stating
>> a UUID is allowed, if needed.
>>
>> This mode has the obvious downside that load balancing is not
>> possible, since multiple re-exporting NFS servers would generate
>> different ids.
>
> This is the one I think it makes sense to concentrate on first. Ideally
> it should Just Work without requiring any configuration.

Agreed.

> And then eventually my hope is that we could replace sqlite by a
> distributed database to get filehandles that are consistent across
> multiple servers.

Sure. I see at least two options here:

a. Allow multiple SQL backends in nfs-utils. SQLite by default, but
   also remote MariaDB or Postgres...

b. Place the SQLite database on a shared file system that is capable
   of file locks. That way we can use SQLite as-is. We just need to
   handle the SQLITE_LOCKED case in the code (rough sketch at the end
   of this mail). Luckily, writes happen seldom, so this shouldn't be
   a big deal.

>> 2. predefined-fsidnum
>> This mode works just like auto-fsidnum but does not generate ids
>> for you. It helps in the load balancing case. A system administrator
>> has to manually maintain the database and install it on all
>> re-exporting NFS servers. If you have a massive number of subvolumes,
>> this mode will help because you don't have to bloat the exports list.
>
> OK, I can see that being sort of useful but it'd be nice if we could
> start with something more automatic.
>
>> 3. remote-devfsid
>> If this mode is selected, mountd/exportd will derive a UUID from the
>> re-exported NFS volume's fsid (RFC 7530, section 5.8.1.9).
>
> How does the server take a filehandle with a UUID in it and map that
> UUID back to the original fsid?

knfsd does not need the original fsid. All it sees is the UUID. If it
needs to know which export a UUID belongs to, it asks mountd, which
then performs the regular UUID lookup.

>> No further local state is needed on the re-exporting server.
>> The export list entry still needs a fsid= setting because while
>> parsing the exports file the NFS mounts might not be there yet.
>
> I don't understand that bit.

I was trying to explain that with this mode we don't need to store
UUIDs or fsids on disk.

>> This mode is dangerous; use it only if you're absolutely sure that
>> the NFS server you're re-exporting has a stable fsid. Chances are
>> good that it can change.
>
> The fsid should be stable.

Didn't you explain to me last time that it is not?

By fsid I mean:
https://datatracker.ietf.org/doc/html/rfc7530#section-5.8.1.9
https://datatracker.ietf.org/doc/html/rfc7530#section-2.2.5

So after a reboot the very same file system could be on a different
disk, and the major/minor tuple would be different (if the server uses
device ids as-is).

> The case I'm worried about is the case where we're reexporting exports
> from multiple servers. Then there's nothing preventing the two servers
> from accidentally picking the same fsid to represent different exports.

That's a good point. Since /proc/fs/nfsfs/volumes shows all that
information, we can add sanity checks to mountd (see the last sketch
at the end of this mail).

Thanks,
//richard
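
P.S. A few sketches to make the above concrete; all of them are
illustrations, not finished code. First, the retry handling for
option (b). Note that cross-process contention on a shared SQLite
database normally surfaces as SQLITE_BUSY rather than SQLITE_LOCKED
(the latter is the shared-cache case), so the helper below, whose
name and retry parameters are made up, retries on both:

	#include <unistd.h>
	#include <sqlite3.h>

	#define FSID_DB_RETRIES 10

	/* Run a write statement, retrying while another re-exporting
	 * server holds the database lock.  Writers are rare, so a
	 * simple back-off should be enough. */
	static int fsid_db_exec(sqlite3 *db, const char *sql)
	{
		int i, ret = SQLITE_ERROR;

		for (i = 0; i < FSID_DB_RETRIES; i++) {
			ret = sqlite3_exec(db, sql, NULL, NULL, NULL);
			if (ret != SQLITE_BUSY && ret != SQLITE_LOCKED)
				break;
			usleep(100 * 1000);	/* back off and retry */
		}
		return ret;
	}

On top of that we could set sqlite3_busy_timeout(). Keep in mind that
SQLite's locking is only as reliable as the file system's fcntl()
locking, which is exactly why (b) requires a shared file system with
working file locks.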
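
Second, purely to illustrate why remote-devfsid needs no local state:
the UUID can be a pure function of the volume's (major, minor) fsid
pair. The helper name is made up, and whether to use the raw pair
like this or hash it is an open question:

	#include <stdint.h>
	#include <string.h>

	/* Pack the NFSv4 fsid (major, minor) pair into a 16-byte
	 * UUID.  Nothing has to be written to disk; the same fsid
	 * always yields the same UUID. */
	static void fsid_to_uuid(uint64_t fsid_major, uint64_t fsid_minor,
				 unsigned char uuid[16])
	{
		memcpy(uuid, &fsid_major, sizeof(fsid_major));
		memcpy(uuid + 8, &fsid_minor, sizeof(fsid_minor));
	}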
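
Finally, a rough cut of the duplicate-fsid sanity check, written as a
standalone program for clarity (the real check would live in mountd).
It assumes the /proc/fs/nfsfs/volumes column layout (NV, SERVER, PORT,
DEV, FSID, FSC):

	#include <stdio.h>
	#include <string.h>

	#define MAX_VOLUMES 256

	int main(void)
	{
		char server[MAX_VOLUMES][16], fsid[MAX_VOLUMES][40];
		char line[256], ver[8], port[8], dev[16];
		int i, j, n = 0;
		FILE *f;

		f = fopen("/proc/fs/nfsfs/volumes", "r");
		if (!f)
			return 1;
		fgets(line, sizeof(line), f);	/* skip header line */
		while (n < MAX_VOLUMES && fgets(line, sizeof(line), f)) {
			if (sscanf(line, "%7s %15s %7s %15s %39s",
				   ver, server[n], port, dev, fsid[n]) == 5)
				n++;
		}
		fclose(f);

		/* Same fsid reported by two different servers means
		 * deriving the UUID from the fsid alone is unsafe. */
		for (i = 0; i < n; i++)
			for (j = i + 1; j < n; j++)
				if (!strcmp(fsid[i], fsid[j]) &&
				    strcmp(server[i], server[j]))
					fprintf(stderr,
						"fsid %s seen on servers %s and %s\n",
						fsid[i], server[i], server[j]);
		return 0;
	}

If that check fires, mountd could simply refuse remote-devfsid mode
for the exports involved.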