On Tue, Oct 24, 2023 at 9:01 PM Benjamin Coddington <bcodding@xxxxxxxxxx> wrote:
>
> On 24 Oct 2023, at 13:12, Amir Goldstein wrote:
> > On Tue, Oct 24, 2023 at 6:32 PM Benjamin Coddington <bcodding@xxxxxxxxxx> wrote:
> >> Yes, but if the specific export is on the same server's filesystem as the
> >> "root", you'll still get zero. There are various ways to set fsid on
> >> exports for linux servers, but the fsid will be the same for all exports of
> >> the same filesystem on the server.
> >>
>
> > OK. good to know. I thought zero fsid was only for the root itself.
> Yes, but by "root" here I always mean the special NFSv4 root - the special
> per-server global root filehandle.
>
> ...
>
> >> I'm not familiar with fanotify enough to know if having multiple fsid 0
> >> mounts of different filesystems on different servers will do the right
> >> thing. I wanted to point out that very real possibility for v4.
> >>
>
> > The fact that fsid 0 would be very common for many nfs mounts
> > makes this patch much less attractive.
> >
> > Because we only get events for local client changes, we do not
> > have to tie the fsid with the server's fsid, we could just use a local
> > volatile fsid, as we do in other non-blockdev fs (tmpfs, kernfs).
> A good way to do this would be to use the nfs_server->s_dev's major:minor -
> this represents the results of nfs_compare_super(), so it should be the same
> value if NFS is treating it as the same filesystem.
>

Yes, that would avoid local collisions and this is what we are going
to do for most of the simple fs with anon_bdev [1].
But the anon_bdev major is 0 and its minor is quickly recyclable.

fanotify identifies objects by the {f_fsid, f_handle} pair.
Since the nfs client encodes persistent file handles, I would like
to try to hold its f_fsid to a higher standard than that of the
simple fs.

You say that server->fsid.minor is always 0.
Perhaps we should mix server->fsid.major with server->s_dev's minor?

Thanks,
Amir.

[1] https://lore.kernel.org/linux-fsdevel/20231023143049.2944970-1-amir73il@xxxxxxxxx/
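
[Editorial note, not part of the original mail: below is a minimal userspace
sketch showing how the {f_fsid, f_handle} pair referred to above can be
obtained for a path - f_fsid via statfs(2) and the opaque handle via
name_to_handle_at(2). It illustrates why a common fsid 0 across NFS
superblocks from different servers leaves the fsid half of the key unable to
distinguish them. The file name and the fixed 128-byte handle buffer are
arbitrary choices for this sketch.]

/*
 * Hypothetical sketch: fetch the {f_fsid, f_handle} key for a path.
 * Build with: cc -o fsid_handle fsid_handle.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/statfs.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <path>\n", argv[0]);
		return 1;
	}

	/* f_fsid as reported by the filesystem's ->statfs() */
	struct statfs stf;
	if (statfs(argv[1], &stf)) {
		perror("statfs");
		return 1;
	}
	int fsid[2];
	memcpy(fsid, &stf.f_fsid, sizeof(fsid));

	/* Opaque file handle; 128 bytes is an arbitrary buffer size */
	struct file_handle *fh = malloc(sizeof(*fh) + 128);
	if (!fh)
		return 1;
	fh->handle_bytes = 128;
	int mount_id;
	if (name_to_handle_at(AT_FDCWD, argv[1], fh, &mount_id, 0)) {
		perror("name_to_handle_at");
		return 1;
	}

	printf("fsid=%08x:%08x handle_type=%d handle_bytes=%u\n",
	       fsid[0], fsid[1], fh->handle_type, fh->handle_bytes);
	free(fh);
	return 0;
}

[Presumably, mixing server->fsid.major with the s_dev minor as proposed would
keep one word of the reported fsid persistent across mounts while the other
disambiguates superblocks on the client, avoiding cross-server collisions
without giving up the persistent file handles.]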