On Tue, Apr 18, 2023 at 5:12 PM Christian Brauner <brauner@xxxxxxxxxx> wrote:
>
> On Tue, Apr 18, 2023 at 04:56:40PM +0300, Amir Goldstein wrote:
> > On Tue, Apr 18, 2023 at 4:33 PM Christian Brauner <brauner@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Apr 14, 2023 at 09:29:01PM +0300, Amir Goldstein wrote:
> > > > Jan,
> > > >
> > > > Followup on my quest to close the gap with inotify functionality,
> > > > here is a proposal for a FAN_UNMOUNT event.
> > > >
> > > > I have had many design questions about this:
> > >
> > > I'm going to humbly express what I feel makes sense to me when looking
> > > at this from a user perspective:
> > >
> > > > 1) Should we also report FAN_UNMOUNT for marked inodes and sb
> > > >    on sb shutdown (same as IN_UNMOUNT)?
> > >
> > > My preference would be if this would be a separate event type.
> > > FAN_SB_SHUTDOWN or something.
> >
> > If we implement an event for this at all, I would suggest FAN_IGNORED
> > or FAN_EVICTED, which has the same meaning as IN_IGNORED.
> > When you get an event that the watch went away, it could be because of:
> > 1. watch removed by user
> > 2. watch removed because inode was evicted (with FAN_MARK_EVICTABLE)
> > 3. inode deleted
> > 4. sb shutdown
> >
> > IN_IGNORED is generated in all of the above except for inode evict,
> > which is not possible with inotify.
> >
> > The user can figure out on his own if the inode was deleted or if the
> > fs was unmounted, so there is not really a need for FAN_SB_SHUTDOWN IMO.
>
> Ok, sounds good.
>
> > Actually, I think that FAN_IGNORED would be quite useful for the
> > FAN_MARK_EVICTABLE case, but it is a bit less trivial to implement
> > than FAN_UNMOUNT was.
> >
> > > > 2) Should we also report FAN_UNMOUNT on sb mark for any unmounts
> > > >    of that sb?
> > >
> > > I don't think so. It feels to me that if you watch an sb you don't
> > > necessarily want to watch bind mounts of that sb.
> > >
> > > > 3) Should we also report the fid of the mount root? and if we do...
> > > > 4) Should we report/consider the FAN_ONDIR filter?
> > > >
> > > > All of the questions above I answered "not unless somebody requests"
> > > > in this first RFC.
> > >
> > > Fwiw, I agree.
> > >
> > > > Specifically, I did get a request for an unmount event for the
> > > > containers use case.
> > > >
> > > > I have also had doubts regarding the info records.
> > > > I decided that reporting fsid and mntid is the minimum, but couldn't
> > > > decide if they were better off in a single MNTID record or separate
> > > > records.
> > > >
> > > > I went with separate records, because:
> > > > a) FAN_FS_ERROR has set a precedent of a separate fid record with
> > > >    fsid and empty fid, so I followed this precedent
> > > > b) the MNTID record we may want to add later with FAN_REPORT_MNTID
> > > >    to all the path events, so better that it is independent
> >
> > Just thought of another reason:
> > c) FAN_UNMOUNT does not need to require FAN_REPORT_FID,
> >    so it does not depend on the filesystem having a valid f_fsid nor
> >    export_ops. In case of "pseudo" fs, FAN_UNMOUNT can report only
> >    the MNTID record (I will amend the patch with this minor change).
>
> I see some pseudo fses generate f_fsid, e.g., tmpfs in mm/shmem.c

tmpfs is not "pseudo" in my eyes, because it implements a great deal of
the vfs interfaces, including export_ops,
and I also fixed its f_fsid recently:

  59cda49ecf6c ("shmem: allow reporting fanotify events with file handles on tmpfs")

> At the risk of putting my foot in my mouth, what's stopping us from
> making them all support f_fsid?

Nothing much. Jan had the same opinion [1].

We could do any of the following:
1. use uuid_to_fsid() in vfs_statfs() if the fs has set s_uuid but not f_fsid
2. use s_dev as f_fsid in vfs_statfs() if the fs has set neither f_fsid nor s_uuid
3. randomize s_uuid for simple fs (like tmpfs)
4. any combination of the above and more
(a rough sketch of options 1 and 2 is appended at the end of this mail)

Note that we will also need to decide what to do with name_to_handle_at()
for those pseudo fs.

Quoting Jan from [1]:
"But otherwise the proposal to make name_to_handle_at() work even for
filesystems not exportable through NFS makes sense to me. But I guess we
need some buy-in from VFS maintainers for this."
(hint hint)

Thanks,
Amir.

[1] https://lore.kernel.org/linux-fsdevel/20230417162721.ouzs33oh6mb7vtft@quack3/
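
P.S. To make options 1 and 2 above a bit more concrete, here is a rough,
untested sketch of a possible fallback helper for fs/statfs.c. The helper
name and the idea of calling it from vfs_statfs() right after ->statfs()
returns are mine, not taken from any posted patch:

#include <linux/fs.h>		/* struct super_block */
#include <linux/kdev_t.h>	/* huge_encode_dev() */
#include <linux/statfs.h>	/* struct kstatfs, u64_to_fsid(), uuid_to_fsid() */
#include <linux/uuid.h>		/* uuid_is_null() */

/* Fill in a fallback f_fsid for filesystems that did not report one. */
static void statfs_fill_fsid(struct super_block *sb, struct kstatfs *buf)
{
	/* The filesystem already reported a non-zero f_fsid - keep it. */
	if (buf->f_fsid.val[0] || buf->f_fsid.val[1])
		return;

	if (!uuid_is_null(&sb->s_uuid))
		/* Option 1: fold s_uuid into f_fsid, as tmpfs does since
		 * 59cda49ecf6c. */
		buf->f_fsid = uuid_to_fsid(sb->s_uuid.b);
	else
		/* Option 2: fall back to the (possibly anonymous) s_dev. */
		buf->f_fsid = u64_to_fsid(huge_encode_dev(sb->s_dev));
}

The nice property of option 1 is that f_fsid stays stable across reboots
for filesystems with a persistent s_uuid, while the s_dev fallback of
option 2 is only stable for the lifetime of the mount when s_dev comes
from an anonymous device number.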