On Thu, Sep 21, 2023 at 11:21 AM Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
>
> On Thu, 21 Sept 2023 at 09:33, Amir Goldstein <amir73il@xxxxxxxxx> wrote:
>
> > In my example, the server happens to setup the mapping
> > of the backing file to inode
> > *before the first open for read on the inode*
> > and to teardown the mapping
> > *after the last close on the inode* (not even last rdonly file close)
> > but this is an arbitrary implementation choice of the server.
>
> Okay.
>
> So my question becomes: is this flexibility really needed?
>
> I understand the need to set up the mapping on open. I think it also
> makes sense to set up the mapping on lookup, since then OPEN/RELEASE
> can be omitted.
>
> Removing the mapping at a random point in time might also make sense,
> because of a limitation on the number of open files, but that's
> debatable.
>
> What I'm getting at is that I'd prefer the ioctl to just work one way:
> register a file and return an ID. Then there would be ways to associate
> that ID with an inode (in LOOKUP or OPEN) or with an open file (in
> OPEN).

With my current kernel implementation, this change implies changing the
lifetime rules of the fuse_backing object, so that the last put will also
remove the backing_id from the idr. It complicates things a bit and is
not needed IMO (see below).

The thing that I am concerned about is the complexity of the AUTO_CLOSE
semantics for per-inode and per-file. IOW, explaining who owns the
backing_id becomes more complex to understand and to communicate in a
simple API.

I was aiming for a simple-to-use API, and I think my example demonstrates
two modes that are simple to use; even the server-managed backing_id mode
would be simple to use.

I don't mind dropping the "inode bound" patch altogether and staying with
server-managed backing_id, without support for auto-close-on-evict, and
only supporting per-file auto-close as is already implemented in my POC.
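To illustrate the lifetime question, here is a small userspace sketch (not
the kernel code, and all names -- backing_entry, backing_register, etc. --
are invented for illustration) of a registry mapping backing_id to a
refcounted entry, where the last put also removes the id from the table,
i.e. the extra complication mentioned above:

```c
/* Userspace sketch of the backing_id lifetime question.
 * Hypothetical names; this is not the kernel's fuse_backing/idr code. */
#include <assert.h>
#include <stdlib.h>

#define MAX_BACKING 16

struct backing_entry {
	int fd;       /* the registered backing file */
	int refcount; /* one registry ref + one ref per user (inode or file) */
};

struct backing_entry *backing_table[MAX_BACKING];

/* Register a file, return a backing_id (>= 0), or -1 if the table is full. */
int backing_register(int fd)
{
	for (int id = 0; id < MAX_BACKING; id++) {
		if (!backing_table[id]) {
			struct backing_entry *b = calloc(1, sizeof(*b));
			b->fd = fd;
			b->refcount = 1; /* the registry's own reference */
			backing_table[id] = b;
			return id;
		}
	}
	return -1;
}

/* Take a user reference (e.g. when an inode or open file is bound to it). */
struct backing_entry *backing_get(int id)
{
	struct backing_entry *b = backing_table[id];
	if (b)
		b->refcount++;
	return b;
}

/* With "last put removes the id", putting the final reference also
 * frees the slot -- so ownership of the id is shared by all users. */
void backing_put(int id)
{
	struct backing_entry *b = backing_table[id];
	if (b && --b->refcount == 0) {
		backing_table[id] = NULL;
		free(b);
	}
}
```

The alternative kept in the POC is simpler to explain: the server (or the
AUTO_CLOSE rule) is the single owner of the registry reference, so there is
exactly one place where the id can disappear.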
IOW, if the server wants to associate a backing file id with an inode on
LOOKUP or on OPEN, it has no problem keeping this association internally,
replying to any open with the backing_id that it associated, and closing
the backing_id on FORGET or on the last close.

I see no real gain in the kernel handling the inode-backing_id association
for the server. At least not until this association is needed to passthrough
inode operations, and in that case the server would probably map an O_PATH
fd to the inode.

I can easily change my example to work like that and drop the "inode bound"
backing file patch. In fact, it will make the example even more flexible,
because the server can keep 2 files per inode, one rdonly and one rdwr
(same as the kernel nfsd v3 open files cache does), and for the example we
could let any open request with non-trivial flags (e.g. O_SYNC) use a
per-file-auto-close backing file.

>
> Can you please also remind me why we need the per-open-file mapping
> mode? I'm sure that we've discussed this, but my brain is like a
> sieve...

Different FUSE files may have different open flags (e.g.
O_RDONLY/O_RDWR/O_SYNC), so the server may want to use different backing
files for different FUSE files on the same inode, but perhaps this is not
what you were asking?

Thanks,
Amir.