On Wed, Nov 1, 2023 at 4:42 PM Miklos Szeredi <miklos@xxxxxxxxxx> wrote:
>
> On Wed, 1 Nov 2023 at 14:23, Amir Goldstein <amir73il@xxxxxxxxx> wrote:
> >
> > However, I can imagine users wanting to do passthrough for read-only
> > fd without doing passthrough for read-write fd, for example, if data is
> > never manipulated by the server, but it is inspected on write.
>
> Okay, that could make sense in a read-mostly use case. But I think
> that special case could be enabled with FOPEN_DIRECT_IO since we have
> that anyway. I.e. FOPEN_DIRECT_IO would override the per-inode state,
> which is what it does now.
>
> What doesn't make sense is to mix cached I/O with non-cached (direct
> or passthrough) I/O.
>

I now realize that FOPEN_DIRECT_IO still allows page cache reads and
write-through via mmap. That undermines the idea that disallowing a mix
of cached+passthrough opens on an inode is enough.

My proposed solution is to change the semantics of the init flag
FUSE_PASSTHROUGH to disallow mmap on FOPEN_DIRECT_IO files. So with a
filesystem that supports FUSE_PASSTHROUGH, FOPEN_DIRECT_IO files can
only be used for direct read/write I/O, and mmap maps either the fuse
inode pages or the backing inode pages. This also means that
FUSE_DIRECT_IO_RELAX conflicts with FUSE_PASSTHROUGH.

Should we also deny mixing FUSE_HAS_INODE_DAX with FUSE_PASSTHROUGH?
Anything else from virtiofs?

While I have your attention, I am trying to consolidate the validation
of FOPEN_ mode vs. inode state into fuse_finish_open(). What do you
think about this partial patch, which also moves fuse_finish_open() to
after fuse_release_nowrite()? Is it legit?
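For concreteness, the mmap denial I have in mind would look roughly like
the sketch below in fuse_file_mmap(). This is untested and only
illustrative; in particular, fc->passthrough is an assumed name for
whatever connection state records that FUSE_PASSTHROUGH was negotiated
at init time:

```c
/*
 * Sketch only (not a tested patch): with FUSE_PASSTHROUGH negotiated,
 * refuse mmap on FOPEN_DIRECT_IO files, so a direct-io open can never
 * populate the page cache behind a passthrough open on the same inode.
 * fc->passthrough is a hypothetical field name for the negotiated
 * init-flag state.
 */
static int fuse_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	struct fuse_file *ff = file->private_data;
	struct fuse_conn *fc = ff->fm->fc;

	if (ff->open_flags & FOPEN_DIRECT_IO) {
		/* Direct-io files are read/write only under passthrough */
		if (fc->passthrough)
			return -ENODEV;

		/* ... existing FOPEN_DIRECT_IO mmap handling ... */
	}

	/* ... existing cached mmap path ... */
}
```

The point being that mmap either maps the fuse inode pages (cached open)
or the backing inode pages (passthrough open), and FOPEN_DIRECT_IO opens
get neither.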
[patch to fuse_create_open() omitted for brevity]

--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -195,8 +195,8 @@ static void fuse_link_write_file(struct file *file)
 	spin_unlock(&fi->lock);
 }

-void fuse_finish_open(struct inode *inode, struct file *file)
+int fuse_finish_open(struct inode *inode, struct file *file)
 {
 	struct fuse_file *ff = file->private_data;
 	struct fuse_conn *fc = get_fuse_conn(inode);
@@ -215,6 +215,9 @@ void fuse_finish_open(struct inode *inode, struct file *file)
 		spin_unlock(&fi->lock);
 		file_update_time(file);
 		fuse_invalidate_attr_mask(inode, FUSE_STATX_MODSIZE);
+		truncate_pagecache(inode, 0);
+	} else if (!(ff->open_flags & FOPEN_KEEP_CACHE)) {
+		invalidate_inode_pages2(inode->i_mapping);
 	}
 	if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
 		fuse_link_write_file(file);
-}
+
+	return 0;
+}
@@ -253,18 +256,16 @@ int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
 		fuse_set_nowrite(inode);

 	err = fuse_do_open(fm, get_node_id(inode), file, isdir);
-	if (!err)
-		fuse_finish_open(inode, file);

 	if (is_wb_truncate || dax_truncate)
 		fuse_release_nowrite(inode);
+
 	if (!err) {
 		struct fuse_file *ff = file->private_data;

-		if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC))
-			truncate_pagecache(inode, 0);
-		else if (!(ff->open_flags & FOPEN_KEEP_CACHE))
-			invalidate_inode_pages2(inode->i_mapping);
+		err = fuse_finish_open(inode, file);
+		if (err)
+			fuse_sync_release(get_fuse_inode(inode), ff, flags);
 	}

 	if (dax_truncate)
 		filemap_invalidate_unlock(inode->i_mapping);