Re: [PATCH 0/2] fuse: Rename DIRECT_IO_{RELAX -> ALLOW_MMAP}

On 12/7/23 08:39, Amir Goldstein wrote:
On Thu, Dec 7, 2023 at 1:28 AM Bernd Schubert
<bernd.schubert@xxxxxxxxxxx> wrote:



On 12/6/23 09:25, Amir Goldstein wrote:
Is it actually important for FUSE_DIRECT_IO_ALLOW_MMAP fs
(e.g. virtiofsd) to support FOPEN_PARALLEL_DIRECT_WRITES?
I guess not, otherwise the combination would have been tested.

I'm not sure how many people are aware of these different flags/features.
I had just finalized the backport of the related patches to RHEL8 on
Friday, as we (or our customers) need both for different jobs.


FOPEN_PARALLEL_DIRECT_WRITES is typically important for
network fs and FUSE_DIRECT_IO_ALLOW_MMAP is typically not
for network fs. Right?

We kind of have these use cases for our network file systems:

FOPEN_PARALLEL_DIRECT_WRITES:
      - Traditional HPC, large files, parallel IO
      - Large file used on local node as container for many small files

FUSE_DIRECT_IO_ALLOW_MMAP:
      - compilation through gcc (not so important, just not nice when it
does not work)
      - rather recent: python libraries using mmap _reads_. As it is
read-only, there is no consistency issue (a minimal sketch of that
pattern follows below).
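
To illustrate the pattern, a minimal read-only mmap sketch (the path is
hypothetical, not from any of our setups):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* hypothetical file on a FUSE mount using FOPEN_DIRECT_IO */
	int fd = open("/mnt/fuse/data.bin", O_RDONLY);
	struct stat st;
	const char *p;

	if (fd < 0 || fstat(fd, &st) != 0)
		return 1;

	/* read-only shared mapping: pages may end up in the cache, but
	 * are never dirtied, so they cannot race with direct writes */
	p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED)
		return 1;

	printf("first byte: %d\n", p[0]);
	munmap((void *)p, st.st_size);
	close(fd);
	return 0;
}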


These jobs do not intermix - so the issue seen in generic/095 does not
arise. If such applications really exist, I have no issue with a
serialization penalty.
Just disabling FOPEN_PARALLEL_DIRECT_WRITES because other
nodes/applications need FUSE_DIRECT_IO_ALLOW_MMAP is not so nice.

The final goal is also to have FOPEN_PARALLEL_DIRECT_WRITES work with
plain O_DIRECT and not only with FUSE_DIRECT_IO - I need to update this
branch and post the next version:
https://github.com/bsbernd/linux/commits/fuse-dio-v4


In the meantime I have another idea how to solve
FOPEN_PARALLEL_DIRECT_WRITES + FUSE_DIRECT_IO_ALLOW_MMAP.

Please find attached what I had in mind. With that, generic/095 is not
crashing for me anymore. I just finished the initial coding - it still
needs a bit of cleanup and maybe a few comments.


Nice. I like the FUSE_I_CACHE_WRITES state.
For FUSE_PASSTHROUGH I will need to track whether the inode is open/mapped
in caching mode, so FUSE_I_CACHE_WRITES can be cleared on release
of the last open file of the inode.
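
A minimal sketch of the clearing I have in mind, assuming a per-inode
open counter (called open_ctr here):

	spin_lock(&fi->lock);
	/* last open file of the inode is going away */
	if (--fi->open_ctr == 0)
		clear_bit(FUSE_I_CACHE_WRITES, &fi->state);
	spin_unlock(&fi->lock);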

I did not understand some of the complexity here:

         /* The inode ever got page writes and we do not know for sure
          * in the DIO path if these are pending - shared lock not possible */
         spin_lock(&fi->lock);
         if (!test_bit(FUSE_I_CACHE_WRITES, &fi->state)) {
                 if (!(*cnt_increased)) {

How can *cnt_increased be true here?

I think you missed the 2nd entry into this function, when the shared
lock was already taken?

Yeh, I did.

I have changed the code now to have all
complexity in this function (test, lock, retest with lock, release,
wakeup). I hope that will make it easier to see the intention of the
code. Will post the new patches in the morning.
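
Roughly this shape, as a sketch (counter handling and the conditional
wakeup are elided here; helper names as in the attached patches):

	if (fuse_dio_wr_exclusive_lock(iocb, from)) {	/* unlocked test */
		inode_lock(inode);
	} else {
		inode_lock_shared(inode);
		/* retest under the shared lock - page IO may have raced in */
		if (fuse_dio_wr_exclusive_lock(iocb, from)) {
			inode_unlock_shared(inode);	/* release */
			wake_up(&fi->direct_io_waitq);	/* wakeup */
			inode_lock(inode);
		}
	}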


Sounds good. Current version was a bit hard to follow.



                         fi->shared_lock_direct_io_ctr++;
                         *cnt_increased = true;
                 }
                 excl_lock = false;

Seems like in every outcome of this function
*cnt_increased == !excl_lock,
so there is no need for the out arg cnt_increased.

If excl_lock were used as input - yeah, that would have worked as well.
Or a parameter like "retest-under-lock". The code is changed now to avoid
going in and out.


         }
         spin_unlock(&fi->lock);

out:
         if (excl_lock && *cnt_increased) {
                 bool wake = false;
                 spin_lock(&fi->lock);
                 if (--fi->shared_lock_direct_io_ctr == 0)
                         wake = true;
                 spin_unlock(&fi->lock);
                 if (wake)
                         wake_up(&fi->direct_io_waitq);
         }

I don't see how this wake_up code is reachable.

TBH, I don't fully understand the expected result.
Surely, the behavior of dio mixed with mmap is undefined. Right?
IIUC, your patch does not prevent dirtying page cache while dio is in
flight. It only prevents writeback while dio is in flight, which is the same
behavior as with the exclusive inode lock. Right?

Yeah, thanks. I will add it in the patch description.

And there was actually an issue with the patch, as cache flushing needs
to be initiated before making the lock decision; fixed now.


I thought there was, because of the wait in fuse_send_writepage()
but wasn't sure if I was following the flow correctly.


Maybe this interaction is spelled out somewhere else, but if not,
better to spell it out for people like me who are new to this code.

Sure, thanks a lot for your helpful comments!


Just to be clear, this patch looks like a good improvement and
is mostly independent of the "inode caching mode" and
FOPEN_CACHE_MMAP idea that I suggested.

The only thing that my idea changes is replacing the
FUSE_I_CACHE_WRITES state with a FUSE_I_CACHE_IO_MODE
state, which is set earlier than FUSE_I_CACHE_WRITES,
on caching file open or first direct_io mmap, and unlike
FUSE_I_CACHE_WRITES, it is cleared on the last file close.

FUSE_I_CACHE_WRITES means that caching writes happened.
FUSE_I_CACHE_IO_MODE means that caching writes and reads
may happen.
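
As a sketch (the helper name is made up, the fields follow the attached
patch; the first direct_io mmap would set the bit from fuse_file_mmap, as
in the patch):

/* mark the inode as soon as caching IO *may* happen, i.e. on any
 * open that is not FOPEN_DIRECT_IO, instead of waiting for the
 * first caching write */
static void fuse_track_cache_io_mode(struct fuse_inode *fi,
				     struct fuse_file *ff)
{
	spin_lock(&fi->lock);
	fi->open_ctr++;
	if (!(ff->open_flags & FOPEN_DIRECT_IO))
		set_bit(FUSE_I_CACHE_IO_MODE, &fi->state);
	spin_unlock(&fi->lock);
}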

FOPEN_PARALLEL_DIRECT_WRITES obviously shouldn't care
about "caching reads may happen", but IMO that is a small trade-off
to make for maintaining the same state for
"do not allow parallel dio" and "do not allow passthrough open".

I think the attached patches should do; they now also unset
FUSE_I_CACHE_IO_MODE. Setting the flag actually has to be done from
fuse_file_mmap (and not from fuse_send_writepage) to avoid a stall, but
that aligns with passthrough anyway? Amir, right now it only sets
FUSE_I_CACHE_IO_MODE for VM_MAYWRITE. Maybe you could add a condition
for passthrough there?

@Miklos, could you please tell me how to move forward? I definitely need
to rebase to fuse-next, but my question is whether this patch here should
replace Amir's fix (and get backported), or whether we should apply it on
top of Amir's patch and let that simple fix get backported. Given this is
all features and new flags - I'm all for the simple fix.
If you agree on the general approach, I can put this on top of my dio
consolidation branch and rebase the rest of the patches on top of it. That
part will get a bit more complicated, as we will also need to handle plain
O_DIRECT.
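
Roughly, for plain O_DIRECT the lock decision would also have to treat
requests like this as dio (sketch, assumption - the helper name is made
up):

/* a request is direct IO if the server opened the file with
 * FOPEN_DIRECT_IO or if the application passed O_DIRECT itself */
static bool fuse_is_dio_request(struct kiocb *iocb)
{
	struct fuse_file *ff = iocb->ki_filp->private_data;

	return (ff->open_flags & FOPEN_DIRECT_IO) ||
	       (iocb->ki_flags & IOCB_DIRECT);
}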


Thanks,
Bernd
fuse: Create helper function to decide if DIO write needs exclusive lock

From: Bernd Schubert <bschubert@xxxxxxx>

This is just a preparation for follow-up patches.

Cc: Hao Xu <howeyxu@xxxxxxxxxxx>
Cc: Miklos Szeredi <miklos@xxxxxxxxxx>
Cc: Dharmendra Singh <dsingh@xxxxxxx>
Cc: Amir Goldstein <amir73il@xxxxxxxxx>
Signed-off-by: Bernd Schubert <bschubert@xxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Preparation for Fixes: 153524053bbb ("fuse: allow non-extending parallel direct writes on the same file")
---
 fs/fuse/file.c |   57 +++++++++++++++++++++++++++++++++++++++-----------------
 1 file changed, 40 insertions(+), 17 deletions(-)

diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 1cdb6327511e..9cc7184241e5 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1298,6 +1298,42 @@ static ssize_t fuse_perform_write(struct kiocb *iocb, struct iov_iter *ii)
 	return res;
 }
 
+static bool fuse_io_past_eof(struct kiocb *iocb, struct iov_iter *iter)
+{
+	struct inode *inode = file_inode(iocb->ki_filp);
+
+	return iocb->ki_pos + iov_iter_count(iter) > i_size_read(inode);
+}
+
+/*
+ * @return true if an exclusive lock for direct IO writes is needed
+ */
+static bool fuse_dio_wr_exclusive_lock(struct kiocb *iocb, struct iov_iter *from)
+{
+	struct file *file = iocb->ki_filp;
+	struct fuse_file *ff = file->private_data;
+	bool excl_lock = true;
+
+	/* server side has to advertise that it supports parallel dio writes */
+	if (!(ff->open_flags & FOPEN_PARALLEL_DIRECT_WRITES))
+		goto out;
+
+	/* an append write will need to know the eventual EOF - it always
+	 * needs an exclusive lock
+	 */
+	if (iocb->ki_flags & IOCB_APPEND)
+		goto out;
+
+	/* parallel dio beyond EOF is not supported, at least for now */
+	if (fuse_io_past_eof(iocb, from))
+		goto out;
+
+	excl_lock = false;
+
+out:
+	return excl_lock;
+}
+
 static ssize_t fuse_cache_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct file *file = iocb->ki_filp;
@@ -1557,25 +1593,12 @@ static ssize_t fuse_direct_read_iter(struct kiocb *iocb, struct iov_iter *to)
 	return res;
 }
 
-static bool fuse_direct_write_extending_i_size(struct kiocb *iocb,
-					       struct iov_iter *iter)
-{
-	struct inode *inode = file_inode(iocb->ki_filp);
-
-	return iocb->ki_pos + iov_iter_count(iter) > i_size_read(inode);
-}
-
 static ssize_t fuse_direct_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct inode *inode = file_inode(iocb->ki_filp);
-	struct file *file = iocb->ki_filp;
-	struct fuse_file *ff = file->private_data;
 	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(iocb);
 	ssize_t res;
-	bool exclusive_lock =
-		!(ff->open_flags & FOPEN_PARALLEL_DIRECT_WRITES) ||
-		iocb->ki_flags & IOCB_APPEND ||
-		fuse_direct_write_extending_i_size(iocb, from);
+	bool exclusive_lock = fuse_dio_wr_exclusive_lock(iocb, from);
 
 	/*
 	 * Take exclusive lock if
@@ -1588,10 +1611,10 @@ static ssize_t fuse_direct_write_iter(struct kiocb *iocb, struct iov_iter *from)
 	else {
 		inode_lock_shared(inode);
 
-		/* A race with truncate might have come up as the decision for
-		 * the lock type was done without holding the lock, check again.
+		/*
+		 * Previous check was without any lock and might have raced.
 		 */
-		if (fuse_direct_write_extending_i_size(iocb, from)) {
+		if (fuse_dio_wr_exclusive_lock(iocb, from)) {
 			inode_unlock_shared(inode);
 			inode_lock(inode);
 			exclusive_lock = true;
fuse: Test for page cache writes in the shared lock DIO decision

From: Bernd Schubert <bschubert@xxxxxxx>

xfstest generic/095 triggers BUG_ON(fi->writectr < 0) in
fuse_set_nowrite().
This happens with a shared lock for FOPEN_DIRECT_IO when mmap writes
happen in parallel (FUSE_DIRECT_IO_RELAX is set).
The reason is that multiple DIO writers see that the inode has pending
page IO writes and try to set FUSE_NOWRITE, but this code path requires
serialization. Ideally, fuse_dio_wr_exclusive_lock would detect whether
there are outstanding writes, but that would require holding an inode
lock in the related page/folio write paths. Another solution would be to
disable the shared inode lock for FOPEN_DIRECT_IO when
FUSE_DIRECT_IO_RELAX is set, but typically userspace/the server side will
set these flags for all inodes (or not at all). With that,
FUSE_DIRECT_IO_RELAX would entirely disable the shared lock and impose
serialization even though no page IO is ever done for these inodes. The
solution here stores a flag in the fuse inode when mmap is started; this
flag is used to enforce the exclusive inode lock for FOPEN_DIRECT_IO.
Other than that, the patch does not help to improve consistency for
concurrent page cache (so far only mmap) and direct IO file writes.

Cc: Hao Xu <howeyxu@xxxxxxxxxxx>
Cc: Miklos Szeredi <miklos@xxxxxxxxxx>
Cc: Dharmendra Singh <dsingh@xxxxxxx>
Cc: Amir Goldstein <amir73il@xxxxxxxxx>
Signed-off-by: Bernd Schubert <bschubert@xxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Fixes: 153524053bbb ("fuse: allow non-extending parallel direct writes on the same file")
---
 fs/fuse/dir.c    |    1 
 fs/fuse/file.c   |  152 ++++++++++++++++++++++++++++++++++++++++++------------
 fs/fuse/fuse_i.h |   12 ++++
 fs/fuse/inode.c  |    1 
 4 files changed, 132 insertions(+), 34 deletions(-)

diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
index d19cbf34c634..09aaaa31ae28 100644
--- a/fs/fuse/dir.c
+++ b/fs/fuse/dir.c
@@ -1751,6 +1751,7 @@ void fuse_set_nowrite(struct inode *inode)
 	struct fuse_inode *fi = get_fuse_inode(inode);
 
 	BUG_ON(!inode_is_locked(inode));
+	lockdep_assert_held_write(&inode->i_rwsem);
 
 	spin_lock(&fi->lock);
 	BUG_ON(fi->writectr < 0);
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 9cc7184241e5..5d76ebd5419c 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -99,6 +99,16 @@ static void fuse_release_end(struct fuse_mount *fm, struct fuse_args *args,
 			     int error)
 {
 	struct fuse_release_args *ra = container_of(args, typeof(*ra), args);
+	struct fuse_inode *fi = get_fuse_inode(ra->inode);
+
+	spin_lock(&fi->lock);
+	if (--fi->open_ctr == 0) {
+		/* no open files left anymore - clear the bit to lift
+		 * the cache-mode restrictions
+		 */
+		clear_bit(FUSE_I_CACHE_IO_MODE, &fi->state);
+	}
+	spin_unlock(&fi->lock);
 
 	iput(ra->inode);
 	kfree(ra);
@@ -121,6 +131,7 @@ static void fuse_file_put(struct fuse_file *ff, bool sync, bool isdir)
 						   GFP_KERNEL | __GFP_NOFAIL))
 				fuse_release_end(ff->fm, args, -ENOTCONN);
 		}
+
 		kfree(ff);
 	}
 }
@@ -198,6 +209,7 @@ void fuse_finish_open(struct inode *inode, struct file *file)
 {
 	struct fuse_file *ff = file->private_data;
 	struct fuse_conn *fc = get_fuse_conn(inode);
+	struct fuse_inode *fi = get_fuse_inode(inode);
 
 	if (ff->open_flags & FOPEN_STREAM)
 		stream_open(inode, file);
@@ -205,8 +217,6 @@ void fuse_finish_open(struct inode *inode, struct file *file)
 		nonseekable_open(inode, file);
 
 	if (fc->atomic_o_trunc && (file->f_flags & O_TRUNC)) {
-		struct fuse_inode *fi = get_fuse_inode(inode);
-
 		spin_lock(&fi->lock);
 		fi->attr_version = atomic64_inc_return(&fc->attr_version);
 		i_size_write(inode, 0);
@@ -216,6 +226,10 @@ void fuse_finish_open(struct inode *inode, struct file *file)
 	}
 	if ((file->f_mode & FMODE_WRITE) && fc->writeback_cache)
 		fuse_link_write_file(file);
+
+	spin_lock(&fi->lock);
+	fi->open_ctr++;
+	spin_unlock(&fi->lock);
 }
 
 int fuse_open_common(struct inode *inode, struct file *file, bool isdir)
@@ -1306,13 +1320,19 @@ static bool fuse_io_past_eof(struct kiocb *iocb, struct iov_iter *iter)
 }
 
 /*
- * @return true if an exclusive lock for direct IO writes is needed
+ * @return true if an exclusive lock for direct IO writes is taken, false
+ *	   for the shared lock
  */
-static bool fuse_dio_wr_exclusive_lock(struct kiocb *iocb, struct iov_iter *from)
+static bool fuse_dio_lock_inode(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct file *file = iocb->ki_filp;
+	struct inode *inode = file_inode(iocb->ki_filp);
+	struct fuse_inode *fi = get_fuse_inode(inode);
 	struct fuse_file *ff = file->private_data;
+	struct fuse_conn *fc = ff->fm->fc;
 	bool excl_lock = true;
+	bool retest = false;
+	bool wake = false;
 
 	/* server side has to advertise that it supports parallel dio writes */
 	if (!(ff->open_flags & FOPEN_PARALLEL_DIRECT_WRITES))
@@ -1324,13 +1344,66 @@ static bool fuse_dio_wr_exclusive_lock(struct kiocb *iocb, struct iov_iter *from
 	if (iocb->ki_flags & IOCB_APPEND)
 		goto out;
 
+retest_with_lock:
 	/* parallel dio beyond EOF is not supported, at least for now */
 	if (fuse_io_past_eof(iocb, from))
 		goto out;
 
-	excl_lock = false;
+	/* no need to optimize async requests */
+	if (!is_sync_kiocb(iocb) && iocb->ki_flags & IOCB_DIRECT &&
+	    fc->async_dio)
+		goto out;
+
+	/* If the inode ever got page writes, we do not know for sure
+	 * in the DIO path if these are pending - a shared lock is then
+	 * not possible
+	 */
+	spin_lock(&fi->lock);
+	if (test_bit(FUSE_I_CACHE_IO_MODE, &fi->state)) {
+		if (retest) {
+			excl_lock = true;
+			if (--fi->shared_lock_direct_io_ctr == 0)
+				wake = true;
+		}
+	} else {
+		if (!retest) {
+			excl_lock = false;
+			/* Increase the counter as soon as the decision for
+			 * shared locks is made, to hold off page IO tasks
+			 */
+			fi->shared_lock_direct_io_ctr++;
+		}
+	}
+	spin_unlock(&fi->lock);
 
 out:
+	if (retest) {
+		if (excl_lock) {
+			/* a race happened - the lock type needs to change */
+			inode_unlock_shared(inode);
+
+			/* Increasing the shared_lock_direct_io_ctr counter
+			 * might have held off page cache tasks, wake these up.
+			 */
+			if (wake)
+				wake_up(&fi->direct_io_waitq);
+
+			inode_lock(inode);
+		}
+	} else {
+		if (excl_lock) {
+			inode_lock(inode);
+		} else {
+			inode_lock_shared(inode);
+
+			/* Need to retest after taking the shared lock, to
+			 * see if there was a race
+			 */
+			retest = true;
+			goto retest_with_lock;
+		}
+	}
+
 	return excl_lock;
 }
 
@@ -1596,30 +1669,12 @@ static ssize_t fuse_direct_read_iter(struct kiocb *iocb, struct iov_iter *to)
 static ssize_t fuse_direct_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct inode *inode = file_inode(iocb->ki_filp);
+	struct fuse_inode *fi = get_fuse_inode(inode);
 	struct fuse_io_priv io = FUSE_IO_PRIV_SYNC(iocb);
 	ssize_t res;
-	bool exclusive_lock = fuse_dio_wr_exclusive_lock(iocb, from);
-
-	/*
-	 * Take exclusive lock if
-	 * - Parallel direct writes are disabled - a user space decision
-	 * - Parallel direct writes are enabled and i_size is being extended.
-	 *   This might not be needed at all, but needs further investigation.
-	 */
-	if (exclusive_lock)
-		inode_lock(inode);
-	else {
-		inode_lock_shared(inode);
 
-		/*
-		 * Previous check was without any lock and might have raced.
-		 */
-		if (fuse_dio_wr_exclusive_lock(iocb, from)) {
-			inode_unlock_shared(inode);
-			inode_lock(inode);
-			exclusive_lock = true;
-		}
-	}
+	/* take inode_lock or inode_lock_shared */
+	bool exclusive = fuse_dio_lock_inode(iocb, from);
 
 	res = generic_write_checks(iocb, from);
 	if (res > 0) {
@@ -1631,10 +1687,20 @@ static ssize_t fuse_direct_write_iter(struct kiocb *iocb, struct iov_iter *from)
 			fuse_write_update_attr(inode, iocb->ki_pos, res);
 		}
 	}
-	if (exclusive_lock)
+
+	if (exclusive)
 		inode_unlock(inode);
-	else
+	else {
+		bool wake = false;
+
 		inode_unlock_shared(inode);
+		spin_lock(&fi->lock);
+		if (--fi->shared_lock_direct_io_ctr == 0)
+			wake = true;
+		spin_unlock(&fi->lock);
+		if (wake)
+			wake_up(&fi->direct_io_waitq);
+	}
 
 	return res;
 }
@@ -2481,18 +2546,35 @@ static const struct vm_operations_struct fuse_file_vm_ops = {
 static int fuse_file_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct fuse_file *ff = file->private_data;
+	struct inode *inode = file_inode(file);
+	struct fuse_inode *fi = get_fuse_inode(inode);
 	struct fuse_conn *fc = ff->fm->fc;
 
 	/* DAX mmap is superior to direct_io mmap */
-	if (FUSE_IS_DAX(file_inode(file)))
+	if (FUSE_IS_DAX(inode))
 		return fuse_dax_mmap(file, vma);
 
 	if (ff->open_flags & FOPEN_DIRECT_IO) {
-		/* Can't provide the coherency needed for MAP_SHARED
-		 * if FUSE_DIRECT_IO_RELAX isn't set.
-		 */
-		if ((vma->vm_flags & VM_MAYSHARE) && !fc->direct_io_relax)
-			return -ENODEV;
+		if (vma->vm_flags & VM_MAYSHARE) {
+			/* Can't provide the coherency needed for MAP_SHARED
+			 * if FUSE_DIRECT_IO_RELAX isn't set.
+			 */
+			if (!fc->direct_io_relax)
+				return -ENODEV;
+
+			if (vma->vm_flags & VM_MAYWRITE) {
+				if (!test_bit(FUSE_I_CACHE_IO_MODE, &fi->state))
+					set_bit(FUSE_I_CACHE_IO_MODE, &fi->state);
+
+				/* direct-io with shared locks cannot handle
+				 * page cache io - wait until it is done
+				 */
+				if (fi->shared_lock_direct_io_ctr != 0) {
+					wait_event(fi->direct_io_waitq,
+						   READ_ONCE(fi->shared_lock_direct_io_ctr) == 0);
+				}
+			}
+		}
 
 		invalidate_inode_pages2(file->f_mapping);
 
@@ -3265,7 +3347,9 @@ void fuse_init_file_inode(struct inode *inode, unsigned int flags)
 	INIT_LIST_HEAD(&fi->write_files);
 	INIT_LIST_HEAD(&fi->queued_writes);
 	fi->writectr = 0;
+	fi->shared_lock_direct_io_ctr = 0;
 	init_waitqueue_head(&fi->page_waitq);
+	init_waitqueue_head(&fi->direct_io_waitq);
 	fi->writepages = RB_ROOT;
 
 	if (IS_ENABLED(CONFIG_FUSE_DAX))
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index 6e6e721f421b..27750251d0e5 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -84,6 +84,9 @@ struct fuse_inode {
 	/* Which attributes are invalid */
 	u32 inval_mask;
 
+	/* number of open files for this inode */
+	u32 open_ctr;
+
 	/** The sticky bit in inode->i_mode may have been removed, so
 	    preserve the original mode */
 	umode_t orig_i_mode;
@@ -110,11 +113,17 @@ struct fuse_inode {
 			 * (FUSE_NOWRITE) means more writes are blocked */
 			int writectr;
 
+			/* counter of tasks with shared lock direct-io writes */
+			int shared_lock_direct_io_ctr;
+
 			/* Waitq for writepage completion */
 			wait_queue_head_t page_waitq;
 
 			/* List of writepage requestst (pending or sent) */
 			struct rb_root writepages;
+
+			/* waitq for direct-io completion */
+			wait_queue_head_t direct_io_waitq;
 		};
 
 		/* readdir cache (directory only) */
@@ -172,6 +181,9 @@ enum {
 	FUSE_I_BAD,
 	/* Has btime */
 	FUSE_I_BTIME,
+	/* Has page cache IO */
+	FUSE_I_CACHE_IO_MODE,
+
 };
 
 struct fuse_conn;
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index 74d4f09d5827..311d1ed73fb7 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -83,6 +83,7 @@ static struct inode *fuse_alloc_inode(struct super_block *sb)
 	fi->attr_version = 0;
 	fi->orig_ino = 0;
 	fi->state = 0;
+	fi->open_ctr = 0;
 	mutex_init(&fi->mutex);
 	spin_lock_init(&fi->lock);
 	fi->forget = fuse_alloc_forget();
