[...]
I think this patch should fix it:
[PATCH] SQUASH: ensure we unset lock_snap_rwsem after unlocking it
Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
---
fs/ceph/inode.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
index eebbd0296004..cb0ad0faee45 100644
--- a/fs/ceph/inode.c
+++ b/fs/ceph/inode.c
@@ -2635,8 +2635,10 @@ int __ceph_setattr(struct inode *inode, struct iattr *attr, struct ceph_iattr *c
 	release &= issued;
 	spin_unlock(&ci->i_ceph_lock);
-	if (lock_snap_rwsem)
+	if (lock_snap_rwsem) {
 		up_read(&mdsc->snap_rwsem);
+		lock_snap_rwsem = false;
+	}
 
 	if (inode_dirty_flags)
 		__mark_inode_dirty(inode, inode_dirty_flags);
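The reason the flag has to be cleared is that __ceph_setattr() can reach a later flag-guarded unlock (e.g. an error or retry path in the reworked function) with lock_snap_rwsem still set, which would mean a second up_read() on a rwsem we no longer hold. Here's a minimal userspace sketch of the pattern, with a pthread rwlock standing in for snap_rwsem; illustration only, not the actual fs/ceph code:

/*
 * Userspace analogue of the bug the SQUASH patch fixes: a bool that
 * is supposed to mean "we hold the lock" goes stale after an early
 * release, and a later flag-guarded unlock would fire a second time.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_rwlock_t snap_rwsem = PTHREAD_RWLOCK_INITIALIZER;

static int do_setattr(bool need_snap_lock)
{
	bool lock_snap_rwsem = false;

	if (need_snap_lock) {
		pthread_rwlock_rdlock(&snap_rwsem);
		lock_snap_rwsem = true;
	}

	/* ... update inode state under the lock ... */

	/*
	 * Early release, as in the hunk above. Without clearing the
	 * flag here, the exit path below would unlock again.
	 */
	if (lock_snap_rwsem) {
		pthread_rwlock_unlock(&snap_rwsem);
		lock_snap_rwsem = false;	/* the SQUASH fix */
	}

	/* ... more work; some paths may retake the lock ... */

	/* Common exit path: only unlock if we still hold it. */
	if (lock_snap_rwsem)
		pthread_rwlock_unlock(&snap_rwsem);
	return 0;
}

int main(void)
{
	do_setattr(true);
	puts("done, no double unlock");
	return 0;
}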
Testing with that patch on top of your latest series looks pretty good
so far.
Cool.
I see some xfstests failures that need to be investigated
(generic/075, in particular). I'll take a harder look at that next week.
I will also try this.
For now, I've gone ahead and updated wip-fscrypt-fnames to the latest
fnames branch, and also pushed a new wip-fscrypt-size branch that has
all of your patches, with the above SQUASH patch folded into #9.
I'll continue the testing next week, but I think the -size branch is
probably a good place to work from for now.
BTW, what's your test script for xfstests? I might be missing something important.
I'm mainly running:
$ sudo ./check -g quick -E ./ceph.exclude
...and ceph.exclude has:
ceph/001
generic/003
generic/531
generic/538
...most of the exclusions are because they take a long time to run.
Oh and I should say...most of the failures I've seen with this patchset
are intermittent. I suspect there is some race condition we haven't
addressed yet.
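In case it helps with reproducing: for chasing an intermittent failure, a simple approach (just a sketch; run from the xfstests directory with the same config) is to loop the suspect test until it trips:

$ for i in $(seq 1 50); do sudo ./check generic/075 || break; done

check exits nonzero when a test fails, so the loop stops on the first failure.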
Okay, my test was stuck, and I finally found that it had just run out of disk space.
I have run the truncate-related tests and they have all worked well so far.
I will test this more.
Thanks,