On Tue, 2022-03-15 at 08:26 -0400, Jeff Layton wrote:
> On Mon, 2022-03-14 at 21:07 +0100, Niels Dossche wrote:
> > ceph_add_cap says in its function documentation that the caller should
> > hold the read lock on the session snap_rwsem. Furthermore, not only
> > does ceph_add_cap need that lock: when it calls ceph_lookup_snap_realm
> > it eventually calls ceph_get_snap_realm, which asserts via lockdep that
> > snap_rwsem must be held. handle_cap_export calls ceph_add_cap
> > without mdsc->snap_rwsem held. Thus, since ceph_get_snap_realm
> > and ceph_add_cap both need the lock, the common place to acquire that
> > lock is inside handle_cap_export.
> > 
> > Signed-off-by: Niels Dossche <dossche.niels@xxxxxxxxx>
> > ---
> >  fs/ceph/caps.c | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
> > index b472cd066d1c..0dd60db285b1 100644
> > --- a/fs/ceph/caps.c
> > +++ b/fs/ceph/caps.c
> > @@ -3903,8 +3903,10 @@ static void handle_cap_export(struct inode *inode, struct ceph_mds_caps *ex,
> >  			/* add placeholder for the export tagert */
> >  			int flag = (cap == ci->i_auth_cap) ? CEPH_CAP_FLAG_AUTH : 0;
> >  			tcap = new_cap;
> > +			down_read(&mdsc->snap_rwsem);
> >  			ceph_add_cap(inode, tsession, t_cap_id, issued, 0,
> >  				     t_seq - 1, t_mseq, (u64)-1, flag, &new_cap);
> > +			up_read(&mdsc->snap_rwsem);
> >  
> >  			if (!list_empty(&ci->i_cap_flush_list) &&
> >  			    ci->i_auth_cap == tcap) {
> 
> Looks good. The other ceph_add_cap callsites already hold this.
> 
> Merged into ceph testing branch.
> 

Oops, spoke too soon. This patch calls down_read (a potentially sleeping
function) while holding the i_ceph_lock spinlock.

I think you'll need to take the rwsem earlier in the function, before
taking the spinlock.

Dropped from testing branch for now...
-- 
Jeff Layton <jlayton@xxxxxxxxxx>
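
For illustration, a rough, untested sketch of the reordering Jeff describes
(take the sleeping rwsem before the spinlock in handle_cap_export) might look
like the fragment below. The retry/out_unlock structure follows the existing
function; the elided lines and exact unlock points are assumptions here, not
the patch that was eventually merged:

	/* ... */
retry:
	down_read(&mdsc->snap_rwsem);	/* take the sleeping rwsem first */
	spin_lock(&ci->i_ceph_lock);	/* then the spinlock */
	/* ... */
			/* add placeholder for the export target */
			int flag = (cap == ci->i_auth_cap) ? CEPH_CAP_FLAG_AUTH : 0;
			tcap = new_cap;
			ceph_add_cap(inode, tsession, t_cap_id, issued, 0,
				     t_seq - 1, t_mseq, (u64)-1, flag, &new_cap);
	/* ... */
out_unlock:
	spin_unlock(&ci->i_ceph_lock);
	up_read(&mdsc->snap_rwsem);	/* release in reverse order */

Any path that drops i_ceph_lock and jumps back to retry (for example, to open
the export target session) would presumably need to drop the rwsem there as
well, so that both locks are reacquired in the same order on the next pass.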