Re: [PATCH] ceph: don't leak snap_rwsem in handle_cap_grant

On 6/6/22 6:26 PM, Jeff Layton wrote:
On Mon, 2022-06-06 at 13:17 +0800, Xiubo Li wrote:
On 6/4/22 4:39 AM, Jeff Layton wrote:
When handle_cap_grant is called on an IMPORT op, the snap_rwsem is
held and the function is expected to release it before returning. It
currently fails to do so in all cases, which could lead to a deadlock.

URL: https://tracker.ceph.com/issues/55857
Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
---
   fs/ceph/caps.c | 27 +++++++++++++--------------
   1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 258093e9074d..0a48bf829671 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -3579,24 +3579,23 @@ static void handle_cap_grant(struct inode *inode,
 			fill_inline = true;
 	}
 
-	if (ci->i_auth_cap == cap &&
-	    le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
-		if (newcaps & ~extra_info->issued)
-			wake = true;
+	if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
+		if (ci->i_auth_cap == cap) {
+			if (newcaps & ~extra_info->issued)
+				wake = true;
+
+			if (ci->i_requested_max_size > max_size ||
+			    !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
+				/* re-request max_size if necessary */
+				ci->i_requested_max_size = 0;
+				wake = true;
+			}
 
-		if (ci->i_requested_max_size > max_size ||
-		    !(le32_to_cpu(grant->wanted) & CEPH_CAP_ANY_FILE_WR)) {
-			/* re-request max_size if necessary */
-			ci->i_requested_max_size = 0;
-			wake = true;
+			ceph_kick_flushing_inode_caps(session, ci);
 		}
-
-		ceph_kick_flushing_inode_caps(session, ci);
-		spin_unlock(&ci->i_ceph_lock);
 		up_read(&session->s_mdsc->snap_rwsem);
-	} else {
-		spin_unlock(&ci->i_ceph_lock);
 	}
+	spin_unlock(&ci->i_ceph_lock);
 
 	if (fill_inline)
 		ceph_fill_inline_data(inode, NULL, extra_info->inline_data,
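
For reference, the tail of handle_cap_grant() after this patch reduces to
roughly the following (a simplified sketch of the new control flow only; the
wake/max_size bookkeeping shown in the diff above is elided):

	if (le32_to_cpu(grant->op) == CEPH_CAP_OP_IMPORT) {
		if (ci->i_auth_cap == cap) {
			/* auth-cap-only work: wakeups, max_size re-request */
			ceph_kick_flushing_inode_caps(session, ci);
		}
		/* IMPORT callers hold snap_rwsem for read; always drop it */
		up_read(&session->s_mdsc->snap_rwsem);
	}
	/* i_ceph_lock is now released on every path, not per branch */
	spin_unlock(&ci->i_ceph_lock);
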
I am not sure what the following code at Line#4139 is meant to do; for more
detail please see [1].

4131         if (msg_version >= 3) {
4132                 if (op == CEPH_CAP_OP_IMPORT) {
4133                         if (p + sizeof(*peer) > end)
4134                                 goto bad;
4135                         peer = p;
4136                         p += sizeof(*peer);
4137                 } else if (op == CEPH_CAP_OP_EXPORT) {
4138                         /* recorded in unused fields */
4139                         peer = (void *)&h->size;
4140                 }
4141         }

For the export case the members of 'peer' are always assigned 'random'
values, and it seems that could cause this issue.

[1] https://github.com/ceph/ceph-client/blob/for-linus/fs/ceph/caps.c#L4134

I am still reading the code. Any ideas?


I'm not sure that's the case. It looks like most of the places in
the MDS code that build a CEPH_CAP_OP_EXPORT message call set_cap_peer
just afterward. Why do you think this gets random values?

I misread the ceph message protocol code. Please ignore this.

-- Xiubo




