Re: [PATCH v4] ceph: fix NULL pointer dereference for req->r_session

On 10/11/2022 19:49, Ilya Dryomov wrote:
On Thu, Nov 10, 2022 at 3:08 AM <xiubli@xxxxxxxxxx> wrote:
From: Xiubo Li <xiubli@xxxxxxxxxx>

The request's r_session may be changed when the request is forwarded
or resent.

Cc: stable@xxxxxxxxxxxxxxx
URL: https://bugzilla.redhat.com/show_bug.cgi?id=2137955
Signed-off-by: Xiubo Li <xiubli@xxxxxxxxxx>
---

Changed in V4:
- move mdsc->mutex acquisition and max_sessions assignment into "if (req1 || req2)" branch

  fs/ceph/caps.c | 54 +++++++++++++++-----------------------------------
  1 file changed, 16 insertions(+), 38 deletions(-)

diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
index 894adfb4a092..1c84be839087 100644
--- a/fs/ceph/caps.c
+++ b/fs/ceph/caps.c
@@ -2297,7 +2297,6 @@ static int flush_mdlog_and_wait_inode_unsafe_requests(struct inode *inode)
         struct ceph_mds_client *mdsc = ceph_sb_to_client(inode->i_sb)->mdsc;
         struct ceph_inode_info *ci = ceph_inode(inode);
         struct ceph_mds_request *req1 = NULL, *req2 = NULL;
-       unsigned int max_sessions;
         int ret, err = 0;

         spin_lock(&ci->i_unsafe_lock);
@@ -2315,28 +2314,24 @@ static int flush_mdlog_and_wait_inode_unsafe_requests(struct inode *inode)
         }
         spin_unlock(&ci->i_unsafe_lock);

-       /*
-        * The mdsc->max_sessions is unlikely to be changed
-        * mostly, here we will retry it by reallocating the
-        * sessions array memory to get rid of the mdsc->mutex
-        * lock.
-        */
-retry:
-       max_sessions = mdsc->max_sessions;
-
         /*
          * Trigger to flush the journal logs in all the relevant MDSes
          * manually, or in the worst case we must wait at most 5 seconds
          * to wait the journal logs to be flushed by the MDSes periodically.
          */
-       if ((req1 || req2) && likely(max_sessions)) {
-               struct ceph_mds_session **sessions = NULL;
-               struct ceph_mds_session *s;
+       if (req1 || req2) {
                 struct ceph_mds_request *req;
+               struct ceph_mds_session **sessions;
+               struct ceph_mds_session *s;
+               unsigned int max_sessions;
                 int i;

+               mutex_lock(&mdsc->mutex);
+               max_sessions = mdsc->max_sessions;
+
                 sessions = kcalloc(max_sessions, sizeof(s), GFP_KERNEL);
                 if (!sessions) {
+                       mutex_unlock(&mdsc->mutex);
                         err = -ENOMEM;
                         goto out;
                 }
@@ -2346,18 +2341,8 @@ static int flush_mdlog_and_wait_inode_unsafe_requests(struct inode *inode)
                         list_for_each_entry(req, &ci->i_unsafe_dirops,
                                             r_unsafe_dir_item) {
                                 s = req->r_session;
-                               if (!s)
+                               if (!s || unlikely(s->s_mds >= max_sessions))
Hi Xiubo,

I would be fine with this patch as is, but I'm wondering if it can be
simplified further.  Now that mdsc->mutex is held while the sessions
array is populated, is checking s->s_mds against max_sessions actually
needed?  Is it possible for some req->r_session on one of the unsafe
lists to have an "out of bounds" s_mds while mdsc->mutex is held?

Yeah, this can be simplified.

Let me do that.

Thanks!

- Xiubo



Thanks,

                 Ilya