Re: [PATCH v7 11/38] sg: change rwlock to spinlock

On 2020-02-28 3:31 a.m., Hannes Reinecke wrote:
On 2/27/20 5:58 PM, Douglas Gilbert wrote:
A reviewer suggested that the extra overhead associated with a
rw lock compared to a spinlock was not worth it for short,
oft-used critical sections.

So the rwlock on the request list/array is changed to a spinlock.
The head of that list is in the owning sg file descriptor object.

Signed-off-by: Douglas Gilbert <dgilbert@xxxxxxxxxxxx>
---
  drivers/scsi/sg.c | 52 +++++++++++++++++++++++------------------------
  1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index eb3b333ea30b..99d7b1f76fc7 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -139,7 +139,7 @@ struct sg_fd {        /* holds the state of a file descriptor */
      struct list_head sfd_entry;    /* member sg_device::sfds list */
      struct sg_device *parentdp;    /* owning device */
      wait_queue_head_t read_wait;    /* queue read until command done */
-    rwlock_t rq_list_lock;    /* protect access to list in req_arr */
+    spinlock_t rq_list_lock;    /* protect access to list in req_arr */
      struct mutex f_mutex;    /* protect against changes in this fd */
      int timeout;        /* defaults to SG_DEFAULT_TIMEOUT      */
      int timeout_user;    /* defaults to SG_DEFAULT_TIMEOUT_USER */
@@ -738,17 +738,17 @@ sg_get_rq_mark(struct sg_fd *sfp, int pack_id)
      struct sg_request *resp;
      unsigned long iflags;

-    write_lock_irqsave(&sfp->rq_list_lock, iflags);
+    spin_lock_irqsave(&sfp->rq_list_lock, iflags);
      list_for_each_entry(resp, &sfp->rq_list, entry) {
          /* look for requests that are ready + not SG_IO owned */
          if (resp->done == 1 && !resp->sg_io_owned &&
              (-1 == pack_id || resp->header.pack_id == pack_id)) {
              resp->done = 2;    /* guard against other readers */
-            write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+            spin_unlock_irqrestore(&sfp->rq_list_lock, iflags);
              return resp;
          }
      }
-    write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+    spin_unlock_irqrestore(&sfp->rq_list_lock, iflags);
      return NULL;
  }
@@ -802,9 +802,9 @@ srp_done(struct sg_fd *sfp, struct sg_request *srp)
      unsigned long flags;
      int ret;

-    read_lock_irqsave(&sfp->rq_list_lock, flags);
+    spin_lock_irqsave(&sfp->rq_list_lock, flags);
      ret = srp->done;
-    read_unlock_irqrestore(&sfp->rq_list_lock, flags);
+    spin_unlock_irqrestore(&sfp->rq_list_lock, flags);
      return ret;
  }

This one is pretty much pointless.
By the time the 'return' statement is reached, the 'ret' value is already stale, and the lock has achieved precisely nothing.
Use READ_ONCE() and drop the lock.
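
Something along these lines (a sketch only; it assumes nothing else in
srp_done() needs the lock, and the answer can go stale either way):

    static int
    srp_done(struct sg_fd *sfp, struct sg_request *srp)
    {
        /* lockless read; the value can become stale the moment it
         * is returned, just as it could while holding the lock */
        return READ_ONCE(srp->done);
    }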

Yes. And it's gone in a later patch, where it becomes part of the atomic
state variable (i.e. rq_st) that each sg_request object carries.
Again, I'm restricting this patch's changes to what its title says.
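
For illustration, the shape that takes is roughly the following (a
hypothetical sketch; the enum names and their ordering are assumptions
for this example, not verbatim from the later patches):

    enum sg_rq_state {          /* assumed ordering: later == further along */
        SG_RS_INACTIVE = 0,     /* request slot free for re-use */
        SG_RS_INFLIGHT,         /* command submitted, not yet done */
        SG_RS_AWAIT_RCV,        /* done, awaiting read/receive */
        SG_RS_RCV_DONE,         /* response delivered to the user */
    };

    /* each sg_request then carries: atomic_t rq_st; */

    static inline bool
    sg_rq_done(struct sg_request *srp)
    {
        /* no rq_list_lock needed; one atomic read answers "done?" */
        return atomic_read(&srp->rq_st) >= SG_RS_AWAIT_RCV;
    }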

Doug Gilbert
