On 2/28/20 5:25 PM, Douglas Gilbert wrote:
On 2020-02-28 3:31 a.m., Hannes Reinecke wrote:
On 2/27/20 5:58 PM, Douglas Gilbert wrote:
A reviewer suggested that the extra overhead associated with a
rw lock compared to a spinlock was not worth it for short,
oft-used critical sections.
So the rwlock on the request list/array is changed to a spinlock.
The head of that list is in the owning sg file descriptor object.
Signed-off-by: Douglas Gilbert <dgilbert@xxxxxxxxxxxx>
---
drivers/scsi/sg.c | 52 +++++++++++++++++++++++------------------------
1 file changed, 26 insertions(+), 26 deletions(-)
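
For illustration, the conversion pattern repeated throughout the file is
the following (a minimal sketch, not the actual driver code; it just
mirrors the hunks below):

#include <linux/spinlock.h>
#include <linux/list.h>

struct sg_fd_sketch {                   /* stand-in for struct sg_fd */
        spinlock_t rq_list_lock;        /* was: rwlock_t rq_list_lock */
        struct list_head rq_list;
};

static void sg_walk_requests(struct sg_fd_sketch *sfp)
{
        unsigned long iflags;

        /* Both the old read_lock_irqsave() and write_lock_irqsave()
         * call sites collapse into one exclusive, IRQ-safe section.
         */
        spin_lock_irqsave(&sfp->rq_list_lock, iflags);
        /* ... short, oft-used critical section over sfp->rq_list ... */
        spin_unlock_irqrestore(&sfp->rq_list_lock, iflags);
}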
diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
index eb3b333ea30b..99d7b1f76fc7 100644
--- a/drivers/scsi/sg.c
+++ b/drivers/scsi/sg.c
@@ -139,7 +139,7 @@ struct sg_fd { /* holds the state of a file descriptor */
struct list_head sfd_entry; /* member sg_device::sfds list */
struct sg_device *parentdp; /* owning device */
wait_queue_head_t read_wait; /* queue read until command done */
- rwlock_t rq_list_lock; /* protect access to list in req_arr */
+ spinlock_t rq_list_lock; /* protect access to list in req_arr */
struct mutex f_mutex; /* protect against changes in this fd */
int timeout; /* defaults to SG_DEFAULT_TIMEOUT */
int timeout_user; /* defaults to SG_DEFAULT_TIMEOUT_USER */
@@ -738,17 +738,17 @@ sg_get_rq_mark(struct sg_fd *sfp, int pack_id)
struct sg_request *resp;
unsigned long iflags;
- write_lock_irqsave(&sfp->rq_list_lock, iflags);
+ spin_lock_irqsave(&sfp->rq_list_lock, iflags);
list_for_each_entry(resp, &sfp->rq_list, entry) {
/* look for requests that are ready + not SG_IO owned */
if (resp->done == 1 && !resp->sg_io_owned &&
(-1 == pack_id || resp->header.pack_id == pack_id)) {
resp->done = 2; /* guard against other readers */
- write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+ spin_unlock_irqrestore(&sfp->rq_list_lock, iflags);
return resp;
}
}
- write_unlock_irqrestore(&sfp->rq_list_lock, iflags);
+ spin_unlock_irqrestore(&sfp->rq_list_lock, iflags);
return NULL;
}
@@ -802,9 +802,9 @@ srp_done(struct sg_fd *sfp, struct sg_request *srp)
unsigned long flags;
int ret;
- read_lock_irqsave(&sfp->rq_list_lock, flags);
+ spin_lock_irqsave(&sfp->rq_list_lock, flags);
ret = srp->done;
- read_unlock_irqrestore(&sfp->rq_list_lock, flags);
+ spin_unlock_irqrestore(&sfp->rq_list_lock, flags);
return ret;
}
This one is pretty much pointless.
By the time the 'return' statement is reached, the 'ret' value is
already stale, and the lock has achieved precisely nothing.
Use READ_ONCE() and drop the lock.
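Something like this, say (a sketch of that suggestion, assuming 'done'
remains a plain int for now):

static int
srp_done(struct sg_fd *sfp, struct sg_request *srp)
{
        /* A naturally aligned int load needs no lock here; READ_ONCE()
         * only prevents the compiler from tearing or re-reading it.
         * The value can still go stale immediately, as noted above.
         */
        return READ_ONCE(srp->done);
}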
Yes. And it's gone in a later patch, where 'done' becomes part of the
atomic state variable (i.e. rq_st) in each sg_request object.
Again, I'm restricting the changes in this patch to what its title
describes.
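(For reference, a sketch of that later arrangement; the enum values here
are hypothetical, only the rq_st name comes from the series:)

#include <linux/atomic.h>

enum sg_rq_state {                      /* hypothetical state names */
        SG_RS_INACTIVE,
        SG_RS_INFLIGHT,
        SG_RS_AWAIT_RCV,
        SG_RS_DONE_RD,
};

struct sg_request_sketch {              /* stand-in for struct sg_request */
        atomic_t rq_st;                 /* subsumes the old 'done' flag */
};

static enum sg_rq_state
sg_rq_state_sketch(const struct sg_request_sketch *srp)
{
        /* atomic_read() is a lockless single load, so srp_done() and
         * its lock round trip reduce to this.
         */
        return atomic_read(&srp->rq_st);
}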
Yeah, I've seen it in the later patches.
Reviewed-by: Hannes Reinecke <hare@xxxxxxx>
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare@xxxxxxx +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer