Re: [PATCH 1/3] hpsa: remove unneeded loop

On Thu, Aug 01, 2013 at 04:59:45PM +0200, Tomas Henzl wrote:
> On 08/01/2013 04:21 PM, scameron@xxxxxxxxxxxxxxxxxx wrote:
> > On Thu, Aug 01, 2013 at 04:05:20PM +0200, Tomas Henzl wrote:
> >> On 08/01/2013 03:39 PM, scameron@xxxxxxxxxxxxxxxxxx wrote:
> >>> On Thu, Aug 01, 2013 at 03:11:22PM +0200, Tomas Henzl wrote:
> >>>> From: Tomas Henzl <thenzl@xxxxxxxxxx>
> >>>>
> >>>> The cmd_pool_bits bitmap is protected everywhere by a spinlock, so we
> >>>> don't need test_and_set_bit; set_bit is enough and the loop can be
> >>>> removed too.
> >>>>
> >>>> Signed-off-by: Tomas Henzl <thenzl@xxxxxxxxxx>
> >>>> ---
> >>>>  drivers/scsi/hpsa.c | 15 ++++++---------
> >>>>  1 file changed, 6 insertions(+), 9 deletions(-)
> >>>>
> >>>> diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
> >>>> index 796482b..d7df01e 100644
> >>>> --- a/drivers/scsi/hpsa.c
> >>>> +++ b/drivers/scsi/hpsa.c
> >>>> @@ -2662,15 +2662,12 @@ static struct CommandList *cmd_alloc(struct ctlr_info *h)
> >>>>  	unsigned long flags;
> >>>>  
> >>>>  	spin_lock_irqsave(&h->lock, flags);
> >>>> -	do {
> >>>> -		i = find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds);
> >>>> -		if (i == h->nr_cmds) {
> >>>> -			spin_unlock_irqrestore(&h->lock, flags);
> >>>> -			return NULL;
> >>>> -		}
> >>>> -	} while (test_and_set_bit
> >>>> -		 (i & (BITS_PER_LONG - 1),
> >>>> -		  h->cmd_pool_bits + (i / BITS_PER_LONG)) != 0);
> >>>> +	i = find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds);
> >>>> +	if (i == h->nr_cmds) {
> >>>> +		spin_unlock_irqrestore(&h->lock, flags);
> >>>> +		return NULL;
> >>>> +	}
> >>>> +	set_bit(i & (BITS_PER_LONG - 1), h->cmd_pool_bits + (i / BITS_PER_LONG));
> >>>>  	h->nr_allocs++;
> >>>>  	spin_unlock_irqrestore(&h->lock, flags);
> >>>>  
> >>>> -- 
> >>>> 1.8.3.1
> >>>>
> >>> Would it be better instead to just not use the spinlock for protecting
> >>> cmd_pool_bits?  I have thought about doing this for a while, but haven't
> >>> gotten around to it.
> >>>
> >>> I think the while loop is safe without the spin lock.  And then it is
> >>> not needed in cmd_free either.
> >> I was evaluating the same idea for a while too: a loop with just the
> >> test_and_set_bit inside, maybe even a stored value from last time so the
> >> search starts at a likely-empty bit, to tune it a little. But I know almost
> >> nothing about the usage pattern, so I went with the least invasive change
> >> to the existing code, so as not to make it worse.
> > The only reason I haven't done it is that I'm loath to make such a change to
> > the main i/o path without testing it like crazy before unleashing it, and
> > it's never been a convenient time around here to slide such a change in and
> > get it properly tested (and there are other rather large changes brewing).
> >
> > However, we have been using a similar scheme with the SCSI over PCIe driver,
> > here: https://github.com/HPSmartStorage/scsi-over-pcie/blob/master/block/sop.c
> > in alloc_request() around line 1476 without problems, and nvme-core.c contains
> > similar code in alloc_cmdid(), so I am confident it's sound in principle.
> > I would still want to beat on it, though, in case it ends up exposing a
> > firmware bug or something (not that I think it will, but you never know).
> 
> I think the code is sound; maybe it could hypothetically return -EBUSY, because
> find_first_zero_bit is not atomic, but the probability is so low that it doesn't matter.
> Btw., on line 1284 - isn't that similar to patch 2/3?

find_first_zero_bit is not atomic, but the test_and_set_bit, which is what
counts, is.  The fact that find_first_zero_bit is not atomic confused me about
this code for a long time, and it is why the spin lock was there in the first
place.  But if there's a race on find_first_zero_bit and it returns the same
bit to multiple concurrent threads, only one thread will win the
test_and_set_bit; the others will go back around the loop, try again, and get
a different bit.
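
Roughly, the lock-free loop I have in mind is just the one this patch removes,
minus the spinlock - an untested sketch, reusing the existing hpsa.c fields
(the nr_allocs counter would need to become atomic or go away, since it would
no longer be under the lock):

	do {
		/* not atomic - only a hint for which bit to try */
		i = find_first_zero_bit(h->cmd_pool_bits, h->nr_cmds);
		if (i == h->nr_cmds)
			return NULL;	/* pool looks exhausted */
		/*
		 * test_and_set_bit() is atomic: if several threads race to
		 * the same bit, exactly one wins; the losers loop and pick
		 * up a different free bit on the next pass.
		 */
	} while (test_and_set_bit(i & (BITS_PER_LONG - 1),
		 h->cmd_pool_bits + (i / BITS_PER_LONG)) != 0);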

I don't think a thread can get stuck in there, losing the race over and over
until all the bits are used up, because there should always be enough bits for
all the commands we would ever try to send concurrently; every thread that
gets in there should eventually win a bit.

Or, am I missing some subtlety?

> 
> Back to this patch - we can take it as it is (because of the spinlock it should
> be safe), or drop it and you can post a spinlock-less patch instead. I'll leave
> the decision to you.

I think I like the spin-lock-less variant better.  But I want to test it out
here for a while first.
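
For the cmd_free() side mentioned above, the matching change would be roughly
as sketched below (untested; it assumes the index is recovered from the pool
base the way the existing cmd_free() does).  clear_bit() is atomic, so the
h->lock protection around the bitmap shouldn't be needed there either:

	i = c - h->cmd_pool;	/* index of this command in the pool */
	clear_bit(i & (BITS_PER_LONG - 1),
		  h->cmd_pool_bits + (i / BITS_PER_LONG));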

-- steve

> 
> tomash
> 
> 
> >
> > -- steve
> >
> >
> >  
> >>
> >>> -- steve
> >>>



