Re: [PATCH 13/18] scsi: target: Fix multiple LUN_RESET handling

On 3/15/23 2:11 PM, Dmitry Bogdanov wrote:
> On Wed, Mar 15, 2023 at 11:44:48AM -0500, Mike Christie wrote:
>>
>> On 3/15/23 11:13 AM, Dmitry Bogdanov wrote:
>>> On Thu, Mar 09, 2023 at 04:33:07PM -0600, Mike Christie wrote:
>>>>
>>>> This fixes a bug where an initiator thinks a LUN_RESET has cleaned
>>>> up running commands when it hasn't. The bug was added in:
>>>>
>>>> commit 51ec502a3266 ("target: Delete tmr from list before processing")
>>>>
>>>> The problem occurs when:
>>>>
>>>> 1. We have N IO cmds running in the target layer spread over 2 sessions.
>>>> 2. The initiator sends a LUN_RESET for each session.
>>>> 3. session1's LUN_RESET loops over all the running commands from both
>>>> sessions and moves them to its local drain_task_list.
>>>> 4. session2's LUN_RESET does not see the LUN_RESET from session1 because
>>>> the commit above has it remove itself. session2 also does not see any
>>>> commands since the other reset moved them off the state lists.
>>>> 5. session2's LUN_RESET will then complete with a successful response.
>>>> 6. session2's initiator believes the running commands on its session are
>>>> now cleaned up due to the successful response and cleans up the running
>>>> commands from its side. It then restarts them.
>>>> 7. The commands do eventually complete on the backend and the target
>>>> starts to return aborted task statuses for them. The initiator will
>>>> either throw an invalid ITT error or might accidentally look up a new task
>>>> if the ITT has been reallocated already.
>>>>
>>>> This fixes the bug by reverting the patch, and it also serializes the
>>>> execution of LUN_RESETs and Preempt and Aborts. The latter is necessary
>>>> because it turns out the commit accidentally fixed a bug where, if there
>>>> are 2 LUN_RESETs executing, they can see each other on the dev_tmr_list,
>>>> put each other on their local drain lists, and then end up waiting on
>>>> each other, resulting in a deadlock.
>>>
>>> If LUN_RESET is not in the TMR list anymore, there is no need to
>>> serialize core_tmr_drain_tmr_list.
>>
>> Ah shoot, yeah, I miswrote that. I meant I needed the serialization for my
>> bug, not yours.
> 
> I still don't get why you are wrapping core_tmr_drain_*_list in a mutex.
> general_tmr_list has only aborts now, and they do not wait for other aborts.

Do you mean I don't need the mutex for the bug I originally hit that's described
at the beginning? If you're saying I don't need it for 2 resets running at the
same time, I agree. I thought I needed it if we have a RESET and a Preempt and
Abort:

1. You have 2 sessions. There are no TMRs initially.
2. session1 gets Preempt and Abort. It calls core_tmr_drain_state_list
and takes all the cmds from both sessions and puts them on the local
drain_task_list list.
3. session1 or 2 gets a LUN_RESET, it sees no cmds on the device's
state_lists, and returns success.
4. The initiator thinks the commands were cleaned up by the LUN_RESET.

- The initiator could end up re-using the ITT while the original task being
cleaned up is still running. Then, depending on which session got what and
whether TAS was set, if the original command completes first the initiator
would think the second command failed with SAM_STAT_TASK_ABORTED.

- If there was no TAS, or the RESET and Preempt and Abort were on the same
session, then we could still hit a bug. We get the RESET response, the
initiator might retry the cmds or fail them, and the app might retry. The
retry might go down a completely different path on the target (like if hw
queue1 was blocked and had the original command, but this retry goes down hw
queue2 due to being received on a different CPU, so it completes right away).
We do some new IO. Then hw queue1 unblocks and overwrites the new IO.

With the mutex, the LUN_RESET will wait for the Preempt and Abort, which is
waiting on the running commands. I could have had Preempt and Abort create a
tmr and queue a work so it goes through that path, but I thought faking it
that way looked uglier.
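
Roughly, what I have in mind is the sketch below (lun_reset_mutex is a
placeholder name for a new per-device mutex; the elided parts are the
existing code):

static int core_tmr_lun_reset(struct se_device *dev, struct se_tmr_req *tmr,
			      struct list_head *preempt_and_abort_list,
			      struct se_cmd *prout_cmd)
{
	/* ... existing setup of tas, tmr_sess, etc. ... */

	/*
	 * Only allow one LUN_RESET or Preempt and Abort to drain at a
	 * time. Both paths come through here, so a LUN_RESET that races
	 * with a Preempt and Abort now blocks until the other side has
	 * drained the state lists and waited on the running commands.
	 */
	mutex_lock(&dev->lun_reset_mutex);
	core_tmr_drain_tmr_list(dev, tmr, preempt_and_abort_list);
	core_tmr_drain_state_list(dev, prout_cmd, tmr_sess, tas,
				  preempt_and_abort_list);
	mutex_unlock(&dev->lun_reset_mutex);

	/* ... existing completion handling ... */
	return 0;
}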


> 
>>
>>>>
>>>>         if (cmd->transport_state & CMD_T_ABORTED)
>>>> @@ -3596,6 +3597,22 @@ static void target_tmr_work(struct work_struct *work)
>>>>                         target_dev_ua_allocate(dev, 0x29,
>>>>                                                ASCQ_29H_BUS_DEVICE_RESET_FUNCTION_OCCURRED);
>>>>                 }
>>>> +
>>>> +               /*
>>>> +                * If this is the last reset the device can be freed after we
>>>> +                * run transport_cmd_check_stop_to_fabric. Figure out if there
>>>> +                * are other resets that need to be scheduled while we know we
>>>> +                * have a refcount on the device.
>>>> +                */
>>>> +               spin_lock_irq(&dev->se_tmr_lock);
>>>
>>> tmr->tmr_list is removed from the list at the very end of the se_cmd
>>> lifecycle, so any number of LUN_RESETs can be on lun_reset_tmr_list, and
>>> all of them can be finished but not yet removed from the list.
>>
>> Don't we remove it from the list a little later in this function when
>> we call transport_lun_remove_cmd?
> 
> OMG, yes, of course, you are right. I messed something up.
> 
> But I still have concerns:
> transport_lookup_tmr_lun (where the LUN_RESET is added to the list) and
> transport_generic_handle_tmr (where the LUN_RESET is scheduled to be
> handled) are not serialized. With the code below you can start the second
> LUN_RESET while transport_generic_handle_tmr has not yet been called for
> it. The _handle_tmr could be delayed long enough for that second LUN_RESET
> to be handled and the flag to be cleared. _handle_tmr will then start the
> work again.

Ah yeah, nice catch.
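
Just to make sure I'm reading the race right, the interleaving is roughly
(sketch):

	session A                              session B
	---------                              ---------
	transport_lookup_tmr_lun()
	  adds tmr2 to lun_reset_tmr_list
	                                       target_tmr_work() finishes tmr1,
	                                       finds tmr2 on the list, and runs
	                                       it; the "reset running" state is
	                                       cleared when tmr2 completes
	transport_generic_handle_tmr()
	  finally runs for tmr2, sees no
	  reset running, and queues tmr2's
	  work a second time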

> 
> Is it worth doing that list management? Is it not enough to just wrap the
> call to core_tmr_lun_reset() in target_tmr_work with a mutex?

I can just do the mutex.

I was trying to reduce how many threads we use, but the big win is for
aborts. I'll work on that type of thing in a separate patchset.
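
So target_tmr_work would end up looking something like the below (sketch
only). With the mutex taken inside core_tmr_lun_reset() as above, it also
covers the Preempt and Abort caller, and the end-of-work rescheduling block
and its flag can go away:

static void target_tmr_work(struct work_struct *work)
{
	struct se_cmd *cmd = container_of(work, struct se_cmd, work);
	struct se_device *dev = cmd->se_dev;
	struct se_tmr_req *tmr = cmd->se_tmr_req;
	int ret;

	/* ... existing CMD_T_ABORTED check ... */

	switch (tmr->function) {
	case TMR_LUN_RESET:
		/*
		 * Concurrent LUN_RESETs just queue up on the mutex in
		 * core_tmr_lun_reset(), so no list juggling is needed here.
		 */
		ret = core_tmr_lun_reset(dev, tmr, NULL, NULL);
		tmr->response = ret ? TMR_FUNCTION_REJECTED :
				      TMR_FUNCTION_COMPLETE;
		break;
	/* ... other TMR functions ... */
	}

	/* ... existing UA allocation and check_stop handling ... */
}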


> Better to have a separate variable used only under the lock.
>
Will fix.



