Re: g_mass_storage not queueing requests

Hi,

Felipe Balbi <felipe.balbi@xxxxxxxxxxxxxxx> writes:
>>> get_next_command() still sees bh->state != BUF_STATE_FULL.
>>>
>>>> [   34.123286] ===> sleep_thread 634: caller fsg_main_thread+0xaa1/0xb00 [usb_f_mass_storage]: going to sleep
>>>> [   34.123289] ===> sleep_thread 634: caller fsg_main_thread+0xaa1/0xb00 [usb_f_mass_storage]: going to sleep
>>>
>>> so it goes to sleep and never wakes up again.
>>
>> one thing that caught my attention is that not all accesses to bh->state
>> are locked. Is that an oversight or is there a valid reason for that?
>>
>> Considering that a locked access to bh->state would always imply a full
>> barrier (at least on x86, per Peter Z), I'm wondering if we should make
>> every access to bh->state locked. Or, perhaps, they should all be
>> preceded by a call to smp_mb() in order to guarantee that previous
>> writes to bh->state are seen by the current CPU?
>>
>> I can test that out, but I'll wait for a proper patch from you as I
>> cannot predict all memory ordering problems that could arise from
>> current code. Still, f_mass_storage.c looks really odd :-s
>
> okay, I found the one place where changing smp_wmb() to smp_mb() solves
> the problem
>
> @@ -395,7 +395,7 @@ static int fsg_set_halt(struct fsg_dev *fsg, struct usb_ep *ep)
>  /* Caller must hold fsg->lock */
>  static void wakeup_thread(struct fsg_common *common)
>  {
> -	smp_wmb();	/* ensure the write of bh->state is complete */
> +	smp_mb();	/* ensure the write of bh->state is complete */
>  	/* Tell the main thread that something has happened */
>  	common->thread_wakeup_needed = 1;
>  	if (common->thread_task)
>
> Based on what Peter Z said in our other thread, this shouldn't be
> necessary in this particular case: bulk_out_complete() writes to
> bh->state inside a spin_lock(), which is implemented with a full
> barrier on x86.
>
> So how come we need another full barrier in wakeup_thread()?
>
> Is it because the call to wake_up_process() is *also* inside the
> spinlocked region and the full barrier is only added at spin_unlock()
> time? (just wondering here)

okay, I'm thinking that's a plausible explanation; please correct me if
I'm wrong. What I'm speculating now is that, inside a locked region,
reordering can happen (can it?), which means that our:

	spin_lock(&common->lock);
	bh->outreq_busy = 0;
	bh->state = BUF_STATE_FULL;
	wakeup_thread(common);
	spin_unlock(&common->lock);

ends up being reordered as:

	spin_lock(&common->lock);
	bh->outreq_busy = 0;
	wakeup_thread(common);
	bh->state = BUF_STATE_FULL;
	spin_unlock(&common->lock);

which triggers the fault. Is that possible?
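
For completeness, here's how I read the consumer side of this handshake.
This is a simplified paraphrase of get_next_command()/sleep_thread() from
memory, not the actual code, so take it with a grain of salt:

	/* get_next_command(), roughly: */
	while (bh->state != BUF_STATE_FULL)
		sleep_thread(common);

	/* sleep_thread(), roughly: */
	set_current_state(TASK_INTERRUPTIBLE);
	if (!common->thread_wakeup_needed)
		schedule();
	__set_current_state(TASK_RUNNING);
	common->thread_wakeup_needed = 0;
	smp_rmb();	/* orders reads only, not the store just above */

If that reordering can really happen, the main thread could see
thread_wakeup_needed == 1 while bh->state is still stale, clear the
flag, re-check the (still stale) state and go back to sleep with nothing
left to wake it up again, which would match the two back-to-back "going
to sleep" messages in the trace above. But, again, I'm just speculating.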

Looking at the history.git tree [0] I found a little more of the
file_storage gadget's history. Here are some of the details I could
find:

When you first wrote the file_storage gadget, wakeup_thread() was called
outside the lock from bulk_{in,out}_complete() (see [1]). A follow-up
patch ended up moving wakeup_thread() inside the locked region (while
also removing a wait_queue()) which, I'm speculating, introduced the
race condition (see [2]).

Another piece of speculation (which I can test, if necessary) is that
larger mass storage buffers (e.g. 64k or 128k) would hide this race
because it would take longer for the UDC to complete the request. This
be why we never saw this before with HS-only UDCs.

Are we really dealing with a regression introduced back in 2.6.15?

[0] https://git.kernel.org/cgit/linux/kernel/git/history/history.git/
[1] https://git.kernel.org/cgit/linux/kernel/git/history/history.git/commit/?id=23a070c9db2e444360f05cf69b33c8248b03bc06
[2] https://git.kernel.org/cgit/linux/kernel/git/history/history.git/commit/?id=a21d4fed4b00eaf7e7c3b2e2b25de24f540bfa66

-- 
balbi


