Re: Options for improving f_fs.c performance?

Hi,

Michal Nazarewicz <mina86@xxxxxxxxxx> writes:
>>> Does this sound like something that people want or find acceptable?
>>
>> yes, it _is_ definitely acceptable. But f_fs.c needs a cleanup
>> first. Also, IMHO a better flag to implement would be O_NONBLOCK. Here's
>> how f_fs.c works today:
>>
>> write(ep2, buf, length);
>>  ffs_epfile_write_iter()
>>   ffs_epfile_io()
>>    usb_ep_queue()
>>    wait_for_completion_interruptible()
>>
>> That wait_for_completion_interruptible() is what's killing
>> performance. Each and every read/write waits for the USB side to
>> complete. It would've been much better to have something like:
>>
>> if (!(flags & O_NONBLOCK))
>> 	wait_for_completion_interruptible();
>
> We cannot return to user space before the transfer is completed though.

why not? We already copy_from_user() into our own kernel buffer.
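
Something along these lines would do (untested sketch, not written
against the real f_fs.c structures; every "sketch_" name is made up):

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/usb/gadget.h>

/* completion owns the kernel buffer; nobody sleeps waiting for it */
static void sketch_write_complete(struct usb_ep *ep,
				  struct usb_request *req)
{
	kfree(req->buf);
	usb_ep_free_request(ep, req);
}

static ssize_t sketch_nonblock_write(struct usb_ep *ep,
				     const char __user *ubuf, size_t len)
{
	struct usb_request *req;
	int ret;

	req = usb_ep_alloc_request(ep, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	/* same copy to a kernel buffer we already do today */
	req->buf = memdup_user(ubuf, len);
	if (IS_ERR(req->buf)) {
		ret = PTR_ERR(req->buf);
		usb_ep_free_request(ep, req);
		return ret;
	}

	req->length = len;
	req->complete = sketch_write_complete;

	ret = usb_ep_queue(ep, req, GFP_KERNEL);
	if (ret) {
		kfree(req->buf);
		usb_ep_free_request(ep, req);
		return ret;
	}

	/* no wait_for_completion_interruptible() here */
	return len;
}

Once the request is queued, the kernel copy is all the completion
callback needs, so there's nothing left for write() to block on.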

>> This would help the write() side of things by a long margin. For reads,
>> what we could do is have a kernel ring buffer with pre-allocated and
>> pre-queued usb_requests pointing to different pages in this ring
>> buffer. When a read() comes, instead of queueing the request right
>> there, we check if there's data in the internal ring buffer, if there
>> is, we just copy_to_user(), otherwise we either return 0 or return
>> -EAGAIN (depending on O_NONBLOCK).
>>
>> After that, it becomes easy to implement poll() for endpoint files.
>>
>> I really think this would give us a better result in the end because the
>> real problem is with that wait_for_completion_interruptible(). If you
>> put a sniffer on the traffic, I'm sure you're gonna see several SOFs
>> going by without data just because of the latency of completing the
>> current request and, finally, returning control to userspace and so on.
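
To make the read side concrete, a rough sketch of the above (untested,
locking and ring wrap-around handling omitted, "sketch_" names made up):

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/poll.h>
#include <linux/uaccess.h>
#include <linux/usb/gadget.h>
#include <linux/wait.h>

struct sketch_rxbuf {
	struct list_head	node;
	struct usb_request	*req;
};

static LIST_HEAD(sketch_filled);	/* completed, not-yet-read buffers */
static DECLARE_WAIT_QUEUE_HEAD(sketch_waitq);

/* pre-queued requests complete into the ring buffer instead of
 * waking a read() that's sleeping on the request itself */
static void sketch_read_complete(struct usb_ep *ep, struct usb_request *req)
{
	struct sketch_rxbuf *rx = req->context;

	list_add_tail(&rx->node, &sketch_filled);
	wake_up(&sketch_waitq);
}

static ssize_t sketch_read(struct usb_ep *ep, char __user *ubuf,
			   size_t len, bool nonblock)
{
	struct sketch_rxbuf *rx;
	size_t n;

	if (list_empty(&sketch_filled))
		return nonblock ? -EAGAIN : 0;

	rx = list_first_entry(&sketch_filled, struct sketch_rxbuf, node);
	n = min_t(size_t, len, rx->req->actual);
	if (copy_to_user(ubuf, rx->req->buf, n))
		return -EFAULT;

	/* hand the same buffer straight back to the UDC */
	list_del(&rx->node);
	usb_ep_queue(ep, rx->req, GFP_KERNEL);

	return n;
}

/* ... and poll() falls out almost for free */
static unsigned int sketch_poll(struct file *file, poll_table *wait)
{
	poll_wait(file, &sketch_waitq, wait);
	return list_empty(&sketch_filled) ? 0 : (POLLIN | POLLRDNORM);
}
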
>
> If waiting is the problem then isn’t it solved by async IO?  With it
> user space can implement double (triple, whatever…) buffering and as
> soon as one request is completed, the next one becomes active.
>
> I prefer non-blocking to aio, but that’s a matter of preference and
> design of the application.
>
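
For reference, double buffering from user space with aio could look
roughly like this (untested sketch; the "ep1" path and buffer sizes
are invented, link with -laio):

#include <fcntl.h>
#include <libaio.h>
#include <unistd.h>

#define NBUF	2
#define BUFSZ	16384

int main(void)
{
	static char buf[NBUF][BUFSZ];
	struct iocb iocb[NBUF], *iocbp[NBUF];
	struct io_event ev;
	io_context_t ctx = 0;
	int i, fd;

	fd = open("ep1", O_RDWR);		/* invented path */
	if (fd < 0 || io_setup(NBUF, &ctx) < 0)
		return 1;

	/* keep two reads in flight at all times */
	for (i = 0; i < NBUF; i++) {
		io_prep_pread(&iocb[i], fd, buf[i], BUFSZ, 0);
		iocbp[i] = &iocb[i];
	}
	if (io_submit(ctx, NBUF, iocbp) != NBUF)
		return 1;

	for (;;) {
		if (io_getevents(ctx, 1, 1, &ev, NULL) != 1)
			break;
		/* consume ev.res bytes from ev.obj->u.c.buf, then
		 * requeue; the other request keeps the endpoint busy */
		if (io_submit(ctx, 1, &ev.obj) != 1)
			break;
	}

	io_destroy(ctx);
	close(fd);
	return 0;
}
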
> The advantage of async IO is that user space has more control over what
> reads and writes happen.  f_fs doesn’t know the underlying protocol and
> I can imagine that in some cases that would matter.

USB is always first-come-first-served, right? When would this "control
over what happens" be useful?

-- 
balbi


