Re: [PATCH v6] block: loop: avoiding too many pending per work I/O

On Sun, May 3, 2015 at 9:52 AM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> Hello,
>
> On Sat, May 02, 2015 at 10:56:20PM +0800, Ming Lei wrote:
>> > Maybe just cap max_active to NR_OF_LOOP_DEVS * 16 or sth?  But idk,
>>
>> It might not work because loop devices can be nested, as with the
>> Fedora live CD. In theory, max_active would have to be set to loop's
>> queue depth * nr_loop; otherwise there is still a possibility of hanging.
>>
>> So this patch is introduced.
>
> If loop devices can be stacked, regardless of what you do with
> nr_active, it may deadlock.  There needs to be a rescuer per each
> nesting level (or just one per device).  This means that the current
> code is broken.

Yes.

>> > how many concurrent workers are we talking about and why are we
>> > capping per-queue concurrency from worker pool side instead of command
>> > tag side?
>>
>> I think there is a performance advantage in making the queue depth a
>> bit larger, because it helps keep the queue pipeline full. Also, queue
>> depth usually means how many requests the hardware can queue, which is
>> a bit different from per-queue concurrency.
>
> I'm not really following.  Can you please elaborate?

In the case of loop-mq, a bigger queue_depth often gives better
performance for sequential reads/writes that hit the page cache, because
those requests complete very quickly and it is better to run them as a
batch in a single invocation of the work function; simply decreasing the
queue depth may hurt performance in this case.

Thanks,
Ming Lei
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html