Re: [PATCH v6] block: loop: avoiding too many pending per work I/O

On Fri, May 1, 2015 at 10:22 PM, Tejun Heo <tj@xxxxxxxxxx> wrote:
> On Fri, May 01, 2015 at 09:36:47PM +0800, Ming Lei wrote:
>> On Fri, May 1, 2015 at 6:17 PM, Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
>> > On Fri, May 01, 2015 at 11:28:01AM +0800, Ming Lei wrote:
>> >> If there are too many pending per-work I/Os, too many
>> >> high-priority work threads can be generated, and
>> >> system performance can be affected.
>
> Hmmm... why is it even marked HIGHPRI?  The commit doesn't seem to

I set it as HIGHPRI because the priority of the previous loop thread
was MIN_NICE, but it may be doable to set it to normal priority, and
I will test whether that improves the live-booting situation.
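
For reference, a minimal sketch of the two alternatives, assuming a
single workqueue shared by all loop devices (the names loop_wq and
"kloopd" are placeholders, not necessarily the patch's exact code):

	/* current approach: shared high-priority unbound workqueue */
	loop_wq = alloc_workqueue("kloopd",
			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);

	/* normal-priority alternative to test: drop WQ_HIGHPRI */
	loop_wq = alloc_workqueue("kloopd",
			WQ_MEM_RECLAIM | WQ_UNBOUND, 0);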

> explain why.  Also, I wonder whether this would be better served by
> unbound workqueues.  These tasks are most likely to walk all the way

From my tests, bound looks a bit better, but the difference is small.

> through the filesystem and block layer.  That can be quite a bit of
> processing for concurrency managed per-cpu workqueues and may
> effectively block out other work items which actually need to be
> HIGHPRI.
>
>> >> This patch limits the max pending per-work I/Os to 16,
>> >> and will fall back to single-queue mode when the max
>> >> number is reached.
>> >
>> > Why would you do this fall back?  Shouldn't we just communicate
>> > a concurrency limit to the workqueue code?
>>
>> It can't work with the workqueue's concurrency limit because the
>> workqueue is shared by all loop block devices, and the limit applies
>> to the whole workqueue.
>
> Maybe just cap max_active to NR_OF_LOOP_DEVS * 16 or sth?  But idk,

It might not work because there can be nested loop devices, as with the
Fedora live CD; in theory, max_active would have to be set to loop's
queue depth * nr_loop, otherwise there may be a possibility of hanging.

So this patch is introduced.
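
For illustration, a rough sketch of that fallback; the identifiers
below (loop_queue_cmd, lo->pending, lo->fallback_work, and so on) are
placeholders for the idea, not the patch's exact code:

	#define LOOP_MAX_PENDING_PER_WORK	16

	static void loop_queue_cmd(struct loop_device *lo, struct loop_cmd *cmd)
	{
		/* normal path: queue the command to its own work item,
		 * allowing concurrent per-work I/O */
		if (atomic_inc_return(&lo->pending) <= LOOP_MAX_PENDING_PER_WORK) {
			queue_work(loop_wq, &cmd->work);
		} else {
			/* too many pending: park the command on a list
			 * served by one shared work item, which serializes
			 * the I/O and bounds worker-thread creation */
			atomic_dec(&lo->pending);
			spin_lock(&lo->fallback_lock);
			list_add_tail(&cmd->list, &lo->fallback_list);
			spin_unlock(&lo->fallback_lock);
			queue_work(loop_wq, &lo->fallback_work);
		}
	}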

> how many concurrent workers are we talking about and why are we
> capping per-queue concurrency from worker pool side instead of command
> tag side?

I think there should be a performance advantage to making the queue
depth a bit larger, because it helps keep the queue pipeline full.
Also, queue depth usually means how many requests the hardware can
queue, which is a bit different from per-queue concurrency.
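
For comparison, capping on the command-tag side would happen at blk-mq
tag set setup; a minimal sketch, assuming the standard blk-mq fields
(the depth of 128 is illustrative, not taken from the patch):

	/* queue_depth bounds how many commands can be outstanding on
	 * the hardware queue via the tag allocator; this is distinct
	 * from how many workers may run concurrently */
	lo->tag_set.ops = &loop_mq_ops;
	lo->tag_set.nr_hw_queues = 1;
	lo->tag_set.queue_depth = 128;
	lo->tag_set.numa_node = NUMA_NO_NODE;
	err = blk_mq_alloc_tag_set(&lo->tag_set);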

Thanks,
Ming Lei

>
> Thanks.
>
> --
> tejun