Re: [PATCH 00/26] Zone write plugging

On 2/6/24 03:18, Bart Van Assche wrote:
> On 2/1/24 23:30, Damien Le Moal wrote:
>>   - Zone write plugging operates on BIOs instead of requests. Plugged
>>     BIOs waiting for execution thus do not hold scheduling tags and thus
>>     do not prevent other BIOs from being submitted to the device (reads
>>     or writes to other zones). Depending on the workload, this can
>>     significantly improve the device use and the performance.
> 
> Deep queues may introduce performance problems. In Android we had to
> restrict the number of pending writes to the device queue depth because
> otherwise read latency is too high (e.g. to start the camera app).

With zone write plugging, BIOs are delayed well above the scheduler and device.
BIOs that are plugged/delayed by ZWP do not hold tags, not even a scheduler tag,
which allows reads (which are never plugged) to proceed. That is unlike zone
write locking, which can hold on to all scheduler tags and thus prevent reads
from proceeding.
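To make the tag-accounting argument concrete, here is a toy userspace model of
per-zone write plugging (not the kernel implementation; the class and method
names are made up for illustration). At most one write per zone is in flight;
the rest wait in a per-zone list without consuming anything from the shared tag
pool, so reads always find a tag:

```python
from collections import deque

class ZoneWritePlug:
    """Toy model: one in-flight write per zone; plugged writes hold no tag."""

    def __init__(self, num_tags):
        self.tags_free = num_tags   # shared (scheduler) tag pool, modeled as a counter
        self.plugged = {}           # zone -> deque of BIOs waiting, tag-less
        self.in_flight = set()      # zones that currently have a write dispatched

    def submit_write(self, zone, bio):
        if zone in self.in_flight:
            # Plug the BIO: it waits here and consumes no tag.
            self.plugged.setdefault(zone, deque()).append(bio)
            return None
        assert self.tags_free > 0, "no tag available for dispatch"
        self.tags_free -= 1
        self.in_flight.add(zone)
        return bio                  # dispatched to the device

    def submit_read(self, bio):
        # Reads are never plugged; they only need a free tag.
        assert self.tags_free > 0, "no tag available for read"
        self.tags_free -= 1
        return bio

    def complete_write(self, zone):
        # On completion, release the tag and unplug the next write, if any.
        self.tags_free += 1
        self.in_flight.discard(zone)
        queue = self.plugged.get(zone)
        if queue:
            return self.submit_write(zone, queue.popleft())
        return None
```

With zone write locking, every queued write would pin a scheduler tag, so a
deep write queue to one zone could exhaust the pool and stall reads; in the
model above, three writes to the same zone consume only one tag, leaving the
rest free for reads.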

> I'm not convinced that queuing zoned write bios is a better approach than
> queuing zoned write requests.

Well, I do not see why not. The above point is on its own, to me, a good enough
argument. And various tests with btrfs showed that even with a slow HDD I can
see better overall throughput with ZWP compared to zone write locking.
And for fast solid state zoned devices (NVMe/UFS), you do not even need an IO
scheduler anymore.

> 
> Are there numbers available about the performance differences (bandwidth
> and latency) between plugging zoned write bios and zoned write plugging
> requests?

Finish reading the cover letter. It has lots of measurements comparing rc2,
Jens' block/for-next and ZWP...

I actually reran all these perf tests over the weekend, but this time did 10
runs and took the average for comparison. Overall, I confirmed the results
shown in the cover letter: performance with ZWP is generally on par or better,
but there is one exception: small sequential writes at high queue depth. There
seems to be an issue with regular plugging (current->plug) which results in lost
merging opportunities, causing the performance regression. I am digging into
that to understand what is happening.

-- 
Damien Le Moal
Western Digital Research
