Re: [Bug Report] Discard bios cannot be correctly merged in blk-mq

> On Jun 9, 2021, at 08:41, Ming Lei <ming.lei@xxxxxxxxxx> wrote:
> 
> On Tue, Jun 08, 2021 at 11:49:04PM +0800, Wang Shanker wrote:
>> 
>> 
>> Actually, what the nvme controller receives are discard requests
>> with 128 segments of 4k, instead of one segment of 512k.
> 
> Right, I am just wondering if this way makes a difference wrt. a single
> range/segment discard request from the device's viewpoint, but anyway it is
> better to send fewer segments.
It would be meaningful if more than queue_max_discard_segments() bios
are sent and merged into bigger segments.
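
To illustrate the limit (a minimal sketch with a hypothetical helper
name, not the actual blk-merge.c logic):

    #include <linux/blkdev.h>

    /* Sketch only: the merge path stops accepting discard bios once
     * the request already carries queue_max_discard_segments()
     * segments, even when every bio is contiguous with the last. */
    static bool can_take_one_more_discard_bio(struct request_queue *q,
                                              struct request *req)
    {
            return req->nr_phys_segments < queue_max_discard_segments(q);
    }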
> 
>> 
>>> 
>>>> 
>>>> Similarly, the problem with scsi devices can be emulated using the following 
>>>> options for qemu:
>>>> 
>>>>       -device virtio-scsi,id=scsi \
>>>>       -device scsi-hd,drive=nvme1,bus=scsi.0,logical_block_size=4096,discard_granularity=2097152,physical_block_size=4096,serial=NVME1 \
>>>>       -device scsi-hd,drive=nvme2,bus=scsi.0,logical_block_size=4096,discard_granularity=2097152,physical_block_size=4096,serial=NVME2 \
>>>>       -device scsi-hd,drive=nvme3,bus=scsi.0,logical_block_size=4096,discard_granularity=2097152,physical_block_size=4096,serial=NVME3 \
>>>>       -trace scsi_disk_emulate_command_UNMAP,file=scsitrace.log
>>>> 
>>>> 
>>>> Despite the discovery, I cannot come up with a proper fix for this issue due
>>>> to my lack of familiarity with the block subsystem. I look forward to your
>>>> kind feedback on this. Thanks in advance.
>>> 
>>> In the above setting and the raid456 test, I observe that rq->nr_phys_segments can
>>> reach 128, but queue_max_discard_segments() reports 256. So the discard
>>> request size can be 512KB, which is the max size when you run a 1MB discard on
>>> raid456. However, if the discard length on raid456 is increased, the
>>> current way will become inefficient.
>> 
>> Exactly. 
>> 
>> I suggest that bios be merged and counted as one segment if they are
>> contiguous and contain no data.
> 
> Fine.
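
Something along these lines, perhaps (a minimal sketch with a
hypothetical helper name, assuming the check would sit in the merge
paths of block/blk-merge.c):

    #include <linux/blkdev.h>

    /* Sketch only: discard bios carry no payload, so a bio whose
     * range starts exactly where the request ends extends the same
     * contiguous LBA range and could be counted as part of the
     * existing segment instead of as a new one. */
    static bool discard_bio_extends_tail(struct request *rq,
                                         struct bio *bio)
    {
            return req_op(rq) == REQ_OP_DISCARD &&
                   bio_op(bio) == REQ_OP_DISCARD &&
                   blk_rq_pos(rq) + blk_rq_sectors(rq) ==
                                        bio->bi_iter.bi_sector;
    }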
> 
>> 
>> And I also discovered later that even normal long write requests, e.g.
>> a 10m write, will be split into 4k bios. The maximum number of bios that can
>> be merged into one request is limited by queue_max_segments, regardless
>> of whether those bios are contiguous. In my test environment, for scsi devices,
>> queue_max_segments can be 254, which allows requests of about 1m. For nvme
>> devices (e.g. Intel DC P4610), queue_max_segments is only 33 since their MDTS is 5,
>> which results in requests of only 132k.
> 
> Here what matters is queue_max_discard_segments().
Here I was considering normal read/write bios: I first took it for granted
that normal read/write IOs would be issued optimally by raid456, and then
discovered that those 4k IOs can only be merged into not-so-big requests.
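
For reference, the request sizes mentioned above follow directly from
the segment limit multiplied by the 4k bio size:

    254 segments * 4k = 1016k (~1m) for the scsi case
     33 segments * 4k =  132k       for the nvme case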
> 
>> 
>> So, I would also suggest that raid456 be improved to issue bigger bios to
>> the underlying drives.
> 
> Right, that should be the root solution.
> 
> Cc Xiao, I remembered that he worked on this area.

Many thanks for looking into this issue.

Cheers,

Miao Wang


