Re: [PATCH v3 1/1] virtio-blk: avoid preallocating big SGL for data

On 9/1/2021 6:27 PM, Jens Axboe wrote:
On 9/1/21 8:58 AM, Max Gurtovoy wrote:
On 9/1/2021 5:50 PM, Michael S. Tsirkin wrote:
On Wed, Sep 01, 2021 at 04:14:34PM +0300, Max Gurtovoy wrote:
No need to pre-allocate a big buffer for the IO SGL anymore. If a device
has lots of deep queues, preallocation for the sg list can consume
substantial amounts of memory. For a HW virtio-blk device, nr_hw_queues
can be 64 or 128 and each queue's depth might be 128. This means the
resulting preallocation for the data SGLs is big.
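
To put a rough number on it (the exact figures depend on the device's
seg_max and on sizeof(struct scatterlist), so the numbers below are only
illustrative assumptions): with 128 queues of depth 128, 254 segments
per request and 32 bytes per scatterlist entry, the preallocation comes
to 128 * 128 * 254 * 32 B, i.e. roughly 127 MiB per device.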

Switch to runtime allocation for SGL for lists longer than 2 entries.
This is the approach used by NVMe drivers so it should be reasonable for
virtio block as well. Runtime SGL allocation has always been the case
for the legacy I/O path so this is nothing new.

The preallocated small SGL depends on SG_CHAIN so if the ARCH doesn't
support SG_CHAIN, use only runtime allocation for the SGL.

Re-organize the setup of the IO request to fit the new sg chain
mechanism.
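
As a rough sketch of what that looks like (illustrative only: the names,
the inline count of 2 and the hook points below are my assumptions, not
necessarily the patch as posted), the idea is to embed a tiny inline SGL
in the per-request data and let the sg_pool chained helpers fall back to
runtime allocation only for larger requests:

#include <linux/blk-mq.h>
#include <linux/scatterlist.h>
#include <linux/virtio_blk.h>

/* Illustrative sketch only, not the literal patch. */
#define VBLK_INLINE_SG_CNT	(IS_ENABLED(CONFIG_ARCH_NO_SG_CHAIN) ? 0 : 2)

struct virtblk_req {
	struct virtio_blk_outhdr out_hdr;
	u8 status;
	struct sg_table sg_table;
	struct scatterlist sg[];	/* inline SGL, VBLK_INLINE_SG_CNT entries */
};

static int vblk_map_data(struct request *req, struct virtblk_req *vbr)
{
	int nents = blk_rq_nr_phys_segments(req);

	if (!nents)
		return 0;

	vbr->sg_table.sgl = vbr->sg;
	/*
	 * Use the inline entries when nents fits in the first chunk,
	 * otherwise allocate (and chain) extra entries from the sg pool
	 * at runtime. Needs CONFIG_SG_POOL.
	 */
	if (sg_alloc_table_chained(&vbr->sg_table, nents,
				   vbr->sg_table.sgl, VBLK_INLINE_SG_CNT))
		return -ENOMEM;

	return blk_rq_map_sg(req->q, req, vbr->sg_table.sgl);
}

static void vblk_unmap_data(struct request *req, struct virtblk_req *vbr)
{
	if (blk_rq_nr_phys_segments(req))
		sg_free_table_chained(&vbr->sg_table, VBLK_INLINE_SG_CNT);
}

The per-request virtblk_req would come from blk_mq_rq_to_pdu(req), with
the blk-mq cmd_size set to sizeof(struct virtblk_req) +
VBLK_INLINE_SG_CNT * sizeof(struct scatterlist) so that the inline
entries live in the already-preallocated pdu.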

No performance degradation was seen (fio libaio engine with 16 jobs and
128 iodepth):

IO size    IOPS Rand Read (before/after)    IOPS Rand Write (before/after)
-------    -----------------------------    ------------------------------
512B       318K/316K                        329K/325K
4KB        323K/321K                        353K/349K
16KB       199K/208K                        250K/275K
128KB      36K/36.1K                        39.2K/41.7K

Signed-off-by: Max Gurtovoy <mgurtovoy@xxxxxxxxxx>
Reviewed-by: Israel Rukshin <israelr@xxxxxxxxxx>
Could you maybe use something that gives confidence intervals?
As it is, it looks like a 1-2% regression for 512B and 4KB.
1%-2% is not a regression. It's device/env/test variance.

These are the results of just one run. I ran it many times and got
differences of +/- 2%-3% in each run, on both sides.

Even if I run the same driver without changes, I get a 2%-3% difference
between runs.

If you have a perf test suite for virtio-blk, it would be great if you
could run it; maybe Feng Li has one.
You're adding an allocation to the hot path, and a free to the
completion hot path. It's not unreasonable to expect that there could be
performance implications associated with that. That would be
particularly evident with 1-segment requests, as the results would seem
to indicate as well.

But for sg_nents <= 2 there is no dynamic allocation in this patch either, exactly as we do in NVMf RDMA and FC, for example.
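
Concretely (reusing the illustrative names from the sketch above, and
only approximating what the chained helper does internally), the fast
path amounts to:

/*
 * Illustrative only: the gist of the small-request fast path. With an
 * inline first chunk of 2 entries, a request with up to 2 physical
 * segments never touches the allocator; only larger requests chain to
 * entries allocated at runtime from the sg pool.
 */
static int vblk_setup_sgl(struct virtblk_req *vbr, int nents)
{
	vbr->sg_table.sgl = vbr->sg;		/* inline entries */

	if (nents <= VBLK_INLINE_SG_CNT) {
		/* fast path: no allocation, just init the inline SGL */
		sg_init_table(vbr->sg_table.sgl, nents);
		vbr->sg_table.nents = vbr->sg_table.orig_nents = nents;
		return 0;
	}

	/* slow path: runtime allocation chained onto the inline chunk */
	return sg_alloc_table_chained(&vbr->sg_table, nents,
				      vbr->sg_table.sgl, VBLK_INLINE_SG_CNT);
}

So a 1- or 2-segment request only touches the inline entries that are
already part of the request pdu; the allocator gets involved only when
a request has more physical segments than the inline chunk can hold.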



Probably needs better testing. A profile of a peak run before and after
and a diff of the two might also be interesting.

I'll run the ezfio test suite with a stronger virtio-blk device that reaches > 800K IOPS.



The common idiom for situations like this is to have an inline part that
holds 1-2 segments, and then only punt to alloc if you need more than
that. As the number of segments grows, the cost per request matters
less.

Isn't this the case here? Or am I missing something?




