Re: [PATCH 2/2] virtio-blk: set NUMA affinity for a tagset

On 9/28/2021 7:27 PM, Leon Romanovsky wrote:
On Tue, Sep 28, 2021 at 06:59:15PM +0300, Max Gurtovoy wrote:
On 9/27/2021 9:23 PM, Leon Romanovsky wrote:
On Mon, Sep 27, 2021 at 08:25:09PM +0300, Max Gurtovoy wrote:
On 9/27/2021 2:34 PM, Leon Romanovsky wrote:
On Sun, Sep 26, 2021 at 05:55:18PM +0300, Max Gurtovoy wrote:
To optimize performance, set the affinity of the block device tagset
according to the virtio device affinity.

Signed-off-by: Max Gurtovoy <mgurtovoy@xxxxxxxxxx>
---
    drivers/block/virtio_blk.c | 2 +-
    1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 9b3bd083b411..1c68c3e0ebf9 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -774,7 +774,7 @@ static int virtblk_probe(struct virtio_device *vdev)
    	memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
    	vblk->tag_set.ops = &virtio_mq_ops;
    	vblk->tag_set.queue_depth = queue_depth;
-	vblk->tag_set.numa_node = NUMA_NO_NODE;
+	vblk->tag_set.numa_node = virtio_dev_to_node(vdev);
I'm afraid that by doing this, you will increase the chances of seeing OOM,
because with NUMA_NO_NODE, MM will try to allocate memory across the whole
system, while in the latter mode only on a specific NUMA node, which can be depleted.
This is a common methodology we use in the block layer and in the NVMe subsystem,
and we are not afraid of the OOM issue you raised.
There are many reasons for that, but we are talking about virtio here
and not about NVMe.
Ok, what reasons?
For example, NVMe devices are physical devices that rely on DMA operations,
PCI connectivity, etc. to operate. Such systems indeed can benefit from
NUMA locality hints. In the end, these devices are physically connected
to that NUMA node.

FYI, virtio devices are also physical devices that have a PCI interface and rely on DMA operations.

from virtio spec: "Virtio devices use normal bus mechanisms of interrupts and DMA which should be familiar
to any device driver author".

We also develop virtio HW at NVIDIA for blk and net devices with our SNAP technology.

These devices are connected via PCI bus to the host.

We also support SR-IOV.

The same is true for paravirt devices that are emulated by QEMU; the guest still sees them as PCI devices.
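As a concrete illustration of this point (the BDF below is a hypothetical example, not taken from the thread), a virtio-blk function shows up as an ordinary PCI device in the guest, and its NUMA node is exposed via sysfs:

```shell
# List the virtio-blk PCI function (example BDF):
lspci -s 00:05.0

# The node it is attached to, as the kernel's dev_to_node() would see it:
cat /sys/bus/pci/devices/0000:00:05.0/numa_node
```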


In our case, virtio-blk is a software interface that doesn't have all
these limitations. On the contrary, a virtio-blk device can be created on one
CPU and moved later to be close to the QEMU process, which can run on another NUMA
node.

Not at all. Virtio is a HW interface.

I don't understand what you are saying here.


Also, this patch increases the chances of OOM by a factor of the number of NUMA nodes.

This is common practice in Linux for storage drivers. Why does it bother you at all?

I have already decreased the memory footprint for virtio-blk devices.


Before your patch, virtio_blk could allocate from X memory; after your
patch it will be X/NUM_NUMA_NODES.

So go ahead and change the whole block layer if it bothers you so much.

Also please change the NVMe subsystem when you do it.

And let's see what the community will say.

In addition, it may even hurt performance.

So yes, post v2, but as Stefan and I asked, please provide supportive
performance results, because what was done for another subsystem doesn't
mean that it will be applicable here.
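One possible way to produce the requested numbers (a sketch; the BDF, node number, device name, and fio job parameters are illustrative examples, not taken from the thread): run an identical fio job with and without the patch, pinned to the device's NUMA node, and compare IOPS and latency.

```shell
# Find the NUMA node the virtio-blk PCI function is attached to
# (example BDF):
cat /sys/bus/pci/devices/0000:00:05.0/numa_node

# Run the same job before and after the patch, pinned to that node:
numactl --cpunodebind=1 --membind=1 \
    fio --name=randread --filename=/dev/vda --direct=1 --rw=randread \
        --bs=4k --iodepth=32 --numjobs=4 --time_based --runtime=60 \
        --group_reporting
```

Repeating the run from a remote node (e.g. --cpunodebind=0) would also show whether locality is the bottleneck at all.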

I will measure the performance, but even if we don't see an improvement (since this might not be the bottleneck), this change should be merged, since this is the way the block layer is optimized.

This is a micro-optimization that is commonly used in other subsystems as well. And none of your reasons above (PCI, SW device, DMA) is true.

A virtio-blk device is in 99% of cases a PCI device (paravirt or real HW), exactly like any other PCI device you are familiar with.

It's connected physically to some slot; it has a BAR, MMIO, a configuration space, etc.

Thanks.


Thanks


