On 6/15/23 09:25, Yongji Xie wrote:
On Wed, Jun 14, 2023 at 7:52 PM Maxime Coquelin
<maxime.coquelin@xxxxxxxxxx> wrote:
The vduse_vdpa_set_vq_affinity callback can be called with a NULL
cpu_mask when the vduse device is deleted.

This patch clears the virtqueue's IRQ affinity mask in that case
instead of dereferencing the NULL cpu_mask.
[ 4760.952149] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 4760.959110] #PF: supervisor read access in kernel mode
[ 4760.964247] #PF: error_code(0x0000) - not-present page
[ 4760.969385] PGD 0 P4D 0
[ 4760.971927] Oops: 0000 [#1] PREEMPT SMP PTI
[ 4760.976112] CPU: 13 PID: 2346 Comm: vdpa Not tainted 6.4.0-rc6+ #4
[ 4760.982291] Hardware name: Dell Inc. PowerEdge R640/0W23H8, BIOS 2.8.1 06/26/2020
[ 4760.989769] RIP: 0010:memcpy_orig+0xc5/0x130
[ 4760.994049] Code: 16 f8 4c 89 07 4c 89 4f 08 4c 89 54 17 f0 4c 89 5c 17 f8 c3 cc cc cc cc 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 83 fa 08 72 1b <4c> 8b 06 4c 8b 4c 16 f8 4c 89 07 4c 89 4c 17 f8 c3 cc cc cc cc 66
[ 4761.012793] RSP: 0018:ffffb1d565abb830 EFLAGS: 00010246
[ 4761.018020] RAX: ffff9f4bf6b27898 RBX: ffff9f4be23969c0 RCX: ffff9f4bcadf6400
[ 4761.025152] RDX: 0000000000000008 RSI: 0000000000000000 RDI: ffff9f4bf6b27898
[ 4761.032286] RBP: 0000000000000000 R08: 0000000000000008 R09: 0000000000000000
[ 4761.039416] R10: 0000000000000000 R11: 0000000000000600 R12: 0000000000000000
[ 4761.046549] R13: 0000000000000000 R14: 0000000000000080 R15: ffffb1d565abbb10
[ 4761.053680] FS: 00007f64c2ec2740(0000) GS:ffff9f635f980000(0000) knlGS:0000000000000000
[ 4761.061765] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4761.067513] CR2: 0000000000000000 CR3: 0000001875270006 CR4: 00000000007706e0
[ 4761.074645] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 4761.081775] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 4761.088909] PKRU: 55555554
[ 4761.091620] Call Trace:
[ 4761.094074] <TASK>
[ 4761.096180] ? __die+0x1f/0x70
[ 4761.099238] ? page_fault_oops+0x171/0x4f0
[ 4761.103340] ? exc_page_fault+0x7b/0x180
[ 4761.107265] ? asm_exc_page_fault+0x22/0x30
[ 4761.111460] ? memcpy_orig+0xc5/0x130
[ 4761.115126] vduse_vdpa_set_vq_affinity+0x3e/0x50 [vduse]
[ 4761.120533] virtnet_clean_affinity.part.0+0x3d/0x90 [virtio_net]
[ 4761.126635] remove_vq_common+0x1a4/0x250 [virtio_net]
[ 4761.131781] virtnet_remove+0x5d/0x70 [virtio_net]
[ 4761.136580] virtio_dev_remove+0x3a/0x90
[ 4761.140509] device_release_driver_internal+0x19b/0x200
[ 4761.145742] bus_remove_device+0xc2/0x130
[ 4761.149755] device_del+0x158/0x3e0
[ 4761.153245] ? kernfs_find_ns+0x35/0xc0
[ 4761.157086] device_unregister+0x13/0x60
[ 4761.161010] unregister_virtio_device+0x11/0x20
[ 4761.165543] device_release_driver_internal+0x19b/0x200
[ 4761.170770] bus_remove_device+0xc2/0x130
[ 4761.174782] device_del+0x158/0x3e0
[ 4761.178276] ? __pfx_vdpa_name_match+0x10/0x10 [vdpa]
[ 4761.183336] device_unregister+0x13/0x60
[ 4761.187260] vdpa_nl_cmd_dev_del_set_doit+0x63/0xe0 [vdpa]
Fixes: 28f6288eb63d ("vduse: Support set_vq_affinity callback")
Cc: xieyongji@xxxxxxxxxxxxx
Signed-off-by: Maxime Coquelin <maxime.coquelin@xxxxxxxxxx>
---
drivers/vdpa/vdpa_user/vduse_dev.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vduse_dev.c
index 5f5c21674fdc..cdca94e85762 100644
--- a/drivers/vdpa/vdpa_user/vduse_dev.c
+++ b/drivers/vdpa/vdpa_user/vduse_dev.c
@@ -726,7 +726,11 @@ static int vduse_vdpa_set_vq_affinity(struct vdpa_device *vdpa, u16 idx,
{
struct vduse_dev *dev = vdpa_to_vduse(vdpa);
- cpumask_copy(&dev->vqs[idx]->irq_affinity, cpu_mask);
+ if (cpu_mask)
+ cpumask_copy(&dev->vqs[idx]->irq_affinity, cpu_mask);
+ else
+ cpumask_clear(&dev->vqs[idx]->irq_affinity);
I think we should set all the bits of the IRQ affinity mask instead:

cpumask_setall(&dev->vqs[idx]->irq_affinity);
I hesitated between the two.

My understanding is that this only happens on removal, so either would
work. But in case it can happen in other situations, it is indeed
better to use cpumask_setall().
I will post a v2 today.
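Roughly something like this (untested sketch; the full function
signature and the return value are filled in from memory, please
double-check against the actual code):

static int vduse_vdpa_set_vq_affinity(struct vdpa_device *vdpa, u16 idx,
				       const struct cpumask *cpu_mask)
{
	struct vduse_dev *dev = vdpa_to_vduse(vdpa);

	/* A NULL cpu_mask means no affinity hint, so allow all CPUs
	 * rather than clearing the mask.
	 */
	if (cpu_mask)
		cpumask_copy(&dev->vqs[idx]->irq_affinity, cpu_mask);
	else
		cpumask_setall(&dev->vqs[idx]->irq_affinity);

	return 0;
}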
Thanks,
Maxime
Thanks,
Yongji