Re: io_uring NAPI busy poll RCU is causing 50 context switches/second to my sqpoll thread

On Sat, 2024-08-03 at 08:36 -0600, Jens Axboe wrote:
> 
> You can check the mappings in /sys/kernel/debug/block/<device>/
> 
> in there you'll find a number of hctxN folders, each of these is a
> hardware queue. hctx0/type tells you what kind of queue it is, and
> inside the directory, you'll find which CPUs this queue is mapped to.
> Example:
> 
> root@r7625 /s/k/d/b/nvme0n1# cat hctx1/type 
> default
> 
> "default" means it's a read/write queue, so it'll handle both reads and
> writes.
> 
> root@r7625 /s/k/d/b/nvme0n1# ls hctx1/
> active  cpu11/   dispatch       sched_tags         tags
> busy    cpu266/  dispatch_busy  sched_tags_bitmap  tags_bitmap
> cpu10/  ctx_map  flags          state              type
> 
> and we can see this hardware queue is mapped to cpu 10/11/266.
> 
> That ties into how these are mapped. It's pretty simple - if a task is
> running on cpu 10/11/266 when it's queueing IO, then it'll use hw queue
> 1. This maps to the interrupts you found, but note that the admin queue
> (which is not listed in these directories, as it's not an IO queue) is
> the first one there. hctx0 is nvme0q1 in your /proc/interrupts list.
> 
> If IO is queued on hctx1, then it should complete on the interrupt
> vector associated with nvme0q2.
> 
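For reference, the whole map can be dumped in one pass from that debugfs
tree. A rough sketch based on the layout above (nvme0n1 is just an
example, and each hctx directory only shows the CPUs actually mapped to
it):

# print every hardware queue's type and the CPUs it serves
for h in /sys/kernel/debug/block/nvme0n1/hctx*; do
        printf '%s (%s): ' "${h##*/}" "$(cat "$h"/type)"
        ls -d "$h"/cpu* | sed 's/.*cpu//' | paste -sd, -
done
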
I have entered hacking territory, but I did not find any other way to
do it...

drivers/nvme/host/pci.c
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 6cd9395ba9ec..70b7ca84ee21 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2299,7 +2299,7 @@ static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
         */
        if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
                return 1;
-       return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
+       return 1 + dev->nr_write_queues + dev->nr_poll_queues;
 }
 
 static int nvme_setup_io_queues(struct nvme_dev *dev)

It works. I no longer have any nvme IRQs on cpu1, as I wanted:

 63:          9          0          0          0  PCI-MSIX-0000:00:04.0   0-edge      nvme0q0
 64:          0          0          0       7533  PCI-MSIX-0000:00:04.0   1-edge      nvme0q1

# ls /sys/kernel/debug/block/nvme0n1/hctx0/
active  busy  cpu0  cpu1  cpu2  cpu3  ctx_map  dispatch  dispatch_busy
flags  sched_tags  sched_tags_bitmap  state  tags  tags_bitmap  type
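
To double-check where those vectors are allowed to fire, the IRQ numbers
from the /proc/interrupts excerpt above can be looked up under /proc/irq
(just a sketch; 63/64 are the numbers shown on this box):

# nvme0q0 is IRQ 63 and nvme0q1 is IRQ 64 in the excerpt above
cat /proc/irq/63/smp_affinity_list
cat /proc/irq/64/smp_affinity_list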





