Re: io_uring NAPI busy poll RCU is causing 50 context switches/second to my sqpoll thread

On Sat, 2024-08-03 at 08:36 -0600, Jens Axboe wrote:
> You can check the mappings in /sys/kernel/debug/block/<device>/
> 
> in there you'll find a number of hctxN folders, each of these is a
> hardware queue. hctx0/type tells you what kind of queue it is, and
> inside the directory, you'll find which CPUs this queue is mapped to.
> Example:
> 
> root@r7625 /s/k/d/b/nvme0n1# cat hctx1/type 
> default
> 
> "default" means it's a read/write queue, so it'll handle both reads
> and
> writes.
> 
> root@r7625 /s/k/d/b/nvme0n1# ls hctx1/
> active  cpu11/   dispatch       sched_tags         tags
> busy    cpu266/  dispatch_busy  sched_tags_bitmap  tags_bitmap
> cpu10/  ctx_map  flags          state              type
> 
> and we can see this hardware queue is mapped to cpu 10/11/266.
> 
> That ties into how these are mapped. It's pretty simple - if a task is
> running on cpu 10/11/266 when it's queueing IO, then it'll use hw queue
> 1. This maps to the interrupts you found, but note that the admin queue
> (which is not listed in these directories, as it's not an IO queue) is
> the first one there. hctx0 is nvme0q1 in your /proc/interrupts list.
> 
> If IO is queued on hctx1, then it should complete on the interrupt
> vector associated with nvme0q2.
> 
Jens,

I knew there were NVMe experts here!
Thanks for your help.
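
If I follow that correctly, I guess I can verify the mapping by pinning a
small direct read to one CPU and then checking which nvme0qX counter moved
(untested, device names taken from my setup):

# grep nvme0q /proc/interrupts
# taskset -c 0 dd if=/dev/nvme0n1 of=/dev/null bs=4k count=1 iflag=direct
# grep nvme0q /proc/interrupts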

# ls nvme0n1/hctx0/
active  busy  cpu0  cpu1  ctx_map  dispatch  dispatch_busy  flags 
sched_tags  sched_tags_bitmap  state  tags  tags_bitmap  type

It means that some I/O that I am unaware of is being initiated from
cpu0 or cpu1...
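
To get the full picture, something like this one-liner should dump which
CPUs each hardware queue is mapped to (untested, adjust the debugfs path
if needed):

# for h in /sys/kernel/debug/block/nvme0n1/hctx*; do echo "$h -> $(cd "$h" && echo cpu*)"; done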

It seems like the number of NVMe I/O queues is configurable... I'll try
to find out how to reduce it to 1...
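
In the meantime, if nvme-cli is available, I believe the currently
negotiated count can be read back from the controller via the Number of
Queues feature (0x07); assuming the controller is /dev/nvme0:

# nvme get-feature /dev/nvme0 -f 7 -H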

But my real problem is not really which I/O queue is assigned to a
request; it is the IRQ affinity assigned to the queues...
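
For now I am just inspecting what the kernel picked, with <N> being one
of the nvme0qX vector numbers taken from /proc/interrupts:

# cat /proc/irq/<N>/smp_affinity_list
# cat /proc/irq/<N>/effective_affinity_list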

I have found the function nvme_setup_irqs(), where the assignments happen.

Considering that I have the boot param irqaffinity=3, I do not
understand how the admin queue and hctx0 IRQs can be assigned to
cpu 0 and 1. It is as if the irqaffinity param had no effect on the
MSI-X interrupt affinity masks...
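
One thing I plan to try, assuming these vectors are kernel-managed: if
they are, my understanding is that writing the affinity from userspace
should simply fail (EIO) and the mask will stay as it is:

# echo 3 > /proc/irq/<N>/smp_affinity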





