On Sat, Sep 07, 2019 at 06:19:20AM +0800, Ming Lei wrote:
> On Fri, Sep 06, 2019 at 05:50:49PM +0000, Long Li wrote:
> > >Subject: Re: [PATCH 1/4] softirq: implement IRQ flood detection mechanism
> > >
> > >On Fri, Sep 06, 2019 at 09:48:21AM +0800, Ming Lei wrote:
> > >> When one IRQ flood happens on one CPU:
> > >>
> > >> 1) softirq handling on this CPU can't make progress
> > >>
> > >> 2) a kernel thread bound to this CPU can't make progress
> > >>
> > >> For example, the network stack may require softirq to xmit packets,
> > >> another irq thread may be handling keyboards/mice or whatever, or
> > >> rcu_sched may depend on that CPU for making progress; then the irq
> > >> flood stalls the whole system.
> > >>
> > >> > AFAIU, there are fast media where the responses to requests arrive
> > >> > faster than the time to process them, right?
> > >>
> > >> Usually the medium may not be faster than the CPU. Here we are
> > >> talking about interrupts, which can originate from lots of devices
> > >> concurrently; for example, in Long Li's test, there are 8 NVMe
> > >> drives involved.
> > >
> > >Why are all 8 nvmes sharing the same CPU for interrupt handling?
> > >Shouldn't matrix_find_best_cpu_managed() handle selecting the least used
> > >CPU from the cpumask for the effective interrupt handling?
> >
> > The tests run on 10 NVMe disks on a system of 80 CPUs. Each NVMe disk
> > has 32 hardware queues.
>
> Then there are 320 NVMe MSI/X vectors in total but only 80 CPUs, so the
> irq matrix can't avoid overlapping effective CPUs at all: on average,
> each CPU has to serve at least four vectors.
>
> > It seems matrix_find_best_cpu_managed() has done its job, but we may
> > still have CPUs that service several hardware queues mapped from other
> > issuing CPUs.
> >
> > Another thing to consider is that there may be other managed interrupts
> > on the system, so NVMe interrupts may not end up evenly distributed on
> > such a system.
>
> Another improvement could be to first avoid overlapping effective CPUs
> among vectors of fast devices, while still allowing overlap between
> slow vectors and fast vectors.
>
> This could help in cases where the total number of fast vectors is
> <= nr_cpu_cores.

For this particular case, it can't be done, because:

1) this machine has 10 NUMA nodes, and each NVMe has 8 hw queues, so too
many CPUs are assigned to the first two hw queues; see the
'if (numvecs <= nodes)' branch in __irq_build_affinity_masks() (a quick
sketch of this behaviour is appended below)

2) then fewer CPUs are assigned to the other 6 hw queues

3) finally, the same effective CPU is shared by two IRQ vectors.

Also, it looks like matrix_find_best_cpu_managed() has been doing well
enough at choosing the best effective CPU.

Thanks,
Ming
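
P.S. Here is a minimal userspace sketch (not the kernel code itself) of
the round-robin node->vector assignment done by the
'if (numvecs <= nodes)' branch of __irq_build_affinity_masks(). The
topology numbers are assumed from point 1) above: 10 NUMA nodes, 8 hw
queues per device, and 80/10 = 8 CPUs per node.

	/*
	 * Sketch only: mimic ORing each node's cpumask into the current
	 * vector and advancing the vector round-robin, as the kernel
	 * branch does when numvecs <= nodes.
	 */
	#include <stdio.h>

	#define NODES		10	/* NUMA nodes (assumed) */
	#define NUMVECS		8	/* hw queues per NVMe (assumed) */
	#define CPUS_PER_NODE	8	/* 80 CPUs / 10 nodes (assumed) */

	int main(void)
	{
		int cpus_in_vec[NUMVECS] = { 0 };
		int curvec = 0, n;

		for (n = 0; n < NODES; n++) {
			/* stand-in for cpumask_or(&masks[curvec].mask, ...) */
			cpus_in_vec[curvec] += CPUS_PER_NODE;
			if (++curvec == NUMVECS)
				curvec = 0;
		}

		for (n = 0; n < NUMVECS; n++)
			printf("vector %d: %d CPUs\n", n, cpus_in_vec[n]);
		return 0;
	}

Under those assumptions it prints 16 CPUs for vectors 0 and 1 and 8 CPUs
for vectors 2-7, which is exactly the uneven spread described in points
1) and 2) above.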