Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote on Sat, Feb 1, 2020 at 5:19 PM:
>
> Weiping Zhang <zhangweiping@xxxxxxxxxxxxxx> writes:
>
> > nvme driver will add 4 sets for supporting NVMe weighted round robin,
> > and some of these sets may be empty (depends on user configuration),
> > so each particular set is assigned one static index for avoiding the
> > management trouble, then the empty set will be seen by
> > irq_create_affinity_masks().
>
> What's the point of an empty interrupt set in the first place? This does
> not make sense and smells like a really bad hack.
>
> Can you please explain in detail why this is required and why it
> actually makes sense?
>
Hi Thomas,

Sorry for the late reply, I will post a new patch to avoid creating empty sets.

In this version, nvme adds 4 extra sets, because nvme will split its io
queues into 7 parts (poll, default, read, wrr_low, wrr_medium, wrr_high,
wrr_urgent). The poll queues do not use irq, so nvme will have at most 6
irq sets. The nvme driver uses two variables (dev->io_queues[index] and
affd->set_size[index]) to track how many queues/irqs are in each part,
and the user may set some queue counts to 0. For example, when nvme uses
96 io queues:

default     dev->io_queues[0] = 90    affd->set_size[0] = 90
read        dev->io_queues[1] = 0     affd->set_size[1] = 0
wrr low     dev->io_queues[2] = 0     affd->set_size[2] = 0
wrr medium  dev->io_queues[3] = 0     affd->set_size[3] = 0
wrr high    dev->io_queues[4] = 6     affd->set_size[4] = 6
wrr urgent  dev->io_queues[5] = 0     affd->set_size[5] = 0

In this case the sets at index 1 to 3 will have 0 irqs.

But actually there is no need to use a fixed index for io_queues and
set_size; nvme just tells the irq engine how many irq sets it has and
how many irqs are in each set, so I will post V5 to solve this problem:

	nr_sets = 1;
	dev->io_queues[HCTX_TYPE_DEFAULT] = nr_default;
	affd->set_size[nr_sets - 1] = nr_default;

	dev->io_queues[HCTX_TYPE_READ] = nr_read;
	if (nr_read) {
		nr_sets++;
		affd->set_size[nr_sets - 1] = nr_read;
	}

	dev->io_queues[HCTX_TYPE_WRR_LOW] = nr_wrr_low;
	if (nr_wrr_low) {
		nr_sets++;
		affd->set_size[nr_sets - 1] = nr_wrr_low;
	}

	dev->io_queues[HCTX_TYPE_WRR_MEDIUM] = nr_wrr_medium;
	if (nr_wrr_medium) {
		nr_sets++;
		affd->set_size[nr_sets - 1] = nr_wrr_medium;
	}

	dev->io_queues[HCTX_TYPE_WRR_HIGH] = nr_wrr_high;
	if (nr_wrr_high) {
		nr_sets++;
		affd->set_size[nr_sets - 1] = nr_wrr_high;
	}

	dev->io_queues[HCTX_TYPE_WRR_URGENT] = nr_wrr_urgent;
	if (nr_wrr_urgent) {
		nr_sets++;
		affd->set_size[nr_sets - 1] = nr_wrr_urgent;
	}

	affd->nr_sets = nr_sets;

Thanks
Weiping
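
P.S. For contrast with the V5 sketch above, a minimal sketch of how the
current fixed-index scheme fills the same structures. This is not code
from the patch; the HCTX_TYPE_WRR_* indexes and nr_* variables are the
ones assumed in this thread, and the counts are just the 96-queue
example from the table. Each queue class keeps its own slot even when
it is unused, which is where the zero-sized sets come from:

	/*
	 * Fixed-index sketch (assumption, for illustration only):
	 * every class always owns its slot, poll queues take no irq set,
	 * so unused classes leave zero entries in affd->set_size[].
	 */
	dev->io_queues[HCTX_TYPE_DEFAULT]    = nr_default;     /* e.g. 90 */
	dev->io_queues[HCTX_TYPE_READ]       = nr_read;        /* e.g. 0  */
	dev->io_queues[HCTX_TYPE_WRR_LOW]    = nr_wrr_low;     /* e.g. 0  */
	dev->io_queues[HCTX_TYPE_WRR_MEDIUM] = nr_wrr_medium;  /* e.g. 0  */
	dev->io_queues[HCTX_TYPE_WRR_HIGH]   = nr_wrr_high;    /* e.g. 6  */
	dev->io_queues[HCTX_TYPE_WRR_URGENT] = nr_wrr_urgent;  /* e.g. 0  */

	affd->set_size[0] = nr_default;     /* 90 */
	affd->set_size[1] = nr_read;        /*  0 <- empty set */
	affd->set_size[2] = nr_wrr_low;     /*  0 <- empty set */
	affd->set_size[3] = nr_wrr_medium;  /*  0 <- empty set */
	affd->set_size[4] = nr_wrr_high;    /*  6 */
	affd->set_size[5] = nr_wrr_urgent;  /*  0 <- empty set */
	affd->nr_sets = 6;                  /* empty sets still counted */

With the V5 scheme the empty classes are simply skipped, so
irq_create_affinity_masks() only ever sees non-empty sets.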