Re: [PATCH v4 05/10] blk-mq: introduce blk_mq_hctx_map_queues

On Thu, Nov 14, 2024 at 09:58:25AM +0800, Ming Lei wrote:
> > +void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap,
> 
> Some drivers may not know hctx at all, maybe blk_mq_map_hw_queues()?

I am not really attached to the name; I am fine with renaming it to
blk_mq_map_hw_queues().

> > +	if (dev->driver->irq_get_affinity)
> > +		irq_get_affinity = dev->driver->irq_get_affinity;
> > +	else if (dev->bus->irq_get_affinity)
> > +		irq_get_affinity = dev->bus->irq_get_affinity;
> 
> It is one generic API, I think both 'dev->driver' and
> 'dev->bus' should be validated here.

What do you have in mind here if we get two masks? What should the
operation be, AND or OR?
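
Or if the concern is just that dev->driver or dev->bus might be NULL
before we dereference them, the lookup could be hardened like this
(untested sketch, the helper name is made up):

	static const struct cpumask *
	blk_mq_get_irq_affinity(struct device *dev, unsigned int irq_vec)
	{
		const struct cpumask *(*irq_get_affinity)(struct device *,
							  unsigned int) = NULL;

		if (dev->driver && dev->driver->irq_get_affinity)
			irq_get_affinity = dev->driver->irq_get_affinity;
		else if (dev->bus && dev->bus->irq_get_affinity)
			irq_get_affinity = dev->bus->irq_get_affinity;

		if (!irq_get_affinity)
			return NULL;

		return irq_get_affinity(dev, irq_vec);
	}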

This brings up another topic I left out in this series.
blk_mq_map_queues() does almost the same thing, except it starts with
the masks returned by group_cpus_evenly(). If we figure out how this
could be combined in a sane way, it's possible to clean up even a bit
more. A bunch of drivers do

		if (i != HCTX_TYPE_POLL && offset)
			blk_mq_hctx_map_queues(map, dev->dev, offset);
		else
			blk_mq_map_queues(map);

IMO it would be nice to have just one blk_mq_map_queues() which handles
both cases correctly.
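
Roughly something along these lines (blk_mq_get_irq_affinity() and the
fallback helper are made-up names, just to illustrate the shape):

	void blk_mq_map_queues(struct blk_mq_queue_map *qmap,
			       struct device *dev, unsigned int offset)
	{
		unsigned int queue, cpu;

		for (queue = 0; queue < qmap->nr_queues; queue++) {
			const struct cpumask *mask;

			/* no device or no affinity info -> generic spread */
			mask = dev ? blk_mq_get_irq_affinity(dev, queue + offset)
				   : NULL;
			if (!mask)
				goto fallback;

			for_each_cpu(cpu, mask)
				qmap->mq_map[cpu] = qmap->queue_offset + queue;
		}
		return;

	fallback:
		/* spread CPUs evenly via group_cpus_evenly(), as today */
		blk_mq_map_queues_evenly(qmap);
	}

Then the drivers could drop the HCTX_TYPE_POLL/offset special casing
entirely and just call the one helper.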



