Re: [PATCH v2 1/6] blk-mq: introduce blk_mq_hctx_map_queues

On Mon, Nov 11, 2024 at 07:02:09PM +0100, Daniel Wagner wrote:
> blk_mq_pci_map_queues and blk_mq_virtio_map_queues create a CPU to
> hardware queue mapping based on affinity information. These two
> functions share common code and differ only in how the affinity
> information is retrieved. Also, they do not really fit into the block
> subsystem where they currently live; they are virtio and PCI subsystem
> specific.
> 
> Introduce a new callback in struct bus_type to get the affinity mask.
> The callbacks can then be populated by the subsystem directly.
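
For context: the include/linux/device/bus.h hunk is not shown in full
below, but judging from how blk_mq_hctx_map_queues() uses it, the new
bus_type callback presumably looks roughly like this (a sketch; the
member name irq_get_affinity matches the code below, the irq_vec
parameter name is an assumption):

	/*
	 * New optional struct bus_type member (sketch): return the
	 * affinity mask for the given interrupt vector, or NULL if no
	 * affinity information is available.
	 */
	const struct cpumask *(*irq_get_affinity)(struct device *dev,
						  unsigned int irq_vec);
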
> 
> All but one driver use the subsystem default affinity masks. hisi_sas
> v2 depends on a driver-specific mapping and thus passes the optional
> get_irq_affinity callback to retrieve its mapping.
> 
> Original-by: Ming Lei <ming.lei@xxxxxxxxxx>
> Signed-off-by: Daniel Wagner <wagi@xxxxxxxxxx>
> ---
>  block/blk-mq-cpumap.c      | 40 ++++++++++++++++++++++++++++++++++++++++
>  drivers/pci/pci-driver.c   | 16 ++++++++++++++++
>  drivers/virtio/virtio.c    | 12 ++++++++++++
>  include/linux/blk-mq.h     |  5 +++++
>  include/linux/device/bus.h |  3 +++
>  5 files changed, 76 insertions(+)
> 
> diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
> index 9638b25fd52124f0173e968ebdca5f1fe0b42ad9..4dd703f5ee647fd1ba0b14ca11ddfdefa98a9a25 100644
> --- a/block/blk-mq-cpumap.c
> +++ b/block/blk-mq-cpumap.c
> @@ -54,3 +54,43 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
>  
>  	return NUMA_NO_NODE;
>  }
> +
> +/**
> + * blk_mq_hctx_map_queues - Create CPU to hardware queue mapping
> + * @qmap:	CPU to hardware queue map.
> + * @dev:	The device to map queues for.
> + * @offset:	Queue offset to use for the device.
> + * @get_irq_affinity:	Optional callback to retrieve queue affinity.
> + *
> + * Create a CPU to hardware queue mapping in @qmap. For each queue
> + * @get_irq_affinity will be called to retrieve the affinity for the
> + * queue. If @get_irq_affinity is not provided, the bus_type
> + * irq_get_affinity callback will be used instead.
> + */
> +void blk_mq_hctx_map_queues(struct blk_mq_queue_map *qmap,
> +			    struct device *dev, unsigned int offset,
> +			    get_queue_affinity_fn *get_irq_affinity)
> +{
> +	const struct cpumask *mask = NULL;
> +	unsigned int queue, cpu;
> +
> +	for (queue = 0; queue < qmap->nr_queues; queue++) {
> +		if (get_irq_affinity)
> +			mask = get_irq_affinity(dev, queue + offset);
> +		else if (dev->bus->irq_get_affinity)
> +			mask = dev->bus->irq_get_affinity(dev, queue + offset);
> +
> +		if (!mask)
> +			goto fallback;
> +
> +		for_each_cpu(cpu, mask)
> +			qmap->mq_map[cpu] = qmap->queue_offset + queue;
> +	}
> +
> +	return;
> +
> +fallback:
> +	WARN_ON_ONCE(qmap->nr_queues > 1);
> +	blk_mq_clear_mq_map(qmap);
> +}
> +EXPORT_SYMBOL_GPL(blk_mq_hctx_map_queues);
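
To make the calling convention concrete, a driver without special
affinity requirements would presumably wire this up along these lines
(hypothetical foo_* names, for illustration only):

	static void foo_map_queues(struct blk_mq_tag_set *set)
	{
		struct foo_dev *fdev = set->driver_data;

		/*
		 * No driver-specific callback is passed, so the mapping
		 * falls back to dev->bus->irq_get_affinity, i.e. the
		 * PCI or virtio implementation added below.
		 */
		blk_mq_hctx_map_queues(&set->map[HCTX_TYPE_DEFAULT],
				       &fdev->pdev->dev, 0, NULL);
	}

Per the commit message, only hisi_sas v2 would pass a non-NULL
get_irq_affinity callback here.
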
> diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
> index 35270172c833186995aebdda6f95ab3ffd7c67a0..59e5f430a380285162a87bd1a9b392bba8066450 100644
> --- a/drivers/pci/pci-driver.c
> +++ b/drivers/pci/pci-driver.c
> @@ -1670,6 +1670,21 @@ static void pci_dma_cleanup(struct device *dev)
>  		iommu_device_unuse_default_domain(dev);
>  }
>  
> +/**
> + * pci_device_irq_get_affinity - get IRQ affinity mask for a PCI device
> + * @dev: device for which to get the affinity mask
> + * @irq_vec: interrupt vector number
> + *
> + * Return the affinity mask for the given interrupt vector of a PCI device.
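
The quoted diff breaks off here. Given the kernel-doc above and the
existing PCI helpers, the body is presumably just a thin wrapper around
pci_irq_get_affinity(), roughly:

	static const struct cpumask *pci_device_irq_get_affinity(struct device *dev,
								 unsigned int irq_vec)
	{
		struct pci_dev *pdev = to_pci_dev(dev);

		return pci_irq_get_affinity(pdev, irq_vec);
	}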

