Re: [PATCH net-next] xsk: introduce xsk_dma_ops

From: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>
Date: Mon, 17 Apr 2023 11:27:50 +0800

> The purpose of this patch is to allow a driver to pass its own dma_ops
> to xsk.
> 
> This is needed for the virtio-net case. If virtio does not negotiate
> VIRTIO_F_ACCESS_PLATFORM, virtio cannot use the DMA API, so XSK cannot
> use the DMA API directly to obtain DMA addresses. For this case, XSK
> must allow the driver to supply its own dma_ops.
> 
> On the other hand, XSK is high-level code and should delegate the
> underlying DMA operations to the driver layer; the driver determines
> how the DMA is ultimately implemented, and XSK should not make such
> assumptions. Everything is simplified if DMA is done at the driver
> level.
> 
> More is here:
>     https://lore.kernel.org/virtualization/1681265026.6082013-1-xuanzhuo@xxxxxxxxxxxxxxxxx/
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@xxxxxxxxxxxxxxxxx>

[...]

>  struct xsk_buff_pool {
>  	/* Members only used in the control path first. */
>  	struct device *dev;
> @@ -85,6 +102,7 @@ struct xsk_buff_pool {
>  	 * sockets share a single cq when the same netdev and queue id is shared.
>  	 */
>  	spinlock_t cq_lock;
> +	struct xsk_dma_ops dma_ops;

Why a full struct and not a const pointer? You'll have indirect calls
either way, and embedding a copy of the full struct won't gain you much
performance.

>  	struct xdp_buff_xsk *free_heads[];
>  };
>  
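A minimal sketch of what the pointer-based layout could look like
(illustrative only; it reuses the xsk_dma_ops type introduced by the
patch, and the exact placement is of course up to you):

struct xsk_buff_pool {
	/* ... */
	spinlock_t cq_lock;
	/* Reference a driver-owned, static const ops table instead of
	 * embedding a copy in every pool; NULL means "no custom DMA ops".
	 */
	const struct xsk_dma_ops *dma_ops;
	struct xdp_buff_xsk *free_heads[];
};

A driver would then keep one static const instance and share it across
all of its pools.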

[...]

> @@ -424,18 +426,29 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
>  		return 0;
>  	}
>  
> +	if (!dma_ops) {
> +		pool->dma_ops.map_page = dma_map_page_attrs;
> +		pool->dma_ops.mapping_error = dma_mapping_error;
> +		pool->dma_ops.need_sync = dma_need_sync;
> +		pool->dma_ops.sync_single_range_for_device = dma_sync_single_range_for_device;
> +		pool->dma_ops.sync_single_range_for_cpu = dma_sync_single_range_for_cpu;
> +		dma_ops = &pool->dma_ops;
> +	} else {
> +		pool->dma_ops = *dma_ops;
> +	}

Just because DMA syncs aren't needed on your DMA-coherent x86_64 system
doesn't mean the rest of us don't need them. Instead of filling the
pointers with "default" callbacks, you could avoid indirect calls
entirely when no custom DMA ops are specified. Please see how, for
example, Christoph did that for direct DMA. It would cost only one
if-else here for the case without custom DMA ops, instead of an
indirect call each time.

(I *could* suggest using INDIRECT_CALL_WRAPPER(), but I won't, since
 it's more expensive than a direct check and a direct check feels more
 appropriate here.)
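
A rough sketch of the idea, for illustration only: xp_dma_map_page() is
a hypothetical helper, not something from the patch, and it assumes the
dma_ops member is a pointer that stays NULL when no custom ops are
given.

static dma_addr_t xp_dma_map_page(struct xsk_buff_pool *pool,
				  struct device *dev, struct page *page,
				  unsigned long attrs)
{
	/* Common case, no custom ops: one well-predicted branch and a
	 * direct call into the DMA API, similar to what
	 * dma_map_page_attrs() itself does for direct mapping.
	 */
	if (likely(!pool->dma_ops))
		return dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
					  DMA_BIDIRECTIONAL, attrs);

	/* Custom ops (e.g. virtio without VIRTIO_F_ACCESS_PLATFORM):
	 * only this path pays for the indirect call.
	 */
	return pool->dma_ops->map_page(dev, page, 0, PAGE_SIZE,
				       DMA_BIDIRECTIONAL, attrs);
}

The sync and need_sync callbacks could follow the same pattern, so the
no-custom-ops fast path never goes through an indirect call.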

> +
>  	dma_map = xp_create_dma_map(dev, pool->netdev, nr_pages, pool->umem);
>  	if (!dma_map)
>  		return -ENOMEM;
[...]

Thanks,
Olek


