Re: [PATCH v4 07/14] drm/panthor: Add the MMU/VM logical block

On 22/01/2024 16:30, Boris Brezillon wrote:
> MMU and VM management are closely related, so they are placed in the same
> source file.
> 
> Page table updates are delegated to the io-pgtable-arm driver that's in
> the iommu subsystem.
> 
> The VM management logic is based on drm_gpuva_mgr, and assumes the VA
> space is mostly managed by the usermode driver, except for a reserved
> portion of this VA space that's used for kernel objects (like the heap
> contexts/chunks).
> 
> Both asynchronous and synchronous VM operations are supported, and
> internal helpers are exposed to allow other logical blocks to map their
> buffers in the GPU VA space.
> 
> There's one VM_BIND queue per VM (meaning the Vulkan driver can only
> expose one sparse-binding queue), and this bind queue is managed with
> a 1:1 drm_sched_entity:drm_gpu_scheduler, such that each VM gets its own
> independent execution queue, avoiding VM operation serialization at the
> device level (things are still serialized at the VM level).
> 
> The rest is just implementation details that are hopefully well explained
> in the documentation.

panthor_vm_map_pages() looks a bit suspect (see below) and there's a
kerneldoc header which needs updating. But generally it looks fine.

> 
> v4:
> - Add a helper to return the VM state
> - Check drmm_mutex_init() return code
> - Remove the VM from the AS reclaim list when panthor_vm_active() is
>   called
> - Count the number of active VM users instead of considering there's
>   at most one user (several scheduling groups can point to the same
>   VM)
> - Pre-allocate a VMA object for unmap operations (unmaps can trigger
>   a sm_step_remap() call)
> - Check vm->root_page_table instead of vm->pgtbl_ops to detect if
>   the io-pgtable is trying to allocate the root page table
> - Don't memset() the va_node in panthor_vm_alloc_va(), make it a
>   caller requirement
> - Fix the kernel doc in a few places
> - Drop the panthor_vm::base offset constraint and modify
>   panthor_vm_put() to explicitly check for a NULL value
> - Fix unbalanced vm_bo refcount in panthor_gpuva_sm_step_remap()
> - Drop stale comments about the shared_bos list
> - Patch mmu_features::va_bits on 32-bit builds to reflect the
>   io_pgtable limitation and let the UMD know about it
> 
> v3:
> - Add acks for the MIT/GPL2 relicensing
> - Propagate MMU faults to the scheduler
> - Move pages pinning/unpinning out of the dma_signalling path
> - Fix 32-bit support
> - Rework the user/kernel VA range calculation
> - Make the auto-VA range explicit (auto-VA range doesn't cover the full
>   kernel-VA range on the MCU VM)
> - Let callers of panthor_vm_alloc_va() allocate the drm_mm_node
>   (embedded in panthor_kernel_bo now)
> - Adjust things to match the latest drm_gpuvm changes (extobj tracking,
>   resv prep and more)
> - Drop the per-AS lock and use slots_lock (fixes a race on vm->as.id)
> - Set as.id to -1 when reusing an address space from the LRU list
> - Drop misleading comment about page faults
> - Remove check for irq being assigned in panthor_mmu_unplug()
> 
> Co-developed-by: Steven Price <steven.price@xxxxxxx>
> Signed-off-by: Steven Price <steven.price@xxxxxxx>
> Signed-off-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx>
> Acked-by: Steven Price <steven.price@xxxxxxx> # MIT+GPL2 relicensing,Arm
> Acked-by: Grant Likely <grant.likely@xxxxxxxxxx> # MIT+GPL2 relicensing,Linaro
> Acked-by: Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx> # MIT+GPL2 relicensing,Collabora
> ---
>  drivers/gpu/drm/panthor/panthor_mmu.c | 2760 +++++++++++++++++++++++++
>  drivers/gpu/drm/panthor/panthor_mmu.h |  102 +
>  2 files changed, 2862 insertions(+)
>  create mode 100644 drivers/gpu/drm/panthor/panthor_mmu.c
>  create mode 100644 drivers/gpu/drm/panthor/panthor_mmu.h
> 
> diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
> new file mode 100644
> index 000000000000..d3ce29cd0662

<snip>

> +static int
> +panthor_vm_map_pages(struct panthor_vm *vm, u64 iova, int prot,
> +		     struct sg_table *sgt, u64 offset, u64 size)
> +{
> +	struct panthor_device *ptdev = vm->ptdev;
> +	unsigned int count;
> +	struct scatterlist *sgl;
> +	struct io_pgtable_ops *ops = vm->pgtbl_ops;
> +	u64 start_iova = iova;
> +	int ret;
> +
> +	if (!size)
> +		return 0;
> +
> +	for_each_sgtable_dma_sg(sgt, sgl, count) {
> +		dma_addr_t paddr = sg_dma_address(sgl);
> +		size_t len = sg_dma_len(sgl);
> +
> +		if (len <= offset) {
> +			offset -= len;
> +			continue;
> +		}
> +
> +		paddr -= offset;

Shouldn't that be "+="?

> +		len -= offset;
> +
> +		if (size >= 0) {

size is unsigned... so this is always true! I've put a quick sketch of how
I'd expect the offset/size handling to look after the quoted hunk below.

> +			len = min_t(size_t, len, size);
> +			size -= len;
> +		}
> +
> +		drm_dbg(&ptdev->base, "map: as=%d, iova=%llx, paddr=%pad, len=%zx",
> +			vm->as.id, iova, &paddr, len);
> +
> +		while (len) {
> +			size_t pgcount, mapped = 0;
> +			size_t pgsize = get_pgsize(iova | paddr, len, &pgcount);
> +
> +			ret = ops->map_pages(ops, iova, paddr, pgsize, pgcount, prot,
> +					     GFP_KERNEL, &mapped);
> +			iova += mapped;
> +			paddr += mapped;
> +			len -= mapped;
> +
> +			if (drm_WARN_ON(&ptdev->base, !ret && !mapped))
> +				ret = -ENOMEM;
> +
> +			if (ret) {
> +				/* If something failed, unmap what we've already mapped before
> +				 * returning. The unmap call is not supposed to fail.
> +				 */
> +				drm_WARN_ON(&ptdev->base,
> +					    panthor_vm_unmap_pages(vm, start_iova,
> +								   iova - start_iova));
> +				return ret;
> +			}
> +		}
> +
> +		if (!size)
> +			break;
> +	}
> +
> +	return panthor_vm_flush_range(vm, start_iova, iova - start_iova);
> +}
> +
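
FWIW, here's a rough, untested sketch of the offset/size handling I'd
expect in that loop (reusing the names from the hunk above, so treat it
as illustrative rather than a drop-in fix). I think offset also wants
resetting once it has been consumed, otherwise the later segments get
shifted as well:

	for_each_sgtable_dma_sg(sgt, sgl, count) {
		dma_addr_t paddr = sg_dma_address(sgl);
		size_t len = sg_dma_len(sgl);

		/* Skip segments that lie entirely before the offset. */
		if (len <= offset) {
			offset -= len;
			continue;
		}

		/* Advance into the first overlapping segment... */
		paddr += offset;
		len -= offset;
		/* ...and don't shift the segments that follow it. */
		offset = 0;

		/* size is unsigned, so just clamp - no sign check needed. */
		len = min_t(size_t, len, size);
		size -= len;

		/* ... map_pages() loop as in the patch ... */

		if (!size)
			break;
	}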

<snip>

> +static void panthor_vm_destroy(struct panthor_vm *vm)
> +{
> +	if (!vm)
> +		return;
> +
> +	vm->destroyed = true;
> +
> +	mutex_lock(&vm->heaps.lock);
> +	panthor_heap_pool_destroy(vm->heaps.pool);
> +	vm->heaps.pool = NULL;
> +	mutex_unlock(&vm->heaps.lock);
> +
> +	drm_WARN_ON(&vm->ptdev->base,
> +		    panthor_vm_unmap_range(vm, vm->base.mm_start, vm->base.mm_range));
> +	panthor_vm_put(vm);
> +}
> +
> +/**
> + * panthor_vm_destroy() - Destroy a VM.
      ^^^^^^^^^^^^^^^^^^ needs updating to the new function name (suggested header after the quoted hunk below).

Steve

> + * @pool: VM pool.
> + * @handle: VM handle.
> + *
> + * This function doesn't free the VM object or its resources, it just kills
> + * all mappings, and makes sure nothing can be mapped after that point.
> + *
> + * If there were any active jobs at the time this function is called, those
> + * jobs should experience page faults and be killed as a result.
> + *
> + * The VM resources are freed when the last reference on the VM object is
> + * dropped.
> + */
> +int panthor_vm_pool_destroy_vm(struct panthor_vm_pool *pool, u32 handle)
> +{
> +	struct panthor_vm *vm;
> +
> +	vm = xa_erase(&pool->xa, handle);
> +
> +	panthor_vm_destroy(vm);
> +
> +	return vm ? 0 : -EINVAL;
> +}
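
To be explicit, I'd expect the kerneldoc above to pick up the new name,
i.e. something like (only the first line changes):

/**
 * panthor_vm_pool_destroy_vm() - Destroy a VM.
 * @pool: VM pool.
 * @handle: VM handle.
 *
 * ... (rest of the comment unchanged) ...
 */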

<snip>



