RE: [PATCH v4 08/17] iommufd: Algorithms for PFN storage

> From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Sent: Tuesday, November 8, 2022 8:49 AM
> 
> @@ -171,7 +183,7 @@ static struct iopt_area *iopt_pages_find_domain_area(struct iopt_pages *pages,
>   */
>  struct pfn_batch {
>  	unsigned long *pfns;
> -	u16 *npfns;
> +	u32 *npfns;

Why not make it u32 and remove the later FIXME directly in patch 7?
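
i.e. patch 7 could introduce the field as u32 from the start, roughly
(other fields elided):

	struct pfn_batch {
		unsigned long *pfns;
		u32 *npfns;	/* u32 from the start, no FIXME needed */
		/* ... */
	};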

>  static bool batch_add_pfn(struct pfn_batch *batch, unsigned long pfn)
>  {
> -	/* FIXME: U16 is too small */
> +	const unsigned int MAX_NPFNS = type_max(typeof(*batch->npfns));

Use lowercase, i.e. max_npfns.
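
i.e.:

	const unsigned int max_npfns = type_max(typeof(*batch->npfns));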

> +static void __iopt_area_unfill_domain(struct iopt_area *area,
> +				      struct iopt_pages *pages,
> +				      struct iommu_domain *domain,
> +				      unsigned long last_index)
> +{
> +	struct interval_tree_double_span_iter span;
> +	unsigned long start_index = iopt_area_index(area);
> +	unsigned long unmapped_end_index = start_index;
> +	u64 backup[BATCH_BACKUP_SIZE];
> +	struct pfn_batch batch;
> +
> +	lockdep_assert_held(&pages->mutex);
> +
> +	batch_init_backup(&batch, last_index + 1, backup, sizeof(backup));
> +	interval_tree_for_each_double_span(&span, &pages->domains_itree,
> +					   &pages->access_itree, start_index,
> +					   last_index) {
> +		if (span.is_used) {
> +			batch_skip_carry(&batch,
> +					 span.last_used - span.start_used + 1);
> +			continue;
> +		}
> +		iopt_area_unpin_domain(&batch, area, pages, domain,
> +				       span.start_hole, span.last_hole,
> +				       &unmapped_end_index, last_index);
> +	}
> +	if (unmapped_end_index != last_index + 1)
> +		iopt_area_unmap_domain_range(area, domain, unmapped_end_index,
> +					     last_index);

Please add a comment marking that this is for the last trailing used
span, whose pages are not contiguous with the previous span.
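
Something like this, based on my reading of the code (the wording is
just a suggestion):

	/*
	 * The walk ended in a used span, so the loop above skipped
	 * unmapping this trailing range: the pages there stay pinned by
	 * the other user, but this domain's mapping still has to be
	 * torn down up to last_index.
	 */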

Btw, it is not easy to understand how this function plus
iopt_area_unpin_domain() actually work together. More comments would be
welcome to help readability.
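
For reference, my current reading of the flow, which a comment block at
the top of the function could capture (please correct me if wrong):

	/*
	 * Walk [start_index, last_index] and classify each span. Spans
	 * also covered by another domain or an access (is_used) must
	 * keep their pins, so the batch only records how many PFNs to
	 * skip. Hole spans are owned solely by this domain, so
	 * iopt_area_unpin_domain() unpins them and unmaps the matching
	 * IOVA. unmapped_end_index tracks how far the unmapping has
	 * progressed so a trailing used span can be unmapped at the end.
	 */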

> +/*
> + * This can do everything and is fully coherent with what a iommu_domain
> + * would see.
> + */
> +static int iopt_pages_rw_slow(struct iopt_pages *pages,

Can you elaborate on what guarantees coherency in this function, and
how the other rw variants differ in that regard?

> +/**
> + * iopt_pages_remove_access() - Release an in-kernel access for PFNs
> + * @area: The source of PFNs
> + * @start_index: First page index
> + * @last_index: Inclusive last page index
> + *
> + * Undo iopt_pages_add_access() and unpin the pages if necessary. The caller
> + * must stop using the PFNs before calling this.
> + */
> +void iopt_pages_remove_access(struct iopt_area *area, unsigned long start_index,
> +			      unsigned long last_index)

This is called iopt_pages_xxx(), but the first parameter is an
iopt_area.

Also, it is not balanced with iopt_pages_add_access(), which requires
the caller to hold pages->mutex and to populate area->num_accesses.
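
e.g. a rename along these lines would at least match the parameter
(hypothetical, just to illustrate):

	/* hypothetical name matching the first parameter */
	void iopt_area_remove_access(struct iopt_area *area,
				     unsigned long start_index,
				     unsigned long last_index);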



