RE: [PATCH v4 07/17] iommufd: PFN handling for iopt_pages

> From: Jason Gunthorpe <jgg@xxxxxxxxxx>
> Sent: Tuesday, November 8, 2022 8:49 AM
> 
> +
> +struct interval_tree_double_span_iter {
> +	struct rb_root_cached *itrees[2];
> +	struct interval_tree_span_iter spans[2];
> +	union {
> +		unsigned long start_hole;
> +		unsigned long start_used;
> +	};
> +	union {
> +		unsigned long last_hole;
> +		unsigned long last_used;
> +	};
> +	/* 0 = hole, 1 = used span[0], 2 = used span[1], -1 done iteration */
> +	int is_used;
> +};

this lacks a comment on how it is expected to be used, as done for
struct interval_tree_span_iter. e.g. there is no value representing
'used by both spans', which implies this is meant to find ranges
valid on either side. Those semantics should be spelled out.
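
For instance, if my reading is right the intended usage is something
like below (just my guess from the struct; the iterator helper names
here are made up by analogy to the single-span iterator):

	struct interval_tree_double_span_iter span;

	for (interval_tree_double_span_iter_first(&span, itree_a, itree_b,
						  start, last);
	     span.is_used != -1; interval_tree_double_span_iter_next(&span)) {
		if (span.is_used == 0) {
			/* [start_hole, last_hole] is covered by neither tree */
		} else {
			/* [start_used, last_used] is covered by tree is_used - 1 */
		}
	}

Spelling out something along those lines above the struct would help.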

> +/*
> + * The IOVA to PFN map. The mapper automatically copies the PFNs into multiple

what is the mapper?

> + * Be cautious of overflow, an IOVA can go all the way up to U64_MAX, so
> + * last_iova + 1 can overflow. An iopt_pages index will always be much less than
> + * ULONG_MAX< so last_index + 1 cannot overflow.

remove trailing '<' after ULONG_MAX
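
btw it may be worth also saying in the comment *why* the distinction
matters, e.g. a loop over IOVAs must avoid the '<= last_iova' form
while a loop over indexes can use 'last_index + 1' freely.
Hypothetical illustration (the do_*() calls are just placeholders):

	unsigned long iova, index;

	/* hazard: never terminates when last_iova == ULONG_MAX */
	for (iova = start_iova; iova <= last_iova; iova += PAGE_SIZE)
		do_iova_work(iova);

	/* fine: last_index + 1 cannot wrap */
	for (index = start_index; index != last_index + 1; index++)
		do_index_work(index);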

> +
> +	ret = iommu_unmap(domain, iova, size);
> +	/*
> +	 * It is a logic error in this code or a driver bug if the IOMMU unmaps
> +	 * something other than exactly as requested. This implies that the
> +	 * iommu driver may not fail unmap for reasons beyond bad agruments.
> +	 * Particularly, the iommu driver may not do a memory allocation on the
> +	 * unmap path.
> +	 */

didn't understand the last sentence, i.e. how 'may not fail for
reasons beyond bad arguments' leads to 'may not do a memory
allocation on the unmap path'.
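
If the point is simply that unmap is never allowed to fail because
the caller has no way to recover (and a driver that had to allocate
memory here could fail with -ENOMEM), then stating it from the
caller's perspective might read better, e.g. (my guess at the intent,
the WARN is only illustrative):

	ret = iommu_unmap(domain, iova, size);
	/*
	 * Unmap must give back exactly what was asked for. A driver that
	 * could fail here (e.g. because it needed a memory allocation
	 * that did not succeed) would leave the caller unable to recover.
	 */
	if (WARN_ON(ret != size))
		return;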

> +static void batch_skip_carry(struct pfn_batch *batch, unsigned int skip_pfns)

add a comment similar to the one on batch_clear_carry()

> +{
> +	if (!batch->total_pfns)
> +		return;
> +	skip_pfns = min(batch->total_pfns, skip_pfns);
> +	batch->pfns[0] += skip_pfns;
> +	batch->npfns[0] -= skip_pfns;

what if skip_pfns exceeds batch->npfns[0]? it looks like this only
works when batch->total_pfns == batch->npfns[0]...
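
If the intent is that the carry always occupies only entry 0 at this
point, an assertion would make that obvious to readers, e.g. (just a
suggestion):

	/* assumes the carry was folded into a single entry */
	if (WARN_ON(batch->total_pfns != batch->npfns[0]))
		return;

otherwise it seems a loop over the entries is needed here.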

> +/* true if the pfn could be added, false otherwise */
> +static bool batch_add_pfn(struct pfn_batch *batch, unsigned long pfn)
> +{
> +	/* FIXME: U16 is too small */

is the impact functional or only on performance?

what would the fix be? and why can't it be done now?

more comments would be welcome.
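
e.g. if the only issue is that a physically contiguous run longer
than 65535 pages has to be split across multiple entries, is the
eventual fix simply widening the counter, something like below
(struct members guessed from the usage in this patch)?

	struct pfn_batch {
		unsigned long *pfns;
		u32 *npfns;	/* was u16 */
		unsigned int total_pfns;
		/* ... */
	};

or is there more to it? Spelling that out in the FIXME would help.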

> +static void batch_unpin(struct pfn_batch *batch, struct iopt_pages *pages,
> +			unsigned int offset, size_t npages)
> +{
> +	unsigned int cur = 0;
> +
> +	while (offset) {
> +		if (batch->npfns[cur] > offset)
> +			break;
> +		offset -= batch->npfns[cur];
> +		cur++;
> +	}

'offset' usually means byte-addressed. 'index' is a better fit in
this context.
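
i.e. something like, with the parameter renamed:

	unsigned int cur = 0;

	/* skip whole entries that precede the first pfn to unpin */
	while (first_index) {
		if (batch->npfns[cur] > first_index)
			break;
		first_index -= batch->npfns[cur];
		cur++;
	}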

<to be continued>



