Re: [PATCH rdma-next 3/6] RDMA/umem: Add API to return optimal HW DMA addresses from SG list

On Wed, Jan 02, 2019 at 05:19:55PM -0700, Jason Gunthorpe wrote:
> On Mon, Dec 24, 2018 at 04:32:24PM -0600, Shiraz Saleem wrote:
> > This helper iterates the SG list and returns suitable
> > HW aligned DMA addresses within a driver supported
> > page size. The implementation is intended to work for
> > HW that supports single page sizes or mixed page sizes
> > in an MR. This avoids the need for driver-specific
> > algorithms to achieve the same thing and redundant walks
> > of the SG list.
> >
[..]
 
> > + * ib_umem_next_phys_iter - SG list iterator that returns aligned HW address
> > + * @umem: umem struct
> > + * @sg_phys_iter: SG HW address iterator
> > + * @supported_pgsz: bitmask of HW supported page sizes
> > + *
> > + * This helper iterates over the SG list and returns the HW
> > + * address aligned to a supported HW page size.
> 
> I think more documentation is appropriate here..

OK.
 
> Each true result returns a contiguous aligned block of memory such
> that:
> 
> - pg_bit indicates the alignment of this block such that
>     phyaddr & ((1 << pg_bit) - 1) == 0
> 
> - All blocks except the starting block have a zero offset.
>   For the starting block, offset indicates the first valid byte in the
>   MR; HW should not permit access to bytes earlier than offset.
> 
>   [And how is this being done in i40iw? A side effect of the user
>    virtual base address?]

Yes, we have that restriction: the start offsets of the user-VA buf and the
physical buf must be aligned, and the HW uses the least significant bits of the VA.
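To illustrate (rough sketch only, not i40iw code; sg_phys_iter here is the
iterator state after ib_umem_next_phys_iter() and umem->address is the user
VA recorded in the umem):

	unsigned long mask = BIT_ULL(sg_phys_iter.pg_bit) - 1;

	/* restriction: the low bits of the user VA match the low bits of the
	 * physical start, so HW can recover the first byte offset of the MR
	 * from the VA alone
	 */
	unsigned long fbo = umem->address & mask; /* == offset of the starting block */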

> - For all blocks except the last, len + offset equals 1 << pg_bit
> 
> False is returned when iteration is completed and all blocks have been
> seen.
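
As a rough consumer-side sketch of those invariants (not from this series; the
ib_umem_start_phys_iter() initializer name and its argument order are my
assumption here):

	struct sg_phys_iter iter = {};
	bool first = true;

	/* assumed initializer: seeds iter.sg and iter.supported_pgsz */
	ib_umem_start_phys_iter(umem, &iter, supported_pgsz);

	while (ib_umem_next_phys_iter(umem, &iter)) {
		/* every block is aligned to the page size it reports */
		WARN_ON(iter.phyaddr & (BIT_ULL(iter.pg_bit) - 1));
		/* only the starting block may carry a non-zero offset */
		WARN_ON(!first && iter.offset);
		first = false;
	}
	/* loop exits (false) once every block has been returned */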
> 
> > + */
> > +bool ib_umem_next_phys_iter(struct ib_umem *umem,
> > +			    struct sg_phys_iter *sg_phys_iter)
> > +{
> > +	unsigned long pg_mask;
> > +
> > +	if (!sg_phys_iter->sg || !sg_phys_iter->supported_pgsz)
> > +		return false;
> > +
> > +	if (sg_phys_iter->remaining) {
> > +		sg_phys_iter->phyaddr += (sg_phys_iter->len + sg_phys_iter->offset);
> > +	} else {
> > +		sg_phys_iter->phyaddr = sg_dma_address(sg_phys_iter->sg);
> > +		sg_phys_iter->remaining = sg_dma_len(sg_phys_iter->sg);
> > +	}
> > +
> > +	/* Mixed page support in MR */
> > +	if (sg_phys_iter->mixed_pg_support)
> > +		sg_phys_iter->pg_bit = ib_umem_find_mixed_pg_bit(umem->sg_head.sgl,
> > +								 sg_phys_iter);
> > +
> > +	pg_mask = ~(BIT_ULL(sg_phys_iter->pg_bit) - 1);
> > +
> > +	sg_phys_iter->offset = sg_phys_iter->phyaddr & ~pg_mask;
> > +	sg_phys_iter->phyaddr = sg_phys_iter->phyaddr & pg_mask;
> > +	sg_phys_iter->len = min_t(unsigned long, sg_phys_iter->remaining,
> > +			BIT_ULL(sg_phys_iter->pg_bit) - sg_phys_iter->offset);
> 
> Maybe drop len and offset? Nothing uses them in this series, right?
> 

The individual drivers don't use them in this series, but could in the future to get
the len/offset of each aligned block. They are currently just internal tracking fields
that, in my view, improve the readability of this function.
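
If a driver did want them later, the (hypothetical) usage could look like:

	/* hypothetical future consumer, not part of this series;
	 * program_hw_block() is a made-up driver helper
	 */
	while (ib_umem_next_phys_iter(umem, &iter)) {
		/* offset is non-zero only for the starting block; for all
		 * but the last block, len + offset == 1 << pg_bit
		 */
		program_hw_block(hw, iter.phyaddr, iter.offset, iter.len,
				 iter.pg_bit);
	}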

Shiraz


