RE: [PATCH v4 rdma-next] RDMA/umem: Combine contiguous PAGE_SIZE regions in SGEs

>Subject: Re: [PATCH v4 rdma-next] RDMA/umem: Combine contiguous PAGE_SIZE
>regions in SGEs
>
>On Tue, Apr 02, 2019 at 12:56:27PM -0500, Shiraz Saleem wrote:
>> +/* ib_umem_add_sg_table - Add N contiguous pages to scatter table
>> + *
>> + * sg: current scatterlist entry
>> + * page_list: array of npage struct page pointers
>> + * npages: number of pages in page_list
>> + * max_seg_size: maximum segment size in bytes
>
>max_seg_sz
>
>> + * nents: [out] number of entries in the scatterlist
>> + *
>> + * Return new end of scatterlist
>> + */
>> +static struct scatterlist *ib_umem_add_sg_table(struct scatterlist *sg,
>> +						struct page **page_list,
>> +						unsigned long npages,
>> +						unsigned int max_seg_sz,
>> +						int *nents)
>> +{
>> +	unsigned long first_pfn;
>> +	unsigned long i = 0;
>> +	bool update_cur_sg = false;
>> +	bool first = !sg_page(sg);
>> +
>> +	/* Check if new page_list is contiguous with end of previous page_list.
>> +	 * sg->length here is a multiple of PAGE_SIZE and sg->offset is 0.
>> +	 */
>> +	if (!first && (page_to_pfn(sg_page(sg)) + (sg->length >> PAGE_SHIFT) ==
>> +		       page_to_pfn(page_list[0])))
>> +		update_cur_sg = true;
>> +
>> +	while (i != npages) {
>> +		unsigned long len;
>> +		struct page *first_page = page_list[i];
>> +
>> +		first_pfn = page_to_pfn(first_page);
>> +
>> +		/* Compute the number of contiguous pages we have starting
>> +		 * at i
>> +		 */
>> +		for (len = 0; i != npages &&
>> +			      first_pfn + len == page_to_pfn(page_list[i]);
>> +		     len++)
>> +			i++;
>> +
>> +		/* Squash N contiguous pages from page_list into current sge */
>> +		if (update_cur_sg &&
>> +		    ((sg->length + (len << PAGE_SHIFT)) < max_seg_sz))
>> {
>
>This add can still overflow.. should be something like
>
>  (max_seg_sz - sg->length) >= (len << PAGE_SHIFT)
>
>> diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h index
>> 73af05d..ccc44c0 100644
>> +++ b/include/rdma/ib_umem.h
>> @@ -53,7 +53,7 @@ struct ib_umem {
>>  	struct work_struct	work;
>>  	struct sg_table sg_head;
>>  	int             nmap;
>> -	int             npages;
>> +	int             sg_nents;
>
>This probably should be unsigned int

I'll fix all of the above feedback in v5.

>
>Otherwise this seems fine - what HW were you able to test it on?

X722 mostly. v1 went through our test suite. Selvin also did some huge-page testing of v1 on bnxt_re HW (after fixing the pg array overflow bug).
https://patchwork.kernel.org/patch/10819993/

For v4, I did some minimal touch tests, since most of the changes since v1 haven't really affected the core of the algorithm.

Shiraz





