Re: [PATCH for-next] RDMA/core: Fix best page size finding when it can cross SG entries

On Thu, Feb 13, 2025 at 02:51:26PM +0200, Leon Romanovsky wrote:
> diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
> index e7e428369159..63a92d6cfbc2 100644
> --- a/drivers/infiniband/core/umem.c
> +++ b/drivers/infiniband/core/umem.c
> @@ -112,8 +112,7 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
>                 /* If the current entry is physically contiguous with the previous
>                  * one, no need to take its start addresses into consideration.
>                  */
> -               if (curr_base + curr_len != sg_dma_address(sg)) {
> -
> +               if (curr_base != sg_dma_address(sg) - curr_len) {
>                         curr_base = sg_dma_address(sg);
>                         curr_len = 0;

I'm not sure about this: what ensures sg_dma_address() > curr_len?

curr_base + curr_len could also overflow; we've already seen the AMD
IOMMU use very high addresses.

Jason



