Re: Fwd: [PATCH 1/1] RDMA/umem: add back hugepage sg list

On Sun, Mar 21, 2021 at 4:38 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Sat, Mar 20, 2021 at 11:38:26AM +0800, Zhu Yanjun wrote:
> > On Fri, Mar 19, 2021 at 9:48 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Mar 19, 2021 at 09:33:13PM +0800, Zhu Yanjun wrote:
> > > > On Fri, Mar 19, 2021 at 9:01 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Sat, Mar 13, 2021 at 11:02:41AM +0800, Zhu Yanjun wrote:
> > > > > > On Fri, Mar 12, 2021 at 10:01 PM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Fri, Mar 12, 2021 at 09:49:52PM +0800, Zhu Yanjun wrote:
> > > > > > > > In short, the sg list from __sg_alloc_table_from_pages is different
> > > > > > > > from the sg list from ib_umem_add_sg_table.
> > > > > > >
> > > > > > > I don't care about different. Tell me what is wrong with what we have
> > > > > > > today.
> > > > > > >
> > > > > > > I thought your first message said the sgl's were too small, but now
> > > > > > > you seem to say they are too big?
> > > > > >
> > > > > > Sure.
> > > > > >
> > > > > > In the sg list from __sg_alloc_table_from_pages, the length of sg is too big.
> > > > > > And the dma addresses are like the following:
> > > > > >
> > > > > > "
> > > > > > sg_dma_address(sg):0x4b3c1ce000
> > > > > > sg_dma_address(sg):0x4c3c1cd000
> > > > > > sg_dma_address(sg):0x4d3c1cc000
> > > > > > sg_dma_address(sg):0x4e3c1cb000
> > > > > > "
> > > > >
> > > > > Ok, so how does too big a dma segment size cause
> > > > > __sg_alloc_table_from_pages() to return sg elements that are too
> > > > > small?
> > > > >
> > > > > I assume there is some kind of maths overflow here?
> > > > Please check this function __sg_alloc_table_from_pages
> > > > "
> > > > ...
> > > >  457                 /* Merge contiguous pages into the last SG */
> > > >  458                 prv_len = prv->length;
> > > >  459                 while (n_pages && page_to_pfn(pages[0]) == paddr) {
> > > >  460                         if (prv->length + PAGE_SIZE > max_segment)    <-- max_segment is too big, so n_pages will be 0 and the function will goto out and exit.
> > >
> > > You already said this.
> > >
> > > You are reporting 4k pages, if max_segment is larger than 4k there is
> > > no such thing as "too big"
> > >
> > > I assume it is "too small" because of some maths overflow.
> >
> >  459                 while (n_pages && page_to_pfn(pages[0]) == paddr) {
> >  460                         if (prv->length + PAGE_SIZE > max_segment)  <-- if max_segment is big, n_pages is zero.
>
> What does n_pages have to do with max_segment?

With the following snippet
"
        struct ib_umem *region;
        region = ib_umem_get(pd->device, start, len, access);

        page_size = ib_umem_find_best_pgsz(region,
                                           SZ_4K | SZ_2M | SZ_1G,
                                           virt);
"
Before commit 0c16d9635e3a ("RDMA/umem: Move to allocate SG table from
pages"), the variable page_size was SZ_2M.
After that commit, page_size is SZ_4K.
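
As a rough illustration (a userspace sketch, not the kernel code; best_pgsz()
below is only a simplified, hypothetical stand-in for ib_umem_find_best_pgsz(),
which also takes the user virtual address into account), the dma addresses
quoted above are not 2M-aligned, so an alignment check like this leaves only
SZ_4K in the bitmap:
"
#include <stdint.h>
#include <stdio.h>

#define SZ_4K 0x1000UL
#define SZ_2M 0x200000UL
#define SZ_1G 0x40000000UL

/* Keep a candidate page size only if every dma address is aligned to it. */
static unsigned long best_pgsz(const uint64_t *dma, int n, unsigned long bitmap)
{
	const unsigned long cand[] = { SZ_1G, SZ_2M, SZ_4K };

	for (unsigned int c = 0; c < sizeof(cand) / sizeof(cand[0]); c++) {
		int ok = (bitmap & cand[c]) != 0;

		for (int i = 0; ok && i < n; i++)
			if (dma[i] & (cand[c] - 1))  /* address not aligned to candidate */
				ok = 0;
		if (ok)
			return cand[c];
	}
	return 0;
}

int main(void)
{
	/* sg_dma_address() values reported earlier in this thread */
	const uint64_t dma[] = {
		0x4b3c1ce000, 0x4c3c1cd000, 0x4d3c1cc000, 0x4e3c1cb000
	};

	/* prints 0x1000: only SZ_4K survives the alignment check */
	printf("best page size: 0x%lx\n",
	       best_pgsz(dma, 4, SZ_4K | SZ_2M | SZ_1G));
	return 0;
}
"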

IMHO, you can reproduce this problem on your local host.

Zhu Yanjun
>
> Please try to explain clearly.
>
> Jason


