On Wed, Jul 29, 2020 at 06:57:55PM -0700, Eric Dumazet wrote:
> Mapping as little as 64GB can take more than 10 seconds,
> triggering issues on kernels with CONFIG_PREEMPT_NONE=y.
>
> ib_umem_get() already splits the work in 2MB units on x86_64,
> adding a cond_resched() in the long-lasting loop is enough
> to solve the issue.
>
> Note that sg_alloc_table() can still use more than 100 ms,
> which is also problematic. This might be addressed later
> in ib_umem_add_sg_table(), adding new blocks in sgl
> on demand.

I have seen some patches in progress to do exactly this; the
motivation is to reduce the memory consumption if a lot of pages
are combined.

> Signed-off-by: Eric Dumazet <edumazet@xxxxxxxxxx>
> Cc: Doug Ledford <dledford@xxxxxxxxxx>
> Cc: Jason Gunthorpe <jgg@xxxxxxxx>
> Cc: linux-rdma@xxxxxxxxxxxxxxx
> ---
>  drivers/infiniband/core/umem.c | 1 +
>  1 file changed, 1 insertion(+)

Why [PATCH net] ?

Anyhow, applied to rdma for-next

Thanks,
Jason