Re: [PATCH] mm/gup: restore the ability to pin more than 2GB at a time

On Wed, Oct 30, 2024 at 11:34:49AM -0700, John Hubbard wrote:

> From a very high level design perspective, it's not yet clear to me
> that there is either a "preferred" or "not recommended" aspect to
> pinning in batches vs. all at once here, as long as one stays
> below the type (int, long, unsigned...) limits of the API. Batching
> seems like what you do if the internal implementation is crippled
> and unable to meet its API requirements. So the fact that many
> callers do batching is sort of "tail wags dog".

No.. all things need to do batching, because nothing should be storing
a linear struct page array that enormous. That is going to create
vmemmap pressure that is not desirable.

For instance, rdma pins in batches and copies the pins into a
scatterlist, so it never has an allocation over PAGE_SIZE.

iommufd transfers them into a radix tree.

It is not so much that there is a limit as that good kernel code
just *shouldn't* be allocating gigantic contiguous memory arrays at
all.
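
The pattern above can be sketched roughly as follows. This is a
hypothetical illustration of the batching idea, not code from the patch
under discussion; the helper name pin_range_batched() and the consume
callback are made up, and the pin_user_pages() signature here matches
recent kernels (older ones took an extra vmas argument):

```c
/*
 * Illustrative sketch: pin a large user range in fixed-size chunks so
 * the struct page pointer array never exceeds one page, instead of
 * allocating one huge linear array for the whole range.
 */
#define PIN_BATCH (PAGE_SIZE / sizeof(struct page *))

static int pin_range_batched(unsigned long start, unsigned long npages,
			     unsigned int gup_flags,
			     int (*consume)(struct page **pages, long n))
{
	struct page **batch;
	long pinned;
	int ret = 0;

	/* One page worth of pointers is the largest allocation we make. */
	batch = (struct page **)__get_free_page(GFP_KERNEL);
	if (!batch)
		return -ENOMEM;

	while (npages) {
		long n = min_t(unsigned long, npages, PIN_BATCH);

		pinned = pin_user_pages(start, n, gup_flags, batch);
		if (pinned <= 0) {
			ret = pinned ? pinned : -EFAULT;
			break;
		}

		/*
		 * Hand the batch off to the caller's long-term storage,
		 * e.g. a scatterlist (rdma) or a radix tree (iommufd).
		 */
		ret = consume(batch, pinned);
		if (ret) {
			unpin_user_pages(batch, pinned);
			break;
		}

		start += pinned << PAGE_SHIFT;
		npages -= pinned;
	}

	free_page((unsigned long)batch);
	return ret;
}
```

The point of the sketch is the shape, not the details: the transient
pointer array is bounded regardless of how many pages the caller pins
in total.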

Jason
