Re: [PATCH RFC v3 08/35] mm: cma: Introduce cma_alloc_range()

Hi,

On Wed, Jan 31, 2024 at 11:54:17AM +0530, Anshuman Khandual wrote:
> 
> 
> On 1/30/24 17:05, Alexandru Elisei wrote:
> > Hi,
> > 
> > On Tue, Jan 30, 2024 at 10:50:00AM +0530, Anshuman Khandual wrote:
> >>
> >> On 1/25/24 22:12, Alexandru Elisei wrote:
> >>> Today, cma_alloc() is used to allocate a contiguous memory region. The
> >>> function allows the caller to specify the number of pages to allocate, but
> >>> not the starting address. cma_alloc() will walk over the entire CMA region
> >>> trying to allocate the first available range of the specified size.
> >>>
> >>> Introduce cma_alloc_range(), which makes CMA more versatile by allowing the
> >>> caller to specify a particular range in the CMA region, defined by the
> >>> start pfn and the size.
> >>>
> >>> arm64 will make use of this function when tag storage management is
> >>> implemented: cma_alloc_range() will be used to reserve the tag storage
> >>> associated with a tagged page.
> >> Basically, you would like to pass in a preferred start address, and the
> >> allocation would just fail if a contiguous range is not available at
> >> that starting address?
> >>
> >> Then why not just change cma_alloc() to take a new 'start_pfn' argument?
> >> Why create a new but almost identical allocator?
> > I tried doing that, and I gave up because:
> > 
> > - It made cma_alloc() even more complex and hard to follow.
> > 
> > - What value should 'start_pfn' be to tell cma_alloc() that it should be
> >   ignored? Or, to put it another way, what pfn number is invalid on **all**
> >   platforms that Linux supports?
> > 
> > I can give it another go if we can come up with an invalid value for
> > 'start_pfn'.
> 
> Something negative might work. How about -1/-1UL? A quick search gives
> some instances such as ...
> 
> git grep "pfn == -1"
> 
> mm/mm_init.c:   if (*start_pfn == -1UL)
> mm/vmscan.c:            if (pfn == -1)
> mm/vmscan.c:            if (pfn == -1)
> mm/vmscan.c:            if (pfn == -1)
> tools/testing/selftests/mm/hugepage-vmemmap.c:  if (pfn == -1UL) {
> 
> Could -1UL not be abstracted into a common macro, MM_INVALID_PFN, to be
> used in such scenarios, including here?

Ah yes, you are right, get_pte_pfn() already uses -1 as an invalid pfn, so
I can just use that.
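
Roughly what I have in mind (just a sketch; the macro name, where it
lives, and the __cma_alloc()/__cma_alloc_range() helpers below are
placeholders, not final code):

#define MM_INVALID_PFN	(~0UL)

struct page *cma_alloc(struct cma *cma, unsigned long start_pfn,
		       unsigned long count, unsigned int align,
		       bool no_warn)
{
	/*
	 * If the caller asked for a specific range, try only
	 * [start_pfn, start_pfn + count) and fail if it cannot be
	 * allocated; otherwise scan the whole CMA area, as today.
	 */
	if (start_pfn != MM_INVALID_PFN)
		return __cma_alloc_range(cma, start_pfn, count, no_warn);

	return __cma_alloc(cma, count, align, no_warn);
}

Existing callers would simply pass MM_INVALID_PFN to keep the current
behaviour.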

Will definitely give it a go on the next iteration, thanks for the
suggestion!

> 
> > 
> >> But then I am wondering why this could not be done in the arm64 platform
> >> code itself, operating on a CMA area reserved just for tag storage. Unless
> >> this new allocator has uses beyond MTE, it could be implemented in the
> >> platform code itself.
> > I had the same idea in the previous iteration, but David Hildenbrand
> > suggested this approach [1].
> > 
> > [1] https://lore.kernel.org/linux-fsdevel/2aafd53f-af1f-45f3-a08c-d11962254315@xxxxxxxxxx/
> 
> There are two different cma_alloc() proposals here, including the next
> patch, i.e. "mm: cma: Fast track allocating memory when the pages are free":
> 
> 1) Augment cma_alloc(), or add cma_alloc_range(), with a start_pfn parameter
> 2) Speed up cma_alloc() for small allocation requests when pages are free
> 
> The second one, if separated out from this series, could be considered on
> its own, as it will help all existing cma_alloc() callers. The first one
> definitely needs a use case, as provided in this series.

I understand, thanks for the input!
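
For context, the use case on the arm64 side boils down to something like
the sketch below (the names are placeholders and the exact
cma_alloc_range() prototype is the one in this patch; this is only meant
to show the shape of the call, i.e. a fixed start pfn plus a size):

/*
 * Reserve the tag storage block that mirrors a tagged page. The pfn
 * and the number of pages are dictated by the tag storage layout, so
 * the allocation must land at a fixed range, hence the start pfn.
 */
static int reserve_tag_storage(struct cma *tag_cma, struct page *page)
{
	unsigned long tag_pfn = page_to_tag_pfn(page);		/* placeholder */
	unsigned long nr_tag_pages = tag_pages_per_page();	/* placeholder */

	return cma_alloc_range(tag_cma, tag_pfn, nr_tag_pages, GFP_KERNEL);
}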

Alex



