Re: [RFC 0/7] Support high-order page bulk allocation

On Mon, Aug 17, 2020 at 06:44:50PM +0200, David Hildenbrand wrote:
> On 17.08.20 18:30, Minchan Kim wrote:
> > On Mon, Aug 17, 2020 at 05:45:59PM +0200, David Hildenbrand wrote:
> >> On 17.08.20 17:27, Minchan Kim wrote:
> >>> On Sun, Aug 16, 2020 at 02:31:22PM +0200, David Hildenbrand wrote:
> >>>> On 14.08.20 19:31, Minchan Kim wrote:
> >>>>> There is a need for special HW that requires bulk allocation of
> >>>>> high-order pages. For example, 4800 * order-4 pages.
> >>>>>
> >>>>> To meet the requirement, one option is using a CMA area, because
> >>>>> the page allocator with compaction easily fails to meet the
> >>>>> requirement under memory pressure and is too slow for 4800
> >>>>> iterations. However, CMA also has the following drawback:
> >>>>>
> >>>>>  * 4800 order-4 cma_alloc calls are too slow
> >>>>>
> >>>>> To avoid the slowness, we could try to allocate 300M of contiguous
> >>>>> memory at once and then split it into order-4 chunks.
> >>>>> The problem with this approach is that the CMA allocation fails if
> >>>>> one of the pages in that range couldn't migrate out, which happens
> >>>>> easily with fs writes under memory pressure.
> >>>>
> >>>> Why not choose a value in between? Like try to allocate MAX_ORDER - 1
> >>>> chunks and split them. That would already heavily reduce the call frequency.
> >>>
> >>> I think you meant this:
> >>>
> >>>     alloc_pages(GFP_KERNEL|__GFP_NOWARN, MAX_ORDER - 1)
> >>>
> >>> It would work if the system has lots of non-fragmented free memory.
> >>> However, once it is fragmented, it doesn't work. That's why we have
> >>> easily seen even order-4 allocation failures in the field, and that's
> >>> why CMA was there.
> >>>
> >>> CMA has more logic to isolate the memory during allocation/freeing,
> >>> as well as fragmentation avoidance, so its memory has less chance of
> >>> being stolen by others and a higher success ratio. That's why I want
> >>> this API to be used with CMA or the movable zone.
> >>
> >> I was talking about doing MAX_ORDER - 1 CMA allocations instead of one
> >> big 300M allocation. As you correctly note, memory placed into CMA
> >> should be movable, except for (short/long) term pinnings. In these
> >> cases, doing allocations smaller than 300M and splitting them up should
> >> be good enough to reduce the call frequency, no?
> > 
> > I should have written that. The 300M I mentioned is really the minimum
> > size. In some scenarios, we need way bigger than 300M, up to several
> > GB. Furthermore, the demand will increase in the near future.
> 
> And what will the driver do with that data besides providing it to the
> device? Can it be mapped to user space? I think we really need more
> information / the actual user.
> 
> >>
> >>>
> >>> A usecase is that a device can set an exclusive CMA area up when the
> >>> system boots. When the device needs 4800 * order-4 pages, it could
> >>> call this bulk API against the area so that it is effectively
> >>> guaranteed to allocate enough, fast.
> >>
> >> Just wondering
> >>
> >> a) Why does it have to be fast?
> > 
> > That's because it's related to application latency, which ends up
> > making the user experience feel bad.
> 
> Okay, but in theory, your device-needs are very similar to
> application-needs, besides you requiring order-4 pages, correct? Similar
> to an application that starts up and pins 300M (or more), just with
> > order-4 pages.

Yes.

> 
> I don't get quite yet why you need a range allocator for that. Because
> you intend to use CMA?

Yes, with CMA it could be better guaranteed and fast enough with a
little tweaking. Currently, CMA is too slow due to the IPI overheads
below.

1. set_migratetype_isolate does drain_all_pages for every pageblock.
2. __alloc_contig_migrate_range does migrate_prep.
3. alloc_contig_range does lru_add_drain_all.
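
Roughly, here is a hand-simplified sketch of the current flow (not the
exact code; error handling and most details dropped) just to show where
the IPIs sit:

int alloc_contig_range(unsigned long start, unsigned long end, ...)
{
        unsigned long pfn;

        /* 1. isolate pageblocks: one drain_all_pages IPI per pageblock */
        for (pfn = start; pfn < end; pfn += pageblock_nr_pages)
                set_migratetype_isolate(...);   /* -> drain_all_pages() */

        /* 2. migration setup drains the LRU pagevecs on every CPU */
        __alloc_contig_migrate_range(...);      /* -> migrate_prep() */

        /* 3. one more drain of the LRU pagevecs before taking the range */
        lru_add_drain_all();
        ...
}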

Thus, if we increase the call frequency as you suggest (sketched
below), the setup overhead also scales up with the size. Such overhead
makes sense if the caller requests one big contiguous chunk, but it's
too much for normal high-order allocations.
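
For concreteness, here is a minimal sketch of that "allocate bigger,
then split" scheme, assuming today's cma_alloc() signature; fill_chunks
and the constants are made up for illustration. Each MAX_ORDER - 1
block yields 2^(10 - 4) = 64 order-4 chunks, so 4800 chunks still mean
75 cma_alloc calls, each paying the isolation/drain setup above:

#include <linux/cma.h>
#include <linux/mm.h>

#define CHUNK_ORDER     4
#define BLOCK_ORDER     (MAX_ORDER - 1)         /* usually 10 */

/* Hypothetical helper, not in the tree: carve @nr order-4 chunks. */
static int fill_chunks(struct cma *cma, struct page **chunks, int nr)
{
        int per_block = 1 << (BLOCK_ORDER - CHUNK_ORDER);   /* 64 */
        int i = 0, j;

        while (i < nr) {
                /* cma_alloc(cma, count, align_order, no_warn) */
                struct page *base = cma_alloc(cma, 1 << BLOCK_ORDER,
                                              BLOCK_ORDER, true);

                if (!base)
                        return -ENOMEM; /* caller frees chunks[0..i) */
                /* the range is physically contiguous, so just slice it;
                 * struct page arithmetic is fine within one such range */
                for (j = 0; j < per_block && i < nr; j++)
                        chunks[i++] = base + (j << CHUNK_ORDER);
        }
        return 0;
}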

Maybe we could optimize those call sites to reduce or remove those IPI
calls in a smarter way, but that would have to deal with the success
ratio vs. speed trade-off in the end.

Another concern with using the existing CMA API is that it tries to
make the allocation succeed at the cost of latency, for example, by
waiting for page writeback.

That's where this new semantic comes in as a compromise, since I
believe we need some way to separate the original CMA alloc (biased to
be guaranteed but slower) from this new API (biased to be fast but less
guaranteed).
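
To make the two semantics concrete, the split I have in mind looks
roughly like the below; the bulk name and signature are illustrative
only, not necessarily what the series ends up with:

/* today: biased to succeed; may isolate, drain, wait on writeback */
struct page *cma_alloc(struct cma *cma, size_t count,
                       unsigned int align, bool no_warn);

/* new: biased to be fast; no waiting, returns what it could get */
int cma_alloc_bulk(struct cma *cma, unsigned int order,
                   unsigned int nr, struct page **pages);

A driver needing 4800 order-4 pages would call the bulk variant once
and fall back to the slow, guaranteed path only for whatever is still
missing.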

Is there any idea that would work without tweaking the existing CMA API?

> 
> > 
> >> b) Why does it need that many order-4 pages?
> > 
> > It's a HW requirement. I can't say much about that.
> 
> Hm.
> 
> > 
> >> c) How dynamic is the device need at runtime?
> > 
> > Whenever the application is launched. It depends on the user's usage pattern.
> > 
> >> d) Would it be reasonable in your setup to mark a CMA region in a way
> >> such that it will never be used for other (movable) allocations,
> > 
> > I don't get your point. If we don't want the area to be used up for
> > other movable allocations, why should we use it as CMA in the first
> > place? It sounds like reserved memory and just wastes the memory.
> 
> Right, it's just very hard to get what you are trying to achieve without
> the actual user at hand.
> 
> For example, will the pages you allocate be movable? Does the device
> allow for that? If not, then the MOVABLE zone is usually not valid
> (similar to gigantic pages not being allocated from the MOVABLE zone).
> So you're stuck with the NORMAL zone or CMA. Especially for the NORMAL
> zone, alloc_contig_range() is currently not prepared to properly handle
> sub-MAX_ORDER - 1 ranges. If any involved pageblock contains an
> unmovable page, the allocation will fail (see pageblock isolation /
> has_unmovable_pages()). So CMA would be your only option.

Those pages are not migratable, so I agree here that CMA would be the only option.



