Re: [RFC 0/7] Support high-order page bulk allocation

Excerpts from Minchan Kim's message of August 18, 2020 9:34 am:
> On Mon, Aug 17, 2020 at 06:44:50PM +0200, David Hildenbrand wrote:
>> On 17.08.20 18:30, Minchan Kim wrote:
>> > On Mon, Aug 17, 2020 at 05:45:59PM +0200, David Hildenbrand wrote:
>> >> On 17.08.20 17:27, Minchan Kim wrote:
>> >>> On Sun, Aug 16, 2020 at 02:31:22PM +0200, David Hildenbrand wrote:
>> >>>> On 14.08.20 19:31, Minchan Kim wrote:
>> >>>>> There is a need for special HW to require bulk allocation of
>> >>>>> high-order pages. For example, 4800 * order-4 pages.
>> >>>>>
>> >>>>> To meet the requirement, one option is to use a CMA area,
>> >>>>> because the page allocator with compaction easily fails to meet
>> >>>>> the requirement under memory pressure and is too slow for 4800
>> >>>>> iterations. However, CMA also has the following drawbacks:
>> >>>>>
>> >>>>>  * 4800 order-4 cma_alloc calls are too slow
>> >>>>>
>> >>>>> To avoid the slowness, we could try to allocate 300M of contiguous
>> >>>>> memory at once and then split it into order-4 chunks.
>> >>>>> The problem with this approach is that the CMA allocation fails if
>> >>>>> even one page in that range couldn't migrate out, which happens
>> >>>>> easily with fs writes under memory pressure.
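
(For context, a rough sketch of the big-allocation-and-split approach
described above; this is not code from the series, and the helper name is
made up. cma_alloc() hands back a run of individual pages, so the "split"
is just partitioning the run at 16-page strides.)

    #include <linux/cma.h>
    #include <linux/mm.h>

    #define CHUNK_ORDER 4
    #define CHUNK_PAGES (1 << CHUNK_ORDER)

    /*
     * Sketch only: one huge cma_alloc() (e.g. 4800 * 16 pages, ~300M with
     * 4K pages), then hand the range out as order-4 sized chunks.  This is
     * the variant that fails as a whole if a single page in the range
     * cannot be migrated out.
     */
    static int fill_chunks_from_one_big_alloc(struct cma *cma,
                                              struct page **chunks,
                                              unsigned int nr_chunks)
    {
            unsigned long nr_pages = (unsigned long)nr_chunks * CHUNK_PAGES;
            struct page *base;
            unsigned int i;

            base = cma_alloc(cma, nr_pages, CHUNK_ORDER, false);
            if (!base)
                    return -ENOMEM;

            for (i = 0; i < nr_chunks; i++)
                    chunks[i] = nth_page(base, (unsigned long)i * CHUNK_PAGES);

            return 0;
    }
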
>> >>>>
>> >>>> Why not choose a value in between? Like trying to allocate MAX_ORDER - 1
>> >>>> chunks and splitting them. That would already heavily reduce the call frequency.
>> >>>
>> >>> I think you meant this:
>> >>>
>> >>>     alloc_pages(GFP_KERNEL|__GFP_NOWARN, MAX_ORDER - 1)
>> >>>
>> >>> It would work if the system has lots of non-fragmented free memory.
>> >>> However, once memory is fragmented, it doesn't work. That's why we
>> >>> have easily seen even order-4 allocation failures in the field, and
>> >>> that's why CMA was there.
>> >>>
>> >>> CMA has extra logic to isolate the memory during allocation/freeing,
>> >>> as well as fragmentation avoidance, so it has less chance of being
>> >>> stolen by others and a higher success ratio. That's why I want this
>> >>> API to be used with CMA or the movable zone.
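
(A rough sketch of the MAX_ORDER - 1 variant being discussed, again not
code from the series and with an invented helper name. split_page() turns
a non-compound higher-order page into independent order-0 pages, so the
order-4 chunks are once more just 16-page strides within the block.)

    #include <linux/gfp.h>
    #include <linux/mm.h>

    #define CHUNK_ORDER 4

    /*
     * Sketch only: grab one MAX_ORDER - 1 block from the buddy allocator
     * and carve it into order-4 chunks.  Returns the number of chunks
     * produced, or -ENOMEM, which is easy to hit once memory is fragmented.
     */
    static int fill_chunks_from_one_block(struct page **chunks)
    {
            const unsigned int block_order = MAX_ORDER - 1;
            const unsigned int per_block = 1 << (block_order - CHUNK_ORDER);
            struct page *block;
            unsigned int i;

            block = alloc_pages(GFP_KERNEL | __GFP_NOWARN, block_order);
            if (!block)
                    return -ENOMEM;

            /* Make every page in the block an independent order-0 page. */
            split_page(block, block_order);

            for (i = 0; i < per_block; i++)
                    chunks[i] = nth_page(block, i << CHUNK_ORDER);

            return per_block;
    }
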
>> >>
>> >> I was talking about doing MAX_ORDER - 1 CMA allocations instead of one
>> >> big 300M allocation. As you correctly note, memory placed into CMA
>> >> should be movable, except for (short/long) term pinnings. In these
>> >> cases, doing allocations smaller than 300M and splitting them up should
>> >> be good enough to reduce the call frequency, no?
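
(Only a sketch of what is being suggested here, with invented names: a
series of smaller cma_alloc() calls instead of one huge one, so that a
single unmovable page only ruins one small allocation rather than the
whole 300M+ range.)

    #include <linux/cma.h>
    #include <linux/kernel.h>
    #include <linux/mm.h>

    #define CHUNK_ORDER 4
    #define CHUNK_PAGES (1 << CHUNK_ORDER)

    /* Sketch only: fill the pool MAX_ORDER - 1 pages' worth at a time. */
    static int fill_chunks_piecewise(struct cma *cma, struct page **chunks,
                                     unsigned int nr_chunks)
    {
            const unsigned long step = 1UL << (MAX_ORDER - 1);
            unsigned int filled = 0;

            while (filled < nr_chunks) {
                    unsigned long left = (unsigned long)(nr_chunks - filled) * CHUNK_PAGES;
                    unsigned long want = min(step, left);
                    struct page *base;
                    unsigned long i;

                    base = cma_alloc(cma, want, CHUNK_ORDER, false);
                    if (!base)
                            return -ENOMEM;  /* caller releases what it already has */

                    for (i = 0; i < want / CHUNK_PAGES; i++)
                            chunks[filled++] = nth_page(base, i * CHUNK_PAGES);
            }
            return 0;
    }
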
>> > 
>> > I should have written that. The 300M I mentioned is really a minimum size.
>> > In some scenarios, we need much more than 300M, up to several GB.
>> > Furthermore, the demand will only increase in the near future.
>> 
>> And what will the driver do with that data besides providing it to the
>> device? Can it be mapped to user space? I think we really need more
>> information / the actual user.
>> 
>> >>
>> >>>
>> >>> A use case is a device that sets an exclusive CMA area up when the
>> >>> system boots. When the device needs 4800 order-4 pages, it could call
>> >>> this bulk API against that area so that it is effectively guaranteed
>> >>> to allocate enough pages quickly.
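
(A sketch of that use case with made-up names; the reservation call is the
stock cma_declare_contiguous(), but the bulk entry point below is only a
placeholder for whatever this RFC ends up exposing, not its actual
signature.)

    #include <linux/cma.h>
    #include <linux/init.h>
    #include <linux/sizes.h>

    static struct cma *hw_cma;  /* exclusive area for this device */

    /*
     * Must be called from early (memblock-time) reserve code, e.g. the
     * arch/board setup path, before the buddy allocator owns the memory.
     */
    static int __init hw_reserve_exclusive_cma(void)
    {
            return cma_declare_contiguous(0, 300 * SZ_1M, 0, 0, 0, false,
                                          "hw_bulk", &hw_cma);
    }

    /* Placeholder prototype for the bulk allocation this thread is about. */
    int hw_bulk_alloc_order4(struct cma *cma, struct page **chunks,
                             unsigned int nr_chunks);
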
>> >>
>> >> Just wondering
>> >>
>> >> a) Why does it have to be fast?
>> > 
>> > That's because it's related to application latency, which the user
>> > ends up feeling.
>> 
>> Okay, but in theory, your device's needs are very similar to an
>> application's needs, except that you require order-4 pages, correct?
>> Similar to an application that starts up and pins 300M (or more), just
>> with order-4 pages.
> 
> Yes.

Linux has never seriously catered for broken devices that require
large contiguous physical ranges to perform well.

The problem with doing this is that it allows hardware designers to get
progressively lazier and foist more of their work onto us, and then
we'd be stuck with it.

I think you need to provide much better justification than this, and
should probably just solve it with some hack like allocating larger
pages, pre-allocating some of that CMA space before the user opens
the device, or requiring the application to use hugetlbfs.
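
(For the hugetlbfs route, a minimal userspace sketch, assuming huge pages
were reserved beforehand via vm.nr_hugepages; how the buffer then reaches
the device is driver-specific and omitted.)

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 300UL << 20;  /* ~300M backed by huge pages */
            void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

            if (buf == MAP_FAILED) {
                    perror("mmap(MAP_HUGETLB)");
                    return 1;
            }
            /* ... hand buf to the device via the driver's own mmap/ioctl ... */
            munmap(buf, len);
            return 0;
    }
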

Thanks,
Nick




