Re: [LSF/MM ATTEND] Requests to attend MM Summit 2018

On 01/29/2018 06:44 PM, Michal Hocko wrote:
> On Sun 28-01-18 18:22:01, Anshuman Khandual wrote:
> [...]
>> 1. Supporting hotplug memory as a CMA region
>>
>> There are situations where a platform-identified PFN range can only
>> be used for some low-level debug/tracing purpose. The same PFN range
>> must be shared between multiple guests on a need basis, hence it's
>> logical to expect the range to be hot add/removable in each guest.
>> But once available and online in the guest, it would require a
>> guarantee that a large-order allocation (covering almost the entire
>> range) can be satisfied from that memory for the aforesaid purpose.
>> Plugging the memory in as ZONE_MOVABLE with MIGRATE_CMA makes sense
>> in this scenario, but it's not supported at the moment.
> 
> Isn't Joonsoo's[1] work doing exactly this?
> 
> [1] http://lkml.kernel.org/r/1512114786-5085-1-git-send-email-iamjoonsoo.kim@xxxxxxx
> 
> Anyway, declaring CMA regions on hotpluggable memory sounds like a
> misconfiguration. Unless I've missed something, CMA memory is not
> migratable and it is far from trivial to change that.

Right, it's far from trivial, but I think it's worth considering given
the benefit of being able to allocate large contiguous ranges from it.
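To illustrate what such support might look like: today
cma_declare_contiguous() can only run at early boot from memblock
context, which is exactly why a hotplugged range cannot be handled.
A minimal sketch of the desired flow, assuming debug_base/debug_size
come from the platform and noting that the exact CMA signatures vary
between kernel versions (the "debug_trace" name is made up):

    #include <linux/cma.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>

    static struct cma *debug_cma;

    /* Declare the platform-provided range as a fixed CMA area.
     * Currently this must happen at early boot, so it would have
     * to become legal after hot add for this idea to work. */
    static int debug_cma_reserve(phys_addr_t debug_base,
                                 phys_addr_t debug_size)
    {
            return cma_declare_contiguous(debug_base, debug_size,
                                          0, 0, 0, true, "debug_trace",
                                          &debug_cma);
    }

    /* Grab (almost) the whole range as one contiguous allocation. */
    static struct page *debug_cma_grab(size_t nr_pages)
    {
            return cma_alloc(debug_cma, nr_pages, 0, GFP_KERNEL);
    }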
 
> 
>> This basically extends the idea of relaxing CMA reservation and
>> declaration restrictions, as pointed out by Mike Kravetz.
>>
>> 2. Adding NUMA
>>
>> Adding NUMA tracking information to individual CMA areas and using
>> it in the cma_alloc() interface. In the POWER8 KVM implementation,
>> the guest HPT (Hash Page Table) is allocated from a predefined CMA
>> region. NUMA-aligned HPT allocation for any given guest VM can help
>> improve performance.
> 
> With CMA using ZONE_MOVABLE this should be rather straightforward. We
> just need a way to distribute CMA regions over nodes and make the core
> CMA allocator fall back between nodes in the nodelist order.

Right, something like that.
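A rough sketch of the fallback Michal describes, with one CMA area per
node; cma_area_of_node[] and cma_alloc_node() are invented names for
illustration, not existing kernel interfaces:

    #include <linux/cma.h>
    #include <linux/gfp.h>
    #include <linux/nodemask.h>

    static struct cma *cma_area_of_node[MAX_NUMNODES];

    static struct page *cma_alloc_node(int nid, size_t count,
                                       unsigned int align)
    {
            struct page *page;
            int node;

            /* Try the requested node first... */
            if (cma_area_of_node[nid]) {
                    page = cma_alloc(cma_area_of_node[nid], count,
                                     align, GFP_KERNEL);
                    if (page)
                            return page;
            }

            /* ...then fall back to the other nodes.  A real version
             * would walk them in nodelist/distance order rather than
             * plain node-id order. */
            for_each_online_node(node) {
                    if (node == nid || !cma_area_of_node[node])
                            continue;
                    page = cma_alloc(cma_area_of_node[node], count,
                                     align, GFP_KERNEL);
                    if (page)
                            return page;
            }
            return NULL;
    }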

>  
>> 3. Reducing CMA allocation failures
>>
>> CMA allocation failures are primarily caused by not being able to
>> isolate or migrate the given PFN range (inside alloc_contig_range()).
>> Is there a way to reduce the chances of failure?
>>
>> D. MAP_CONTIG (Mike Kravetz, Laura Abbott, Michal Hocko)
>>
>> I understand that a recent RFC from Mike Kravetz was debated, but
>> without reaching any conclusion on the viability of adding a
>> MAP_CONTIG option for user space to request large contiguous
>> physical memory.
> 
> The conclusion was pretty clear AFAIR. Our allocator simply cannot
> handle arbitrary-sized large allocations, so MAP_CONTIG is really hard
> to provide to userspace. If there are drivers (RDMA I suspect) which
> would benefit from large allocations then they should use a custom mmap
> implementation which preallocates the memory.

Looking at the previous discussion (https://lkml.org/lkml/2017/10/3/992),
there were concerns, as pointed out by other folks, that this kind of
feature would make future compaction, and hence the kernel's ability to
allocate higher-order pages, more difficult. I would still believe this
is something worth considering in the long term (obviously after
addressing the concerns raised).
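For completeness, the driver-side alternative Michal mentions would
look roughly like this; drv_mmap() is a hypothetical file_operations
->mmap handler, and error paths and buffer lifetime are omitted:

    #include <linux/fs.h>
    #include <linux/gfp.h>
    #include <linux/mm.h>

    static int drv_mmap(struct file *file, struct vm_area_struct *vma)
    {
            unsigned long size = vma->vm_end - vma->vm_start;
            struct page *pages;

            /* Preallocate a physically contiguous buffer in the
             * kernel instead of exposing a MAP_CONTIG flag. */
            pages = alloc_pages(GFP_KERNEL, get_order(size));
            if (!pages)
                    return -ENOMEM;

            /* Map the buffer into the caller's address space. */
            return remap_pfn_range(vma, vma->vm_start,
                                   page_to_pfn(pages), size,
                                   vma->vm_page_prot);
    }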
