Re: + mm-hugetlb-make-alloc_gigantic_page-available-for-general-use.patch added to -mm tree

On 10/15/2019 04:54 PM, Matthew Wilcox wrote:
> On Tue, Oct 15, 2019 at 03:00:49PM +0530, Anshuman Khandual wrote:
>> On 10/15/2019 01:59 AM, Matthew Wilcox wrote:
>>> On Mon, Oct 14, 2019 at 02:17:30PM +0200, Michal Hocko wrote:
>>>> On Fri 11-10-19 13:29:32, Andrew Morton wrote:
>>>>> alloc_gigantic_page() implements an allocation method where it scans over
>>>>> various zones looking for a large contiguous memory block which could not
>>>>> have been allocated through the buddy allocator.  A subsequent patch which
>>>>> tests arch page table helpers needs such a method to allocate a
>>>>> PUD_SIZE sized memory block.  In the future such methods might have
>>>>> other use cases as well.  So alloc_gigantic_page() has been split,
>>>>> carving out the actual memory allocation into a new helper made
>>>>> available as alloc_gigantic_page_order().
>>>>
>>>> You are exporting a helper used for hugetlb internally. Is this really
>>>> what is needed? I haven't followed this patchset but don't you simply
>>>> need a generic 1GB allocator? If yes then you should be looking at
>>>> alloc_contig_range.
>>>
>>> He actually doesn't need to allocate any memory at all.  All he needs is
>>> the address of a valid contiguous PUD-sized chunk of memory.
>>>
>>
>> We had already discussed the benefits of allocated memory over a
>> synthetic pfn potentially derived from a kernel text symbol. Moreover,
>> we are not adding any new helper or new code for this purpose, but
>> are instead just reusing code which is already present.
> 
> Yes, and I'm pretty sure you're just wrong.  But I don't know that you're
> wrong for all architectures.  Re-reading that, I'm still not sure you
> understood what I was suggesting, so I'll say it again differently.

Sure, really appreciate that.

> 
> Look up a kernel symbol, something like kernel_init().  This will
> have a virtual address upon which it is safe to call virt_to_pfn().
> Let's presume it's in PFN 0x12345678.  Use 0x12345678 as the PFN for
> your PTE level tests.

Got it.

> 
> Then clear the bottom (e.g.) 9 bits from it and use 0x12345600 for your PMD
> level tests.  Then clear the bottom 18 bits from it and use 0x12340000
> for your PUD level tests.

Got it.

> 
> I don't think it matters whether there's physical memory at PFN 0x12300000
> or not.  The various p?d_* functions should work as long as the PFN is
> in some plausible range.

A quick check confirms that the p?d_* functions work on those pfns.

Just for my understanding: where would the checks happen that verify
whether a pfn mapped into an entry falls within a plausible range? Could
such a check be platform specific? Could there be any problem with these
pfns not testing positive with pfn_valid()? I am not sure these could
ever cause a problem, just wondering.

One of the other reasons for explicit memory allocation was isolation. By
using pfns as explained above, even though the test is transient, it might
map memory which is simultaneously in use somewhere else. Though it might
never cause any problem (this being a test page table not walked by the
MMU), is breaking the test's resource isolation not a valid concern? Just
thinking.

As mentioned earlier, I can definitely see the benefits of the pfn approach.

> 
> I gave up arguing because you seemed uninterested in this approach,
> but now that Michal is pointing out that your approach is all wrong,
> maybe you'll listen.
> 

I had assumed the previous discussion had just remained inconclusive, but
fair enough; I will try this approach out on the x86 and arm64 platforms.



