Re: [PATCH RFC 1/9] memremap: add ZONE_DEVICE support for compound pages

On 2/20/21 1:43 AM, Dan Williams wrote:
> On Tue, Dec 8, 2020 at 9:59 PM John Hubbard <jhubbard@xxxxxxxxxx> wrote:
>> On 12/8/20 9:28 AM, Joao Martins wrote:
>>> diff --git a/mm/memremap.c b/mm/memremap.c
>>> index 16b2fb482da1..287a24b7a65a 100644
>>> --- a/mm/memremap.c
>>> +++ b/mm/memremap.c
>>> @@ -277,8 +277,12 @@ static int pagemap_range(struct dev_pagemap *pgmap, struct mhp_params *params,
>>>       memmap_init_zone_device(&NODE_DATA(nid)->node_zones[ZONE_DEVICE],
>>>                               PHYS_PFN(range->start),
>>>                               PHYS_PFN(range_len(range)), pgmap);
>>> -     percpu_ref_get_many(pgmap->ref, pfn_end(pgmap, range_id)
>>> -                     - pfn_first(pgmap, range_id));
>>> +     if (pgmap->flags & PGMAP_COMPOUND)
>>> +             percpu_ref_get_many(pgmap->ref, (pfn_end(pgmap, range_id)
>>> +                     - pfn_first(pgmap, range_id)) / PHYS_PFN(pgmap->align));
>>
>> Is there some reason that we cannot use range_len(), instead of pfn_end() minus
>> pfn_first()? (Yes, this more about the pre-existing code than about your change.)
>>
>> And if not, then why are the nearby range_len() uses OK? I realize that range_len()
>> is simpler and skips a case, but it's not clear that it's required here. But I'm
>> new to this area so be warned. :)
> 
> There's a subtle distinction between the range that was passed in and
> the pfns that are activated inside of it. See the offset trickery in
> pfn_first().
> 
>> Also, dividing by PHYS_PFN() feels quite misleading: that function does what you
>> happen to want, but is not named accordingly. Can you use or create something
>> more accurately named? Like "number of pages in this large page"?
> 
> It's not the number of pages in a large page, it's converting bytes to
> pages. Other places in the kernel write it as (x >> PAGE_SHIFT), but my
> thought process was that if I'm going to add () I might as well use a
> macro that already does this.
> 
> That said, I think this calculation is broken precisely because
> pfn_first() makes the result unaligned.
> 
> Rather than fix the unaligned pfn_first() problem I would use this
> support as an opportunity to revisit the option of storing pages in
> the vmem_altmap reserve space. The altmap's whole reason for existence
> was that the ~1.5% struct page overhead of large PMEM might completely
> swamp DRAM. However if
> that overhead is reduced by an order (or orders) of magnitude the
> primary need for vmem_altmap vanishes.
> 
> Now, we'll still need to keep it around for the ->align == PAGE_SIZE
> case, but for the most part, existing deployments that specify page
> map on PMEM and an align > PAGE_SIZE can instead just transparently be
> upgraded to page map on a smaller amount of DRAM.
> 
I feel the altmap is still relevant. Even with the struct page reuse for
tail pages, the overhead for 2M align is still non-negligible, i.e. 4G per
1TB (strictly speaking, about what's stored in the altmap). Muchun and
Matthew were thinking (in another thread) about compound_head() adjustments
that could probably bring this overhead down to 2G (if we learn to
differentiate the reused head page from the real head page). But even then
it's still 2G per 1TB. 1G pages, though, have a better story for removing
the need for the altmap.
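
To make those numbers concrete, here's the rough back-of-the-envelope I
have in mind; the "2 unique vmemmap pages per 2M" and "1 unique vmemmap
page per 2M" splits are just my reading of the figures above, not
something taken verbatim from the series:

#include <stdio.h>

/* Illustrative only: struct page / vmemmap overhead per 1TB of device memory. */
int main(void)
{
	unsigned long long tib   = 1ULL << 40;	/* 1TB of device memory */
	unsigned long long base  = 1ULL << 12;	/* 4K base page */
	unsigned long long align = 1ULL << 21;	/* 2M compound page */
	unsigned long long stp   = 64;		/* sizeof(struct page) */

	/* No reuse: one 64-byte struct page per 4K page -> 16G per 1TB. */
	printf("full memmap:  %lluG\n", ((tib / base) * stp) >> 30);

	/* Tail page reuse: assume 2 unique vmemmap pages per 2M -> 4G per 1TB. */
	printf("2M + reuse:   %lluG\n", ((tib / align) * 2 * base) >> 30);

	/* compound_head() adjustment: assume 1 unique vmemmap page per 2M -> 2G per 1TB. */
	printf("2M + head fix: %lluG\n", ((tib / align) * 1 * base) >> 30);

	return 0;
}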

One thing to point out about the altmap is that the degradation (in pinning
and unpinning) we observed with struct pages in device memory is no longer
observed once 1) we batch ref count updates as we move to compound pages,
and 2) we reuse tail pages, which seems to make these struct pages more
likely to stay in cache and perhaps contributes to dirtying a lot fewer
cachelines.
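
For reference, a rough sketch of what I mean by batching the ref count
updates when unpinning; compound_head() is the real helper, but
release_page_refs() below is only a placeholder, not something this
series actually adds:

#include <linux/mm.h>

/*
 * Instead of touching the refcount of every base page, walk the pinned
 * pages array, count how many consecutive entries share the same compound
 * head, and drop all of those references with one update on the head page.
 */
static void unpin_pages_batched(struct page **pages, unsigned long npages)
{
	unsigned long i = 0;

	while (i < npages) {
		struct page *head = compound_head(pages[i]);
		unsigned long refs = 1;

		while (i + refs < npages &&
		       compound_head(pages[i + refs]) == head)
			refs++;

		/* Placeholder: one refcount update covering 'refs' pins. */
		release_page_refs(head, refs);
		i += refs;
	}
}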

	Joao



