Re: + numa-improve-the-efficiency-of-calculating-pages-loss.patch added to mm-unstable branch

On Mon, Oct 16, 2023 at 10:11:11PM +0800, Liam Ni wrote:
> On Thu, 12 Oct 2023 at 17:14, Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> >
> > On Mon, Oct 09, 2023 at 05:52:59PM -0700, Andrew Morton wrote:
> > >
> > > The patch titled
> > >      Subject: NUMA: improve the efficiency of calculating pages loss
> >
> > We don't calculate the lost pages here, but rather the pages with no
> > NUMA node assigned. How about
> >
> > NUMA: optimize detection of memory with no node id assigned by firmware
> >
> Thanks, I will send patch v5.
> 
> 
> > >  arch/x86/mm/numa.c       |   33 +--------------------------------
> >
> > arch/loongarch/kernel/numa.c copied the same check from x86, it should be
> > updated as well.
> 
> In the previous version (patch v3), I submitted a patch for loongarch,
> but there was no response. How about submitting the loongarch patch
> after patch v5 is merged into the mainline?

It's fine to include loongarch changes in v5.
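
Since arch/loongarch/kernel/numa.c carries a copy of the x86 check, the
substitution there should presumably be the same one-liner (an untested
sketch, assuming the call site still mirrors the x86 one):

	-	if (!numa_meminfo_cover_memory(mi))
	+	if (!memblock_validate_numa_coverage(SZ_1M))
	 		return -EINVAL;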
 
> >
> > >  include/linux/memblock.h |    1 +
> > >  mm/memblock.c            |   21 +++++++++++++++++++++
> > >  3 files changed, 23 insertions(+), 32 deletions(-)
> > >
> > > --- a/arch/x86/mm/numa.c~numa-improve-the-efficiency-of-calculating-pages-loss
> > > +++ a/arch/x86/mm/numa.c
> > > @@ -448,37 +448,6 @@ int __node_distance(int from, int to)
> > >  EXPORT_SYMBOL(__node_distance);
> > >
> > >  /*
> > > - * Sanity check to catch more bad NUMA configurations (they are amazingly
> > > - * common).  Make sure the nodes cover all memory.
> > > - */
> > > -static bool __init numa_meminfo_cover_memory(const struct numa_meminfo *mi)
> > > -{
> > > -     u64 numaram, e820ram;
> > > -     int i;
> > > -
> > > -     numaram = 0;
> > > -     for (i = 0; i < mi->nr_blks; i++) {
> > > -             u64 s = mi->blk[i].start >> PAGE_SHIFT;
> > > -             u64 e = mi->blk[i].end >> PAGE_SHIFT;
> > > -             numaram += e - s;
> > > -             numaram -= __absent_pages_in_range(mi->blk[i].nid, s, e);
> > > -             if ((s64)numaram < 0)
> > > -                     numaram = 0;
> > > -     }
> > > -
> > > -     e820ram = max_pfn - absent_pages_in_range(0, max_pfn);
> > > -
> > > -     /* We seem to lose 3 pages somewhere. Allow 1M of slack. */
> > > -     if ((s64)(e820ram - numaram) >= (1 << (20 - PAGE_SHIFT))) {
> > > -             printk(KERN_ERR "NUMA: nodes only cover %LuMB of your %LuMB e820 RAM. Not used.\n",
> > > -                    (numaram << PAGE_SHIFT) >> 20,
> > > -                    (e820ram << PAGE_SHIFT) >> 20);
> > > -             return false;
> > > -     }
> > > -     return true;
> > > -}
> > > -
> > > -/*
> > >   * Mark all currently memblock-reserved physical memory (which covers the
> > >   * kernel's own memory ranges) as hot-unswappable.
> > >   */
> > > @@ -583,7 +552,7 @@ static int __init numa_register_memblks(
> > >                       return -EINVAL;
> > >               }
> > >       }
> > > -     if (!numa_meminfo_cover_memory(mi))
> > > +     if (!memblock_validate_numa_coverage(SZ_1M))
> > >               return -EINVAL;
> > >
> > >       /* Finally register nodes. */
> > > --- a/include/linux/memblock.h~numa-improve-the-efficiency-of-calculating-pages-loss
> > > +++ a/include/linux/memblock.h
> > > @@ -123,6 +123,7 @@ int memblock_physmem_add(phys_addr_t bas
> > >  void memblock_trim_memory(phys_addr_t align);
> > >  bool memblock_overlaps_region(struct memblock_type *type,
> > >                             phys_addr_t base, phys_addr_t size);
> > > +bool memblock_validate_numa_coverage(const u64 limit);
> > >  int memblock_mark_hotplug(phys_addr_t base, phys_addr_t size);
> > >  int memblock_clear_hotplug(phys_addr_t base, phys_addr_t size);
> > >  int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
> > > --- a/mm/memblock.c~numa-improve-the-efficiency-of-calculating-pages-loss
> > > +++ a/mm/memblock.c
> > > @@ -734,6 +734,27 @@ int __init_memblock memblock_add(phys_ad
> > >       return memblock_add_range(&memblock.memory, base, size, MAX_NUMNODES, 0);
> > >  }
> > >
> >
> > Please add a kernel-doc description.
> >
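Something along these lines would do (only a sketch, the exact wording is
of course up to you):

	/**
	 * memblock_validate_numa_coverage - check that memory with no node ID
	 * assigned by firmware does not exceed a threshold
	 * @threshold: maximal amount of memory (in bytes) allowed to have no
	 * node ID assigned
	 *
	 * A buggy firmware may report memory that does not belong to any node.
	 *
	 * Return: true when the memory with no node ID assigned stays below
	 * @threshold, false otherwise.
	 */
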
> > > +bool __init_memblock memblock_validate_numa_coverage(const u64 limit)
> >
> > I think threshold is a better name than limit here.
> >
> > > +{
> > > +     unsigned long lose_pg = 0;
> >
> > The pages we count are not lost, they just don't have a node ID assigned.
> > I'm inclined to use plain nr_pages rather than trying to invent a
> > descriptive yet short name here.
> >
> > > +     unsigned long start_pfn, end_pfn;
> > > +     int nid, i;
> > > +
> > > +     /* calculate lose page */
> > > +     for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
> > > +             if (nid == NUMA_NO_NODE)
> > > +                     lose_pg += end_pfn - start_pfn;
> > > +     }
> > > +
> > > +     if (lose_pg >= limit) {
> >
> > The caller defines the limit in bytes, but here you compare it with pages.
> >
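Shifting the page count to bytes on one side would keep the comparison
consistent, e.g. (a sketch, using the nr_pages/threshold names suggested
above):

	if (nr_pages << PAGE_SHIFT >= threshold) {
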
> > > +             pr_err("NUMA: We lost %ld pages.\n", lose_pg);
> >
> > I believe a better message would be:
> >
> >                 mem_size_mb = memblock_phys_mem_size() >> 20;
> >                 pr_err("NUMA: no nodes coverage for %luMB of %luMB RAM\n",
> >                        (nr_pages << PAGE_SHIFT) >> 20, mem_size_mb);
> >
> >
> > > +             return false;
> > > +     }
> > > +
> > > +     return true;
> > > +}
> > > +
> > > +
> > >  /**
> > >   * memblock_isolate_range - isolate given range into disjoint memblocks
> > >   * @type: memblock type to isolate range for
> > > _
> > >
> > > Patches currently in -mm which might be from zhiguangni01@xxxxxxxxx are
> > >
> > > numa-improve-the-efficiency-of-calculating-pages-loss.patch
> > >
> >
> > --
> > Sincerely yours,
> > Mike.

-- 
Sincerely yours,
Mike.


