Re: [PATCH v6] mm/pgmap: Use correct alignment when looking at first pfn from a region

On Sat, Nov 30, 2019 at 3:13 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Wed, 25 Sep 2019 09:21:02 +0530 "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx> wrote:
>
> > Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> writes:
> >
> > > On Tue, 17 Sep 2019 21:01:29 +0530 "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx> wrote:
> > >
> > >> vmem_altmap_offset() adjusts the section aligned base_pfn offset.
> > >> So we need to make sure we account for the same when computing base_pfn.
> > >>
> > >> ie, for altmap_valid case, our pfn_first should be:
> > >>
> > >> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
> > >
> > > What are the user-visible runtime effects of this change?
> >
> > This was found by code inspection. If the pmem region is not correctly
> > section aligned we can skip pfns while iterating device pfn using
> >       for_each_device_pfn(pfn, pgmap)
> >
> >
> > I still would want Dan to ack the change though.
> >
>
> Dan?
>
>
> From: "Aneesh Kumar K.V" <aneesh.kumar@xxxxxxxxxxxxx>
> Subject: mm/pgmap: use correct alignment when looking at first pfn from a region
>
> vmem_altmap_offset() adjusts the section aligned base_pfn offset.  So we
> need to make sure we account for the same when computing base_pfn.
>
> ie, for altmap_valid case, our pfn_first should be:
>
> pfn_first = altmap->base_pfn + vmem_altmap_offset(altmap);
>
> This was found by code inspection. If the pmem region is not correctly
> section aligned we can skip pfns while iterating device pfn using
>
>         for_each_device_pfn(pfn, pgmap)
>
> [akpm@xxxxxxxxxxxxxxxxxxxx: coding style fixes]
> Link: http://lkml.kernel.org/r/20190917153129.12905-1-aneesh.kumar@xxxxxxxxxxxxx
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxx>
> Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
>  mm/memremap.c |   12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
>
> --- a/mm/memremap.c~mm-pgmap-use-correct-alignment-when-looking-at-first-pfn-from-a-region
> +++ a/mm/memremap.c
> @@ -55,8 +55,16 @@ static void pgmap_array_delete(struct re
>
>  static unsigned long pfn_first(struct dev_pagemap *pgmap)
>  {
> -       return PHYS_PFN(pgmap->res.start) +
> -               vmem_altmap_offset(pgmap_altmap(pgmap));
> +       const struct resource *res = &pgmap->res;
> +       struct vmem_altmap *altmap = pgmap_altmap(pgmap);
> +       unsigned long pfn;
> +
> +       if (altmap)
> +               pfn = altmap->base_pfn + vmem_altmap_offset(altmap);
> +       else
> +               pfn = PHYS_PFN(res->start);

This would only be a problem if res->start is not subsection aligned.
Is that bug triggering in your case, or is this just inspection? Now
that subsections can be assumed as the minimum mapping granularity,
I'd rather clean up the implementation to eliminate altmap->base_pfn,
or at least assert that PHYS_PFN(res->start) and altmap->base_pfn are
always identical.

Otherwise ->base_pfn is supposed to be just a convenient way to recall
the bounds of the memory hotplug operation deeper in the vmemmap
setup.
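
To illustrate the assert-style alternative suggested above, here is a
minimal userspace sketch. The struct layouts, PHYS_PFN(), and
vmem_altmap_offset() are simplified stand-ins for the kernel
definitions (a 4K page size is assumed), not the real mm/memremap.c
code; the point is only the shape of the check:

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for the kernel types. */
struct resource { unsigned long start; };
struct vmem_altmap { unsigned long base_pfn, reserve, free; };
struct dev_pagemap {
	struct resource res;
	struct vmem_altmap altmap;
	int has_altmap;	/* stands in for pgmap_altmap() returning non-NULL */
};

#define PHYS_PFN(x) ((unsigned long)((x) >> 12))	/* assuming 4K pages */

/* Pages consumed by the altmap reservation, as in the kernel helper. */
static unsigned long vmem_altmap_offset(struct vmem_altmap *altmap)
{
	return altmap->reserve + altmap->free;
}

static unsigned long pfn_first(struct dev_pagemap *pgmap)
{
	struct vmem_altmap *altmap = pgmap->has_altmap ? &pgmap->altmap : 0;

	if (altmap) {
		/* Instead of silently preferring one view, assert that
		 * the resource start and the altmap base agree. */
		assert(PHYS_PFN(pgmap->res.start) == altmap->base_pfn);
		return altmap->base_pfn + vmem_altmap_offset(altmap);
	}
	return PHYS_PFN(pgmap->res.start);
}
```

With subsection-aligned hotplug the assert should never fire, making
the extra base_pfn bookkeeping redundant rather than a source of skew.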
