Re: [PATCH v2 2/9] mm/vmstat: show start_pfn when zone spans pages


 



On 01.10.22 03:28, Doug Berger wrote:
> On 9/29/2022 1:15 AM, David Hildenbrand wrote:
>> On 29.09.22 00:32, Doug Berger wrote:
>>> A zone that overlaps with another zone may span a range of pages
>>> that are not present. In this case, displaying the start_pfn of
>>> the zone allows the zone page range to be identified.
>>
>> I don't understand the intention here.
>>
>> "/* If unpopulated, no other information is useful */"
>>
>> Why would the start pfn be of any use here?
>>
>> What is the user visible impact without that change?
> Yes, this is very subtle. I only caught it while testing some
> pathological cases.
>
> If you take the example system:
> The 7278 device has four ARMv8 CPU cores in an SMP cluster and two
> memory controllers (MEMCs). Each MEMC is capable of controlling up to
> 8GB of DRAM. An example 7278 system might have 1GB on each controller,
> so an arm64 kernel might see 1GB on MEMC0 at 0x40000000-0x7FFFFFFF and
> 1GB on MEMC1 at 0x300000000-0x33FFFFFFF.
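
Just to restate that layout in PFN terms (assuming the usual 4KiB pages,
which also matches the page counts you quote further below):

  MEMC0: 0x040000000-0x07fffffff -> PFNs 0x40000-0x7ffff   (262144 pages)
  MEMC1: 0x300000000-0x33fffffff -> PFNs 0x300000-0x33ffff (262144 pages)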


Okay, thanks. You should make it clearer in the patch description -- especially how this relates to DMB. Having that said, I still have to digest your examples:

> Placing a DMB on MEMC0 with 'movablecore=256M@0x70000000' will lead to
> the ZONE_MOVABLE zone spanning from 0x70000000-0x33fffffff and the
> ZONE_NORMAL zone spanning from 0x300000000-0x33fffffff.

Why is ZONE_MOVABLE spanning more than 256M? It should span

0x70000000-0x80000000

Or what am I missing?


> If instead you specified 'movablecore=256M@0x70000000,512M' you would
> get the same ZONE_MOVABLE span, but the ZONE_NORMAL would now span
> 0x300000000-0x32fffffff. The requested 512M of movablecore would be
> divided into a 256MB DMB at 0x70000000 and a 256MB "classic" movable
> zone, whose start would be displayed in the bootlog as:
> [    0.000000] Movable zone start for each node
> [    0.000000]   Node 0: 0x000000330000000


Okay, so that's the movable zone range excluding DMB.
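
If I follow the arithmetic under those assumptions: MEMC1 ends at
0x340000000, and the remaining 256MB of "classic" movablecore is taken
from the top of memory, so

  0x340000000 - 0x10000000 (256MB) = 0x330000000

which matches the bootlog line above and leaves ZONE_NORMAL with present
pages only in 0x300000000-0x32fffffff.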


> Finally, if you specified the pathological
> 'movablecore=256M@0x70000000,1G@12G' you would still have the same
> ZONE_MOVABLE span, and the ZONE_NORMAL span would go back to
> 0x300000000-0x33fffffff. However, because the second DMB (1G@12G)
> completely overlaps ZONE_NORMAL, there would be no pages present in
> ZONE_NORMAL and /proc/zoneinfo would report ZONE_NORMAL 'spanned
> 262144', but not where those pages are. This commit adds 'start_pfn'
> back to /proc/zoneinfo for ZONE_NORMAL so the span has context.

... but why? If there are no pages present, there is no ZONE_NORMAL we care about. The zone span should be 0. Doesn't this rather indicate that there is a zone span processing issue in your DMB implementation?

Special-casing zones based on DMBs feels wrong. But most probably I am missing something important :)
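
For anyone reading along without the patch in front of them, I assume the
change is essentially along these lines in zoneinfo_show_print() in
mm/vmstat.c -- a rough sketch based on the patch subject and the comment
quoted above, not the actual diff:

	/* If unpopulated, no other information is useful */
	if (!populated_zone(zone)) {
		/*
		 * Sketch: also report where an unpopulated-but-spanning
		 * zone starts, so that "spanned N" in /proc/zoneinfo has
		 * some context.
		 */
		if (zone->spanned_pages)
			seq_printf(m, "\n  start_pfn:           %lu",
				   zone->zone_start_pfn);
		seq_putc(m, '\n');
		return;
	}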

--
Thanks,

David / dhildenb



