Re: [PATCH v2 0/3] m68k/mm: switch from DISCONTIGMEM to SPARSEMEM

Hi Geert,

On Fri, Jul 12, 2019 at 04:28:26PM +0200, Geert Uytterhoeven wrote:
Hi Mike,

On Fri, Jul 5, 2019 at 9:25 PM Mike Rapoport <rppt@xxxxxxxxxxxxx> wrote:
On Mon, Jul 01, 2019 at 01:56:25PM +0200, Geert Uytterhoeven wrote:
On Sun, Jun 30, 2019 at 10:58 AM Mike Rapoport <rppt@xxxxxxxxxxxxx> wrote:
On Sun, Jun 30, 2019 at 12:54:49PM +1200, Michael Schmitz wrote:
Am 29.06.2019 um 23:30 schrieb Mike Rapoport:
On Thu, Jun 20, 2019 at 06:46:28PM +0200, Geert Uytterhoeven wrote:
On Wed, Jun 19, 2019 at 4:18 PM Mike Rapoport <rppt@xxxxxxxxxxxxx> wrote:
On Wed, Jun 19, 2019 at 09:39:40AM +0200, Geert Uytterhoeven wrote:
On Wed, Jun 19, 2019 at 9:06 AM Geert Uytterhoeven <geert@xxxxxxxxxxxxxx> wrote:
On Tue, Jun 18, 2019 at 8:10 AM Mike Rapoport <rppt@xxxxxxxxxxxxx> wrote:

Thanks, that hack did fix CONFIG_SINGLE_MEMORY_CHUNK=y.

Back to sparsemem...

With CONFIG_SINGLE_MEMORY_CHUNK=n, and CONFIG_SPARSEMEM=y,
it also fails.  Diff between working single memory chunk and failing
sparsemem:

  -Memory: 7796K/12288K available (2555K kernel code, 259K rwdata, 700K rodata, 136K init, 153K bss, 4492K reserved, 0K cma-reserved)
  +Memory: 7816K/131072K available (2556K kernel code, 261K rwdata, 700K rodata, 136K init, 157K bss, 123256K reserved, 0K cma-reserved)

Oops, looks like it thinks there's memory from 0x00000000-0x08000000,
instead of 0x07400000-0x08000000.

Yeah, it seems I've made a mistake in the zone/hole size calculations.

Can you try this:

diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 87d09942be5c..bf438a0da173 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -229,10 +229,8 @@ static void m68k_free_area_init(unsigned long max_addr)
    memblock_set_bottom_up(true);

    zones_size[ZONE_DMA] = max_addr >> PAGE_SHIFT;
-    if (m68k_num_memory > 1) {
-            holes_size = max_addr - memblock_phys_mem_size();
-            zholes_size[ZONE_DMA] = holes_size >> PAGE_SHIFT;
-    }
+    holes_size = max_addr - memblock_phys_mem_size();
+    zholes_size[ZONE_DMA] =  >> PAGE_SHIFT;

gcc fails to parse that line. Lost a holes_size, perhaps?

Oops, indeed.

Thanks, we're making progress.  My system is booting again from the hard drive
with CONFIG_SINGLE_MEMORY_CHUNK=n.
When booting from an initrd, it still crashes, see attached dmesg.

The initrd memory is only reserved after the memmap has been allocated, so the
initrd gets overwritten. Can you try this hack please:

[...]
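(The snipped hunk is not reproduced here; the general idea is to make sure the
ramdisk is reserved in memblock before the memmap gets allocated rather than
afterwards, very roughly along these lines in arch/m68k/kernel/setup_mm.c:)

/*
 * Sketch only, not the actual hack from the mail: reserve the ramdisk
 * early so that later memblock allocations (e.g. the memmap) cannot
 * hand out its pages.
 */
#ifdef CONFIG_BLK_DEV_INITRD
	if (m68k_ramdisk.size)
		memblock_reserve(m68k_ramdisk.addr, m68k_ramdisk.size);
#endif

	/* ... */

	paging_init();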

Thanks, will try after my holidays.

Hence still to fix:
  1. Proper solution for CONFIG_SINGLE_MEMORY_CHUNK=y,
  2. CONFIG_SINGLE_MEMORY_CHUNK=n and initrd.

Even with those fixes I'm still concerned about the SECTION_SIZE_BITS and
MAX_PHYSMEM_BITS definitions.

Without implementing vmemmap support, the difference between the two is
limited to 8 bits. That means either the minimal section size would be 16M, or
the maximal physical memory size would be limited to 1G. I'm not familiar
enough with the m68k machine variants to say whether either of these
assumptions is acceptable.
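Roughly speaking, the two options would look something like this (a sketch of
a hypothetical arch/m68k/include/asm/sparsemem.h; the values are only for
illustration and not taken from the actual patch):

/*
 * Hypothetical sketch, not the real patch.  Without vmemmap the section
 * number has to be encoded in page->flags, which is what limits
 * MAX_PHYSMEM_BITS - SECTION_SIZE_BITS to 8.
 */
#if 1	/* keep the full 4 GiB physical range, pay with 16 MiB sections */
#define SECTION_SIZE_BITS	24	/* 2^24 = 16 MiB per section */
#define MAX_PHYSMEM_BITS	32	/* 2^32 = 4 GiB addressable  */
#else	/* smaller 4 MiB sections, but physical memory capped at 1 GiB */
#define SECTION_SIZE_BITS	22	/* 2^22 = 4 MiB per section  */
#define MAX_PHYSMEM_BITS	30	/* 2^30 = 1 GiB addressable  */
#endif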

While an Amiga could, in theory, have ca. 3.8 GiB of RAM, in practice
it's limited to 1 GiB, but most machines have only a fraction of that.
AFAIK, other m68k machines are similar.  So a limit of 1 GiB sounds fine
to me.

But what is the impact of the minimal section size? What does it really
mean?
My A4000 has 12 MiB of RAM at 0x7400000, and that seems to work, so it does
not mean that address and size must be multiples of 16 MiB?

Memory configuration varies wildly among machines.
IIRC, some Macs can have several discontiguous 1 MiB blocks.

Each section has a contiguous memory map for [section_start, section_end).
The section_start is SECTION_SIZE * section_nr.
The section_end is either SECTION_SIZE * (section_nr + 1) if the entire
section is populated or the end address of the memory chunk belonging to
that section.

For instance, with a SECTION_SIZE of 16MiB your A4000 would have
section_start at 0x7400000 and section_end at 0x8000000.
If we were using, say, 8MiB sections, it would have two sections populated:
[0x7400000, 0x7800000) and [0x7800000, 0x8000000), since the 8MiB section
boundary falls at 0x7800000.
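To make that concrete, here is a small userspace sketch (plain C, nothing from
the kernel; show_sections() is just a made-up helper) that prints the
populated part of every section a memory chunk touches:

#include <stdio.h>

/*
 * Illustration only: for a memory chunk and a given SECTION_SIZE_BITS,
 * print the populated part of each section the chunk touches, following
 * the description above.  Sections are aligned to SECTION_SIZE.
 */
static void show_sections(unsigned long chunk_start, unsigned long chunk_size,
			  unsigned int section_size_bits)
{
	unsigned long section_size = 1UL << section_size_bits;
	unsigned long chunk_end = chunk_start + chunk_size;
	unsigned long nr;

	for (nr = chunk_start >> section_size_bits;
	     nr <= (chunk_end - 1) >> section_size_bits; nr++) {
		unsigned long sec_start = nr * section_size;
		unsigned long sec_end = sec_start + section_size;
		unsigned long start = sec_start > chunk_start ? sec_start : chunk_start;
		unsigned long end = sec_end < chunk_end ? sec_end : chunk_end;

		printf("%luM sections: section %lu populated for [%#lx, %#lx)\n",
		       section_size >> 20, nr, start, end);
	}
}

int main(void)
{
	/* A4000: 12 MiB of RAM starting at 0x7400000 */
	show_sections(0x7400000UL, 12UL << 20, 24);	/* 16 MiB sections */
	show_sections(0x7400000UL, 12UL << 20, 23);	/*  8 MiB sections */
	return 0;
}

For the A4000 chunk this prints one 16MiB section (number 7) populated for
[0x7400000, 0x8000000), and two 8MiB sections (14 and 15) populated for
[0x7400000, 0x7800000) and [0x7800000, 0x8000000).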

The issue with a section size that is too big is that on machines with small
chunks of discontiguous memory separated by less than SECTION_SIZE, those
chunks will map to the same section, which causes the creation of an unused
memmap for the hole between them.

E.g. with two chunks of 1MiB located at 0 and at 14MiB, we'll have a single
section spanning 15MiB, with a wasted memory map covering the hole between
1MiB and 14MiB.
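Back-of-the-envelope, assuming 4KiB pages and roughly 32 bytes per struct page
on a 32-bit machine: the 13MiB hole is 13 * 256 = 3328 pages, so about 104KiB
of memmap would describe memory that isn't there.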
 
On the other hand, presuming we want MAX_PHYSMEM_BITS set to 32, making
SECTION_SIZE smaller won't work because we would run out of space in the
page flags :(
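To put rough numbers on it (my estimate, not from the patch): page->flags is
32 bits on m68k, and once the actual page flag bits (a bit over 20 of them)
and the zone index are carved out, only about 8 bits remain to hold the
section number, hence MAX_PHYSMEM_BITS - SECTION_SIZE_BITS <= 8 without
vmemmap.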

Thanks!

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@xxxxxxxxxxxxxx

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

-- 
Sincerely yours,
Mike.



