On Tue, Nov 28, 2023 at 01:51:32PM +0300, Serge Semin wrote:
> On Tue, Nov 28, 2023 at 09:13:39AM +0200, Mike Rapoport wrote:
> > On Fri, Nov 24, 2023 at 02:18:44PM +0300, Serge Semin wrote:
> > > Do you mind posting your physical memory layout?
>
> I actually already did in response to the last part of your previous
> message. You must have missed it. Here is the copy of the message:

Sorry, for some reason I didn't scroll down your previous mail :)

> > On Fri, Nov 24, 2023 at 02:18:44PM +0300, Serge Semin wrote:
> > > On Fri, Nov 24, 2023 at 10:19:00AM +0200, Mike Rapoport wrote:
> > > ...
> > > >
> > > > My guess is that your system has a hole in the physical memory mappings and
> > > > with FLATMEM that hole will have essentially unused struct pages, which are
> > > > initialized by init_unavailable_range(). But from mm perspective this is
> > > > still a hole even though there are some MMIO ranges in that hole.
> > >
> > > Absolutely right. Here is the physical memory layout in my system:
> > > 0 - 128MB: RAM
> > > 128MB - 512MB: Memory-mapped IO
> > > 512MB - 768MB..8.256GB: RAM
> > >
> > > >
> > > > Now, if that hole is large you are wasting memory for unused memory map and
> > > > it may be worth considering using SPARSEMEM.
> > >
> > > Do you think it's worth moving to the sparse memory configuration in
> > > order to save the 384MB of mapping with the 16K page model? AFAIU the
> > > flat memory config is more performant. Performance is critical in most
> > > of the SoC applications, especially when using 10G Ethernet or
> > > high-speed PCIe devices.
>
> Could you also answer my question above regarding using
> sparsemem instead on my hw memory layout?

Currently MIPS defines the section size as 256MB, so with your memory
layout with SPARSEMEM there will be two sections of 256MB, at 0 and at
512MB, so you'll save the memory map for 256M, which is roughly 1M with
16k pages (256M / 16K = 16384 struct pages, at 64 bytes each, assuming
the typical struct page size). With SPARSEMEM the pfn_to_page() and
page_to_pfn() are a bit longer in terms of assembly instructions, but I
really doubt you'll notice any performance difference in real-world
applications.

> > With FLATMEM the memory map exists for that
> > hole and hence pfn_valid() returns 1 for the MMIO range as well. That makes
> > __update_cache() check the folio state, and that check would fail if the memory
> > map contained garbage. But since the hole in the memory map is initialized
> > with init_unavailable_range() you get a valid struct page/struct folio and
> > everything is fine.
>
> Right. That's what currently happens on MIPS32 and that's what I had
> to fix in the framework of this series by the next patch:
> Link: https://lore.kernel.org/linux-mips/20231122182419.30633-4-fancer.lancer@xxxxxxxxx/
> The flatmem version of the pfn_valid() method has been broken due to
> max_mapnr being uninitialized before mem_init() is called. So
> init_unavailable_range() didn't initialize the pages at the early
> bootup stage. Thus afterwards, when max_mapnr had finally got a valid
> value, any attempt to call the __update_cache() method on the MMIO
> memory hole caused an unaligned access crash.

The fix for max_mapnr makes pfn_valid() == 1 for the entire memory map
and this fixes up the struct pages in the hole.
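FWIW, that's easy to see from the code: the MIPS FLATMEM pfn_valid() is
essentially a range check against max_mapnr, roughly the following
(simplified from arch/mips/include/asm/page.h, the exact details may
differ between kernel versions):

	static inline int pfn_valid(unsigned long pfn)
	{
		/* avoid <linux/mm.h> include hell */
		extern unsigned long max_mapnr;
		unsigned long pfn_offset = ARCH_PFN_OFFSET;

		/* with max_mapnr == 0 at early boot this is never true */
		return pfn >= pfn_offset && pfn < max_mapnr;
	}

So while max_mapnr was still zero, init_unavailable_range() saw every
pfn as invalid and skipped the hole, and the stale struct pages blew up
later in __update_cache(), as you described.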
> >
> > With that, the init_unavailable_range() docs need not mention IO space at
> > all, they should mention holes within FLATMEM memory map.
>
> Ok. I'll resend the patch mentioning the flatmem holes instead of
> the IO-spaces.
>
> >
> > As for SPARSEMEM, if the hole does not belong to any section, pfn_valid()
> > will be false for it and __update_cache() won't try to access the memory map.
>
> Ah, I see. In case of the SPARSEMEM config another version of
> pfn_valid() will be called. It's defined in the include/linux/mmzone.h
> header file. Right? If so then no problem there indeed.

Yes, SPARSEMEM uses the pfn_valid() defined in include/linux/mmzone.h.
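For reference, that pfn_valid() is based on the presence of a section
rather than on max_mapnr; roughly the following (simplified from
include/linux/mmzone.h, the real function has a few more checks):

	static inline int pfn_valid(unsigned long pfn)
	{
		struct mem_section *ms;

		if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
			return 0;
		ms = __pfn_to_section(pfn);
		/* a hole not covered by any section fails right here */
		if (!valid_section(ms))
			return 0;
		return early_section(ms) || pfn_section_valid(ms, pfn);
	}

Since in your layout no section would cover the 256MB - 512MB part of
the hole, valid_section() fails for it and __update_cache() never
touches the memory map there.

> -Serge(y)

--
Sincerely yours,
Mike.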