On Tue, Jun 7, 2022 at 7:41 AM Alexander Dahl <ada@xxxxxxxxxxx> wrote:
> On Fri, Jun 03, 2022 at 08:11:31PM +0200, Arnd Bergmann wrote:
> > On Fri, Jun 3, 2022 at 7:29 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> >
> > I think this is a case of "patches welcome". Nobody has really needed
> > this so far, but as even the smaller machines are slowly migrating from
> > 32-bit to 64-bit cores, optimizing this will get interesting for more
> > developers. There are probably other low-hanging
> > fruit that you can address after figuring out.
>
> The SiP variants of at91 SAMA5D2 (armv7) or SAM9x60 (armv5) come with
> 64 MiB or 128 MiB, and given the latter is a new SoC announced only
> two or three years ago, requiring at least 256 MiB would be at best
> unfortunate. Given those SoCs are used in industrial applications
> with very long support times, I think 32bit ARM will stay for years,
> even with new products.

Yes, of course, and there is nothing wrong with that. We already see
Cortex-A7 cores down to 7nm, all running Linux, and I expect there will
be another 5 to 10 years of new 32-bit chips, then another 10 years of
people putting the existing chips into production, and after that a
slow decline of users updating their kernels, until 32-bit hardware
eventually becomes too expensive to keep supporting in the kernel.

On hardware that can run both 32-bit and 64-bit kernels, there are
pretty much only upsides to running 64-bit kernels (with 32-bit Thumb
user space), but there is a memory overhead for the kernel itself,
usually some 20 to 30 MB. Reducing this difference will hopefully leave
fewer users stuck on 32-bit kernels by the time those become too
painful to use.

> > One observation I made is that modern kernels don't seem to deal as
> > well as older ones with low-memory situations, so even if you manage
> > to free up most of your 32MB, it might still not work reliably.
>
> Since we are using the SAMA5D27C-D5M in production, I would also be
> interested in support for running 32 bit ARM with recent kernels on
> systems with 64 MiB or even 32 MiB of memory. If there are specific
> things to test, you can let me know.

I don't have anything specific, just the general feeling that there is
something wrong with memory reclaim in smaller configurations. A system
that is fine right after bootup can run for a long time without running
out of memory, but one thing I have observed in the past is that after
a process manages to consume all RAM and swap space, killing that one
task does not restore the system to the state it was in before, and it
remains sluggish.

Another issue is that I think we have more broken error handling for
failed in-kernel memory allocations than we used to. After you see the
kernel fail an allocation, I would usually recommend a reboot, and my
feeling is that older kernels handled this more gracefully.

Both of these could of course just be side effects of kernel bloat,
where a particular workload behaves worse than it used to simply
because it now needs more RAM to do the same thing.

      Arnd
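
As one concrete thing that could be tested along the lines discussed
above, here is a minimal memory-hog sketch (hypothetical, not posted in
the thread; the program name, the default chunk size, and the retry
delay are arbitrary choices): it keeps allocating and touching
anonymous memory until the OOM killer terminates it, so that
responsiveness and /proc/meminfo can be compared before the run and
after the process has been killed.

/*
 * Minimal memory-hog sketch (hypothetical, not from this thread):
 * allocate and touch anonymous memory until the OOM killer kills us.
 *
 * Build: gcc -O2 -o memhog memhog.c
 * Run:   ./memhog [chunk_mib]    (default: 16 MiB per allocation)
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	size_t chunk_mib = 16;
	size_t total_mib = 0;

	if (argc > 1)
		chunk_mib = strtoul(argv[1], NULL, 0);

	for (;;) {
		char *p = malloc(chunk_mib << 20);

		if (!p) {
			/* with strict overcommit, malloc may fail first */
			fprintf(stderr, "malloc failed after %zu MiB\n",
				total_mib);
			sleep(5);
			continue;
		}
		/* touch every byte so the pages are actually committed */
		memset(p, 0xaa, chunk_mib << 20);
		total_mib += chunk_mib;
		printf("allocated %zu MiB\n", total_mib);
	}

	return 0;
}

Comparing /proc/meminfo, swap usage, and general interactivity before
the run and again a few minutes after the hog has been OOM-killed
should show whether the system really returns to its earlier state,
which is the behaviour described above.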