On Thu, Jul 20, 2023 at 02:13:25PM +0200, Michal Hocko wrote:
> On Wed 19-07-23 16:48:21, Ross Zwisler wrote:
> > On Wed, Jul 19, 2023 at 08:14:48AM +0200, Michal Hocko wrote:
> > > On Tue 18-07-23 16:01:06, Ross Zwisler wrote:
> > > [...]
> > > > I do think that we need to fix this collision between ZONE_MOVABLE
> > > > and memmap allocations, because this issue essentially makes the
> > > > movablecore= kernel command line parameter useless in many cases, as
> > > > the ZONE_MOVABLE region it creates will often actually be unmovable.
> > >
> > > movablecore is kinda hack and I would be more inclined to get rid of
> > > it rather than build more into it. Could you be more specific about
> > > your use case?
> >
> > The problem that I'm trying to solve is that I'd like to be able to get
> > kernel core dumps off machines (chromebooks) so that we can debug
> > crashes. Because the memory used by the crash kernel ("crashkernel="
> > kernel command line option) is consumed the entire time the machine is
> > booted, there is a strong motivation to keep the crash kernel as small
> > and as simple as possible. To this end I'm trying to get away without
> > SSD drivers, not having to worry about encryption on the SSDs, etc.
> >
> > So, the rough plan right now is:
> >
> > 1) During boot set aside some memory that won't contain kernel
> > allocations. I'm trying to do this now with ZONE_MOVABLE, but I'm open
> > to better ways.
> >
> > We set aside memory for a crash kernel & arm it so that the
> > ZONE_MOVABLE region (or whatever non-kernel region) will be set aside
> > as PMEM in the crash kernel. This is done with the
> > memmap=nn[KMG]!ss[KMG] kernel command line parameter passed to the
> > crash kernel.
> >
> > So, in my sample 4G VM system, I see:
> >
> >   # lsmem --split ZONES --output-all
> >   RANGE                                  SIZE  STATE REMOVABLE BLOCK NODE ZONES
> >   0x0000000000000000-0x0000000007ffffff  128M online       yes     0    0 None
> >   0x0000000008000000-0x00000000bfffffff  2.9G online       yes  1-23    0 DMA32
> >   0x0000000100000000-0x000000012fffffff  768M online       yes 32-37    0 Normal
> >   0x0000000130000000-0x000000013fffffff  256M online       yes 38-39    0 Movable
> >
> >   Memory block size:       128M
> >   Total online memory:       4G
> >   Total offline memory:      0B
> >
> > so I'll pass "memmap=256M!0x130000000" to the crash kernel.
> >
> > 2) When we hit a kernel crash, we know (hope?) that the PMEM region
> > we've set aside only contains user data, which we don't want to store
> > anyway. We make a filesystem in there, and create a kernel crash dump
> > using 'makedumpfile':
> >
> >   mkfs.ext4 /dev/pmem0
> >   mount /dev/pmem0 /mnt
> >   makedumpfile -c -d 31 /proc/vmcore /mnt/kdump
> >
> > We then set up the next full kernel boot to also have this same PMEM
> > region, using the same memmap kernel parameter. We reboot back into a
> > full kernel.
>
> Btw. How do you ensure that the address range doesn't get reinitialized
> by POST? Do you rely on kexec boot here?

I've been working under the assumption that I do need to do a full reboot
(not just another kexec boot) so that the devices in the system (NICs,
disks, etc.) are all reinitialized and don't carry over bad state from the
crash. I do know about the 'reset_devices' kernel command line parameter,
but wasn't sure that would be enough. From looking around it seems like
this is very driver + device dependent, so maybe I just need to test more.
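For reference, the "arm it" step in my plan above looks roughly like the
following with kexec-tools (just a sketch; the crash kernel/initrd paths
are placeholders, and the memmap range is the one from my 4G VM example):

  # Load the panic (crash) kernel, handing it the PMEM reservation and
  # asking drivers to reset their devices during the kexec boot.
  kexec -p /boot/crash-vmlinuz --initrd=/boot/crash-initrd.img \
        --append="reset_devices memmap=256M!0x130000000"

Whether reset_devices in that append line is actually sufficient is
exactly the open question above.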
In any case, you're right: if we do a full reboot and go through POST,
whether BIOS/UEFI/Coreboot/etc. zeroes that memory is system dependent,
and if it does, this feature won't work unless we kexec to the 3rd kernel.

I've also heard concerns around whether a full reboot will cause the
memory controller to reinitialize and potentially cause memory bit flips
or similar, though I haven't yet seen this myself. Has anyone seen such
bit flips / memory corruption due to a system reboot, or is this a
non-issue in your experience?

Lots to figure out, thanks for the help. :)