>> 2. You do the kexec. The kexec kernel will only operate on a reserved
>> memory region (reserved via e.g., kernel cmdline crashkernel=128M).
>
> I think you are merging the kexec and kdump behaviours.
> (Wrong terminology? The things behind 'kexec -l Image' and 'kexec -p Image')

Oh, I see - I think your example below clarifies things. Something like
that should go in the cover letter if this patch ends up being required :)

(I missed that the problematic part is the "random" addresses passed by
user space to the kernel, where it wants data to be loaded to on kexec -e)

> For kdump, yes, the new kernel is loaded into the crashkernel
> reservation, and confined to it.
>
> For regular kexec, the new kernel can be loaded anywhere in memory.
> There might be a difference with how this works on arm64...
>
> The regular kexec kernel isn't stored in its final location when it's
> loaded; it's relocated there when the image is executed. The
> target/destination memory may have been removed in the meantime.
>
> (an example recipe below should clarify this)
>
>> Is it that in 2., the reserved memory region (for the crashkernel) could
>> have been offlined in the meantime?
>
> No, for kdump: the crashkernel reservation is PG_reserved, and it's not
> something mm knows how to move, so that region can't be taken offline.
>
> (On arm64 we additionally prevent the boot-memory from being removed, as
> it is all described as present by UEFI. The crashkernel reservation
> would always be from this type of memory)

Right.

> This is about a regular kexec; any crashdump reservation is irrelevant.
> The kexec kernel is temporarily stored out of line, then relocated when
> executed.
>
> A recipe so that we're at least on the same terminal! This is on a TX2
> running arm64's for-next/core, using Qemu-TCG to emulate x86. (Sorry for
> the bizarre config; it's because Qemu supports hotremove on x86, but not
> yet on arm64.)
> Insert the memory:
>
> (qemu) object_add memory-backend-ram,id=mem1,size=1G
> (qemu) device_add pc-dimm,id=dimm1,memdev=mem1
>
> | root@vm:~# free -m
> |           total    used    free    shared ...
> | Mem:        918      52     814         0 ...
> | Swap:         0       0       0
>
> Bring it online:
>
> | root@vm:~# cd /sys/devices/system/memory/
> | root@vm:/sys/devices/system/memory# for F in memory3*; do echo \
> | online_movable > $F/state; done
>
> | Built 1 zonelists, mobility grouping on.  Total pages: 251049
> | Policy zone: DMA32
>
> | -bash: echo: write error: Invalid argument
> | root@vm:/sys/devices/system/memory# free -m
> |           total    used    free    shared ...
> | Mem:       1942      53    1836         0 ...
> | Swap:         0       0       0
>
> Load kexec:
>
> | root@vm:/sys/devices/system/memory# kexec -l /root/bzImage --reuse-cmdline

I assume this will trigger

  kexec_load -> do_kexec_load -> kimage_load_segment ->
  kimage_load_normal_segment -> kimage_alloc_page -> kimage_alloc_pages

which will just allocate a bunch of pages and mark them reserved. AFAIK,
all of these allocations are unmovable, so none of the kexec segment
allocations will actually end up on your DIMM (as it was onlined
online_movable). So the loaded image (with its segments) from user space
won't be problematic and won't get placed on your DIMM.

The problematic part is (via man kexec_load) "mem and memsz specify a
physical address range that is the target of the copy" - i.e., the place
where the image will be "assembled" when doing the reboot.

Understood :)

> Press the Attention button to request removal:
>
> (qemu) device_del dimm1
>
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Built 1 zonelists, mobility grouping on.  Total pages: 233728
> | Policy zone: DMA32
>
> The memory is gone:
>
> | root@vm:/sys/devices/system/memory# free -m
> |           total    used    free    shared ...
> | Mem:        918      89     769         0 ...
> | Swap:         0       0       0
>
> Trigger kexec:
>
> | root@vm:/sys/devices/system/memory# kexec -e
>
> [...]
>
> | sd 0:0:0:0: [sda] Synchronizing SCSI cache
> | kexec_core: Starting new kernel
>
> ... and Qemu restarts the platform firmware instead of proceeding with
> kexec. (I assume this is a triple fault.)
>
> You can use mem-min and mem-max to control where kexec's user space
> will place the memory.
>
> If you apply this patch, the above sequence will fail at the device
> remove step, as the physical addresses match the loaded kexec image:
>
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | Offlined Pages 32768
> | kexec_core: Memory region in use
> | kexec_core: Memory region in use

Okay, so I assume the kexec user-space tool provided target kernel
addresses for segments that reside on the DIMM.

> | memory memory39: Offline failed.
> | Built 1 zonelists, mobility grouping on.  Total pages: 299212
> | Policy zone: Normal
>
> | root@vm:/sys/devices/system/memory# free -m
> |           total    used    free    shared ...
> | Mem:       1942      90    1793         0 ...
> | Swap:         0       0       0
>
> I can't remove the DIMM, because we failed to offline it:

I wonder if we should instead make "kexec -e" fail. It tries to touch
random system memory. Refusing to offline MOVABLE memory should be
avoided - and what kexec does here sounds dangerous to me (allowing it
to write to random system memory).

Roughly what I am thinking is this:

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index ba1d91e868ca..70c39a5307e5 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1135,6 +1135,10 @@ int kernel_kexec(void)
 		error = -EINVAL;
 		goto Unlock;
 	}
+	if (!kexec_image_validate()) {
+		error = -EINVAL;
+		goto Unlock;
+	}
 
 #ifdef CONFIG_KEXEC_JUMP
 	if (kexec_image->preserve_context) {

kexec_image_validate() would go over all segments and validate that the
involved pages are actually valid memory (pfn_to_online_page()).
All we have to do is protect from memory hotplug until we switch to the
new kernel. Will probably need some thought.

But it will actually also bail out when user space passes wrong physical
memory addresses, instead of triple-faulting silently.

-- 
Thanks,

David / dhildenb

_______________________________________________
kexec mailing list
kexec@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/kexec