On 10/25/24 at 05:11pm, David Hildenbrand wrote:
> This is based on "[PATCH v3 0/7] virtio-mem: s390 support" [1], which adds
> virtio-mem support on s390.
>
> The only "different than everything else" thing about virtio-mem on s390
> is kdump: the crash (2nd) kernel allocates+prepares the elfcore hdr
> during fs_init()->vmcore_init()->elfcorehdr_alloc(). Consequently, the
> crash kernel must detect the memory ranges of the crashed/panicked kernel
> to include via PT_LOAD in the vmcore.
>
> On other architectures, all RAM regions (boot + hotplugged) can easily be
> observed in the old (to-be-crashed) kernel (e.g., using /proc/iomem) to
> create the elfcore hdr.
>
> On s390, information about "ordinary" memory (heh, "storage") can be
> obtained by querying the hypervisor/ultravisor via SCLP/diag260, and
> that information is stored early during boot in the "physmem" memblock
> data structure.
>
> But virtio-mem memory is always detected by a device driver, which is
> usually built as a module. So in the crash kernel, this memory can only
> be properly detected once the virtio-mem driver has started up.
>
> The virtio-mem driver already supports "kdump mode", where it won't
> hotplug any memory but instead queries the device to implement the
> pfn_is_ram() callback, to avoid reading unplugged memory holes when
> reading the vmcore.
>
> With this series, if the virtio-mem driver is included in the kdump
> initrd -- which dracut already takes care of under Fedora/RHEL -- it will
> now detect the device RAM ranges on s390 once it probes the devices, and
> add them to the vmcore using the same callback mechanism we already have
> for pfn_is_ram().
>
> To add these device RAM ranges to the vmcore ("patch the vmcore"), we add
> new PT_LOAD entries that describe these memory ranges, and update all
> offsets and the vmcore size so that everything stays consistent.
>
> Note that makedumpfile is shaky with v6.12-rcX; I made the "obvious"
> things (e.g., free page detection) work again while testing, as
> documented in [2].
>
> Creating the dumps using makedumpfile seems to work fine, and the dump
> regions (PT_LOAD) are as expected. I have yet to check in more detail
> whether the created dumps are good (IOW, that the right memory was
> dumped), but it looks like makedumpfile reads the right memory when
> interpreting the kernel data structures, which is promising.
>
> Patch #1 -- #6 are vmcore preparations and cleanups.

Thanks for CC-ing me. I will review patches #1 -- #6, the vmcore part, next week.
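
As background for other readers on the list: the pfn_is_ram() mechanism the
cover letter refers to is the vmcore_cb interface from
include/linux/crash_dump.h. Below is a minimal, self-contained sketch of how
a driver registers such a callback in the kdump kernel; it is an illustration
only, not the virtio-mem code, and the single demo pfn range stands in for
the real "query the device which blocks are plugged" logic.

/*
 * Sketch only (assumption: not the actual virtio-mem implementation) of
 * the vmcore_cb / pfn_is_ram() mechanism. A real driver queries its
 * device here; this demo just treats one hypothetical pfn range as RAM.
 */
#include <linux/crash_dump.h>
#include <linux/module.h>

static unsigned long demo_start_pfn;	/* hypothetical plugged range */
static unsigned long demo_end_pfn;

static bool demo_pfn_is_ram(struct vmcore_cb *cb, unsigned long pfn)
{
	/* Returning false makes /proc/vmcore skip the hole (reads as zeroes). */
	return pfn >= demo_start_pfn && pfn < demo_end_pfn;
}

static struct vmcore_cb demo_vmcore_cb = {
	.pfn_is_ram = demo_pfn_is_ram,
};

static int __init demo_init(void)
{
	/* In a real driver this would happen from device probe in the 2nd kernel. */
	register_vmcore_cb(&demo_vmcore_cb);
	return 0;
}
module_init(demo_init);

static void __exit demo_exit(void)
{
	unregister_vmcore_cb(&demo_vmcore_cb);
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");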
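
To make the "patch the vmcore" step concrete for anyone less familiar with the
ELF core format: each dumpable physical range is described by a PT_LOAD
program header. The userspace-style sketch below (plain <elf.h>, not the
kernel's code from this series) shows roughly what such an entry for one
device RAM range would contain; the address, size, and offset values are made
up for illustration.

/*
 * Illustration only: a PT_LOAD program header describing one physical
 * memory range in an ELF core header. The kernel uses its own Elf64_Phdr
 * from <linux/elf.h>; the field meanings are the same.
 */
#include <elf.h>
#include <stdio.h>

/* Build a PT_LOAD phdr for physical range [paddr, paddr + size). */
static Elf64_Phdr device_ram_phdr(unsigned long long paddr,
				  unsigned long long size,
				  unsigned long long file_off)
{
	Elf64_Phdr phdr = {
		.p_type   = PT_LOAD,
		.p_flags  = PF_R | PF_W | PF_X,
		.p_offset = file_off,	/* where the data lives in the dump file */
		.p_vaddr  = 0,		/* no virtual address recorded here */
		.p_paddr  = paddr,	/* physical start of the range */
		.p_filesz = size,
		.p_memsz  = size,
		.p_align  = 0,
	};
	return phdr;
}

int main(void)
{
	/* Hypothetical 256 MiB device RAM range at 0x200000000. */
	Elf64_Phdr p = device_ram_phdr(0x200000000ULL, 256ULL << 20, 0x10000);

	printf("PT_LOAD paddr=0x%llx filesz=0x%llx offset=0x%llx\n",
	       (unsigned long long)p.p_paddr,
	       (unsigned long long)p.p_filesz,
	       (unsigned long long)p.p_offset);
	return 0;
}

Appending such an entry is what forces the "update all offsets and the vmcore
size" part mentioned above: every following range's p_offset shifts once an
extra program header is inserted in front of the data.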