On 12/14/20 at 10:50am, Eric DeVolder wrote:
...
> The cell contents show the number of seconds it took for the system to
> process all of the 3840 memblocks. The value in parentheses is the
> number of kdump unload-then-reload operations per second.
>
>             1 480GiB DIMM     480 1GiB DIMMs
>     -------+-----------------+----------------+
>     RHEL7  | 181s (21.2 ops) | 389s (9.8 ops) |
>     -------+-----------------+----------------+
>     RHEL8  |  86s (44.7 ops) | 419s (9.2 ops) |
>     -------+-----------------+----------------+
>
> The scenario of adding 480 1GiB virtual DIMMs takes more time given the
> larger number of QEMU -> kernel -> udev -> kernel -> QEMU round trips;
> both runs take roughly 400s.
>
> The RHEL7 system processes all 3840 memblocks individually and performs
> 3840 kdump unload-then-reload operations.
>
> However, the RHEL8 data in the best-case scenario (1 480GiB DIMM)
> suggests that approximately 86/4 = 21 kdump unload-then-reload
> operations happened, and in the worst-case scenario (480 1GiB DIMMs)
> the data suggests that approximately 419/4 = 105 kdump
> unload-then-reload operations happened. For RHEL8, the final number of
> kdump unload-then-reload operations is 0.5% (21 of 3840) and 2.7% (105
> of 3840), respectively, of the RHEL7 system's count.
>
> The throttle approach is quite effective in reducing the number of
> kdump unload-then-reload operations. However, the kdump capture kernel
> is still reloaded multiple times, and each kdump capture kernel reload
> is a race window in which kdump can fail.
>
> A quick peek at Ubuntu 20.04 LTS reveals it has a 50-kdump-tools.rules
> that looks like:
>
>   SUBSYSTEM=="memory", ACTION=="online", PROGRAM="/usr/sbin/kdump-config try-reload"
>   SUBSYSTEM=="memory", ACTION=="offline", PROGRAM="/usr/sbin/kdump-config try-reload"
>   SUBSYSTEM=="cpu", ACTION=="add", PROGRAM="/usr/sbin/kdump-config try-reload"
>   SUBSYSTEM=="cpu", ACTION=="remove", PROGRAM="/usr/sbin/kdump-config try-reload"
>   SUBSYSTEM=="cpu", ACTION=="offline", PROGRAM="/usr/sbin/kdump-config try-reload"
>
> which produces behavior equivalent to RHEL7's, whereby every event
> results in a kdump capture kernel reload.
>
> Fedora 33 and CentOS 8-stream behave the same as RHEL8.
>
> Perhaps a better solution is to rewrite the vmcoreinfo structure that
> contains the memory and CPU layout information as those changes to
> memory and CPUs occur. Rewriting vmcoreinfo is an in-kernel activity
> and would certainly avoid the relatively large unload-then-reload
> times of the kdump capture kernel. The pointer to the vmcoreinfo
> structure is provided to the capture kernel via the elfcorehdr=
> parameter on the capture kernel cmdline. Rewriting the vmcoreinfo
> structure as well as rewriting the capture kernel cmdline parameter is
> needed to utilize this approach.

Great investigation and conclusion, and a very nice idea, as below. When
I read the first half of this mail, I thought maybe we could add a new
option to the kexec-tools utility for updating the elfcorehdr only when
hotplug udev events are detected. Then, coming to this part, I would say
yes, doing it inside the kernel looks better. Special handling for
hotplug looks necessary, as you have said. I will check what we can do
and give back some details. Thanks for doing this.
Thanks
Baoquan

> Based upon some amount of examining code, I think the challenges
> involved in updating the CPU and memory layout in-kernel are:
>
>  - adding call-outs on the add_memory()/try_remove_memory() and
>    cpu_up()/cpu_down() paths for notifying the kdump subsystem of
>    memory and/or CPU changes
>
>  - updating the struct kimage with the memory or CPU changes
>
>  - rewriting the vmcoreinfo structure from the data contained in
>    struct kimage, e.g. crash_prepare_elf64_headers()
>
>  - installing the updated vmcoreinfo struct via
>    kimage_crash_copy_vmcoreinfo() and rewriting the kdump kernel
>    cmdline in order to update the elfcorehdr= parameter with the new
>    address
>
> As I am not overly familiar with all the code paths involved yet, I'm
> sure the devil is in the details. However, due to the kexec_file_load
> syscall, it appears most of the infrastructure is already in place,
> and we essentially need to tap into it again for memory and CPU
> changes.
>
> It appears that this change could be applicable to both kexec_load and
> kexec_file_load; it has the potential to (eventually) simplify the
> userland kexec utility for kexec_load, and would eliminate the need
> for 98-kexec.rules and the associated churn.
>
> Comments please!
> eric
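
As a rough illustration of the first item in the list of challenges
above, the notification could ride on the existing memory and CPU
hotplug notifier infrastructure rather than on new calls placed
directly in add_memory()/try_remove_memory() and cpu_up()/cpu_down().
What follows is a minimal sketch of that idea, not the actual
implementation under discussion; crash_update_elfcorehdr() is a
hypothetical helper that would rebuild the ELF headers (e.g. with
crash_prepare_elf64_headers()) and install them into the loaded crash
image, per the remaining items in the list.

  #include <linux/cpuhotplug.h>
  #include <linux/init.h>
  #include <linux/memory.h>
  #include <linux/notifier.h>

  /* Hypothetical helper: regenerate the crash ELF headers from the
   * current memory/CPU layout and swap them into the loaded crash
   * image (see the remaining items in the list above).
   */
  static void crash_update_elfcorehdr(void)
  {
  }

  /* Memory hotplug: runs after a memory block goes online or offline. */
  static int crash_memhp_notifier(struct notifier_block *nb,
                                  unsigned long action, void *data)
  {
          switch (action) {
          case MEM_ONLINE:
          case MEM_OFFLINE:
                  crash_update_elfcorehdr();
                  break;
          }
          return NOTIFY_OK;
  }

  static struct notifier_block crash_memhp_nb = {
          .notifier_call = crash_memhp_notifier,
  };

  /* CPU hotplug: runs when a CPU comes online or goes offline. */
  static int crash_cpuhp_online(unsigned int cpu)
  {
          crash_update_elfcorehdr();
          return 0;
  }

  static int crash_cpuhp_offline(unsigned int cpu)
  {
          crash_update_elfcorehdr();
          return 0;
  }

  static int __init crash_hotplug_init(void)
  {
          int ret;

          register_memory_notifier(&crash_memhp_nb);
          ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
                                          "crash/hotplug",
                                          crash_cpuhp_online,
                                          crash_cpuhp_offline);
          return ret < 0 ? ret : 0;
  }
  subsys_initcall(crash_hotplug_init);

One appeal of the notifier route is that the hotplug core stays
untouched and the kdump side only reacts after a change has completed,
keeping the (still to be designed) header rewrite out of the hot
add/remove paths themselves.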