lizhijian@xxxxxxxxxxx wrote:
>
>
> On 17/03/2023 14:12, Dan Williams wrote:
> > lizhijian@xxxxxxxxxxx wrote:
> > [..]
> >> Case D: unsupported && need your input
> >>
> >> To support this situation, the makedumpfile needs to know the
> >> location of metadata for each pmem namespace and the address and
> >> size of metadata in the pmem [start, end)
> >
> > My first reaction is that you should copy what the ndctl utility does
> > when it needs to manipulate or interrogate the metadata space.
> >
> > For example, see namespace_rw_infoblock():
> >
> > https://github.com/pmem/ndctl/blob/main/ndctl/namespace.c#L2022
> >
> > That facility uses the force_raw attribute
> > ("/sys/bus/nd/devices/namespaceX.Y/force_raw") to arrange for the
> > namespace to initialize without considering any pre-existing metadata
> > *and* without overwriting it. In that mode makedumpfile can walk the
> > namespaces and retrieve the metadata written by the previous kernel.
>
> For the dumping application (makedumpfile or cp), it will/should read
> /proc/vmcore to construct the dumpfile, so makedumpfile needs to know
> the *address* and *size/end* of the metadata in the first kernel's
> address space.

Another option, instead of passing the metadata layout into the crash
kernel, is to just parse the infoblock and calculate the boundaries of
userdata and metadata.

> I don't know much about namespace_rw_infoblock() yet, so it is also an
> option if we can get such information from it.
>
> My current WIP proposal is to export a list linking all pmem namespaces
> to vmcore; with this, the kdump kernel doesn't need to rely on the pmem
> driver.

Seems like more work to avoid using the pmem driver, since new
information-passing infrastructure needs to be built versus reusing what
is already there.
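
For reference, here is a rough sketch of the force_raw flow described
above. It is not what namespace_rw_infoblock() literally does, just the
shape of it: the device names ("namespace0.0", "/dev/pmem0") are
illustrative, read_infoblock() is a made-up helper, and the 4K infoblock
offset matches where the kernel's nvdimm core keeps the pfn/dax/btt
infoblock:

        /*
         * Minimal sketch: put a namespace into raw mode and read its
         * infoblock from the resulting block device. Error handling is
         * abbreviated.
         */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        #define INFOBLOCK_OFF 4096
        #define INFOBLOCK_SZ  4096

        static int read_infoblock(const char *ns /* e.g. "namespace0.0" */,
                                  const char *blk /* e.g. "/dev/pmem0" */,
                                  void *buf)
        {
                char path[128];
                int fd;

                /* ask the nvdimm core to attach without interpreting
                 * (or overwriting) any pre-existing metadata */
                snprintf(path, sizeof(path),
                         "/sys/bus/nd/devices/%s/force_raw", ns);
                fd = open(path, O_WRONLY);
                if (fd < 0)
                        return -1;
                if (write(fd, "1", 1) != 1) {
                        close(fd);
                        return -1;
                }
                close(fd);

                /* (re-)enable the namespace here, then read the
                 * infoblock the previous kernel wrote at a 4K offset
                 * from the namespace base */
                fd = open(blk, O_RDONLY);
                if (fd < 0)
                        return -1;
                if (pread(fd, buf, INFOBLOCK_SZ, INFOBLOCK_OFF)
                                != INFOBLOCK_SZ) {
                        close(fd);
                        return -1;
                }
                close(fd);
                return 0;
        }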
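
And a sketch of the "just parse the infoblock" option. The field layout
below is abbreviated from the kernel's struct nd_pfn_sb in
drivers/nvdimm/pfn.h (padding and checksum omitted, signature/checksum
validation skipped); pfn_ranges() is a made-up helper:

        #include <endian.h>
        #include <stdint.h>

        struct pfn_sb {
                uint8_t  signature[16];  /* "NVDIMM_PFN_INFO" / "NVDIMM_DAX_INFO" */
                uint8_t  uuid[16];
                uint8_t  parent_uuid[16];
                uint32_t flags;
                uint16_t version_major;
                uint16_t version_minor;
                uint64_t dataoff;        /* relative to namespace base + start_pad */
                uint64_t npfns;
                uint32_t mode;
                uint32_t start_pad;
                uint32_t end_trunc;
                uint32_t align;
                /* ... padding and checksum omitted ... */
        } __attribute__((packed));

        /*
         * base: namespace start as a first-kernel physical address.
         * On return, [*meta_start, *meta_end) covers the infoblock plus
         * the struct page array, and *data_start is the first userdata
         * byte. On-media fields are little-endian, hence le*toh().
         */
        static void pfn_ranges(uint64_t base, const struct pfn_sb *sb,
                               uint64_t *meta_start, uint64_t *meta_end,
                               uint64_t *data_start)
        {
                uint64_t start = base + le32toh(sb->start_pad);

                *meta_start = start;
                *data_start = start + le64toh(sb->dataoff);
                *meta_end   = *data_start;
        }

With this, the crash kernel needs nothing from the first kernel beyond
the namespace base addresses, since the boundaries are recomputed from
the on-media infoblock itself.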
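
Finally, for concreteness, one possible shape of the per-namespace
records that the WIP "namespace list in vmcore" proposal could export,
e.g. as an ELF note in /proc/vmcore. This is purely hypothetical; none
of these names exist in the kernel today, they only illustrate what
makedumpfile would need per namespace:

        #include <stdint.h>

        struct vmcore_pmem_ns {
                uint64_t base;       /* namespace start, first-kernel physical address */
                uint64_t size;       /* namespace length in bytes */
                uint64_t meta_start; /* metadata start (absolute physical address) */
                uint64_t meta_end;   /* metadata end, exclusive */
        };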