On 2/5/25 18:07, Peter Xu wrote:
On Wed, Feb 05, 2025 at 05:27:13PM +0100, William Roche wrote:
[...]
The HMP command "info ramblock" is implemented with the ram_block_format()
function, which returns a message buffer built from a string for each
ramblock (under RCU_READ_LOCK_GUARD). Our new function instead copies out a
struct with the necessary information.
Relying on the buffer format to retrieve the information doesn't seem
reasonable, and more importantly, this buffer doesn't provide all the needed
data, like fd and fd_offset.
I would say that ram_block_format() and qemu_ram_block_info_from_addr()
serve two different goals.
(A reimplementation of ram_block_format() on top of an adapted version of
qemu_ram_block_info_from_addr() exposing the extra information needed would
be doable, for example, but may not be worth doing for now.)
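To make that concrete, here is roughly the kind of information the struct
carries; the field list below is only an illustration for this mail (names
and types are mine), not the exact code of the patch:

#include <stdint.h>

/* Illustration only: the caller gets a copy of the values, including fd
 * and fd_offset, rather than a pointer into the RAMBlock or a formatted
 * text buffer like the one ram_block_format() builds. */
struct ram_block_info_example {
    char     idstr[256];    /* ramblock name */
    uint64_t offset;        /* start of the block in ram_addr_t space */
    uint64_t used_length;
    uint64_t page_size;     /* backend page size: 4k, 2M, 1G, ... */
    int      fd;            /* backing file descriptor, or -1 */
    uint64_t fd_offset;     /* offset of the region inside that fd */
};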
IIUC the admin should be aware of fd_offset, because the admin should be
fully aware of the start offset of the FDs specified in qemu cmdlines, or in
Libvirt. But yes, we can always add fd_offset into ram_block_format() if
it's helpful.
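(For reference, and with made-up values, this is the kind of cmdline where
that offset already shows up, via the offset property of memory-backend-file
in recent QEMU versions:

qemu-system-x86_64 \
    -object memory-backend-file,id=pc.ram,mem-path=/dev/hugepages/guest0,size=4G,share=on,offset=1G \
    -machine q35,memory-backend=pc.ram
)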
Besides, the existing issues with this patch:
- From the outcome of this patch, it introduces one ramblock API (which is
  ok to me, so far) to do some error_report()s. It looks pretty much like
  something for debugging rather than something serious (e.g. reporting via
  QMP queries, QMP events, etc.). From a debug POV, I still don't see why
  this is needed... as discussed above.
The reason I want to inform the user of a large memory failure more
specifically than of a standard-sized page loss is the significant behavior
difference: our current implementation can transparently handle many
situations without necessarily leading the VM to a crash. But when it comes
to large pages, there is no mechanism to inform the VM of a large memory
loss, and this situation usually leads the VM to crash; it can also produce
some weird situations, like qemu itself crashing or a loop of errors, for
example.
So having a message reporting such a memory loss can help to explain a more
drastic VM or qemu behavior -- it increases the diagnosability of our code.
To verify that a SIGBUS was raised because of a large page loss, we
currently need to check the page_size of the targeted memory block's
backend. We would normally get this information from the SIGBUS siginfo
data (the si_addr_lsb field gives an indication of the page size), but a
KVM weakness, with a hardcoded si_addr_lsb=PAGE_SHIFT value in the SIGBUS
siginfo returned from the kernel, prevents that: see the
kvm_send_hwpoison_signal() function.
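To illustrate (a standalone sketch, not QEMU code; names are mine), this is
what reading the page size from siginfo would look like in a SIGBUS handler,
and why it isn't enough with KVM:

#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <string.h>

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
    if (si->si_code == BUS_MCEERR_AR || si->si_code == BUS_MCEERR_AO) {
        /* Page size of the poisoned mapping as reported by the kernel... */
        uint64_t reported = (uint64_t)1 << si->si_addr_lsb;
        /* ...but when the fault is forwarded by KVM, si_addr_lsb is
         * hardcoded to PAGE_SHIFT in kvm_send_hwpoison_signal(), so a lost
         * 1G hugepage looks exactly like a lost 4k page here. We therefore
         * have to look up the backing ramblock page size ourselves. */
        (void)reported;
    }
}

static int install_sigbus_handler(void)
{
    struct sigaction act;

    memset(&act, 0, sizeof(act));
    act.sa_sigaction = sigbus_handler;
    act.sa_flags = SA_SIGINFO;
    sigemptyset(&act.sa_mask);
    return sigaction(SIGBUS, &act, NULL);
}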
So I first wrote a small API addition called
qemu_ram_pagesize_from_addr() to retrieve only this page_size value from
the impacted address; and later on, this function turned into the richer
qemu_ram_block_info_from_addr() function to have the generated messages
match the existing memory messages as rightly requested by David.
So the main reason is a KVM "weakness" with kvm_send_hwpoison_signal(),
and the second reason is to have richer error messages.
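To make the shape of that first helper concrete, here is a sketch built on
the existing ramblock helpers (assuming QEMU internal headers such as
exec/cpu-common.h and qemu/rcu.h); the patch version may differ in details:

/* Sketch only: given a host address taken from the SIGBUS siginfo, return
 * the page size of the RAMBlock backing it, or 0 when the address does not
 * belong to guest RAM. */
size_t qemu_ram_pagesize_from_addr(void *host_addr)
{
    ram_addr_t offset;
    RAMBlock *rb;

    RCU_READ_LOCK_GUARD();
    rb = qemu_ram_block_from_host(host_addr, false, &offset);
    if (!rb) {
        return 0;
    }
    return qemu_ram_pagesize(rb);
}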
- From a merge POV, this patch isn't a pure memory change, so I'll need to
  get an ack from other maintainers; at least that should be how it works...
I agree :)
I feel like when hwpoison becomes a serious topic, we need a more serious
reporting facility than error reports. So we could have this as a separate
topic to be revisited. It might speed up your prior patches by not having
them blocked on this.
I explained why I think that error messages are important, but I don't
want to get blocked on fixing the hugepage memory recovery because of that.
If you think that not displaying a specific message for a large page loss
can help to get the recovery fixed, then I can change my proposal to do so.
Early next week, I'll send a simplified version of my first 3 patches
without these specific messages and without the preallocation handling in
all remap cases, so you can evaluate this possibility.
Thanks again for your feedback,
William.