The Hyper-V ballooning and memory hotplug protocol always seems to assume
a 4k page size, so all PFNs in the structures used for communication are
4k PFNs. When a different page size is in use on the guest (e.g. 64k),
things go terribly wrong all over:
- When reporting statistics, post_status() reports them in guest pages
  and the hypervisor sees very low memory usage.
- When ballooning, the guest reports back PFNs of the allocated pages
  but the hypervisor treats them as 4k PFNs.
- When unballooning or memory hotplugging, PFNs coming from the host are
  4k PFNs and they may not even be 64k aligned, making them difficult to
  handle.

While statistics and ballooning requests would be relatively easy to
handle by converting between guest and hypervisor page sizes in the
communication structures, handling unballooning and memory hotplug
requests seems harder. In particular, when ballooning up,
alloc_balloon_pages() shatters huge pages so that an unballooning
request can be handled for any part of them. It is not possible to
shatter a 64k page into 4k pages, so it is unclear how to handle
unballooning for a sub-range if such a request ever comes; we can't just
report a 64k page as 16 separate 4k pages.

Ideally, the protocol between the guest and the host should be changed
to allow for different guest page sizes. While there's no solution for
the above mentioned problems, it seems we're better off without the
driver in the problematic cases.

Signed-off-by: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
---
 drivers/hv/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
index 0747a8f1fcee..fb353a13e5c4 100644
--- a/drivers/hv/Kconfig
+++ b/drivers/hv/Kconfig
@@ -25,7 +25,7 @@ config HYPERV_UTILS
 
 config HYPERV_BALLOON
 	tristate "Microsoft Hyper-V Balloon driver"
-	depends on HYPERV
+	depends on HYPERV && (X86 || (ARM64 && ARM64_4K_PAGES))
 	select PAGE_REPORTING
 	help
 	  Select this option to enable Hyper-V Balloon driver.
-- 
2.33.1
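
For illustration only (not part of the patch): the "easy" direction
mentioned above, expanding one guest-sized page into the 4k PFNs the
protocol expects, could look like the sketch below. The helper name
guest_pfn_to_hv_pfns and the GUEST_PAGE_SHIFT constant are hypothetical;
the hard, unsolved direction is the reverse, where the host hands back
4k PFN ranges that need not even be guest-page aligned.

```c
#include <stddef.h>
#include <stdint.h>

#define HV_PAGE_SHIFT    12  /* hypervisor side always uses 4k pages */
#define GUEST_PAGE_SHIFT 16  /* e.g. a 64k-page ARM64 guest (assumed) */

/* Number of 4k hypervisor pages backing one guest page: 16 for 64k. */
#define HV_PAGES_PER_GUEST_PAGE (1UL << (GUEST_PAGE_SHIFT - HV_PAGE_SHIFT))

/*
 * Expand one guest PFN into the contiguous 4k PFNs that cover the same
 * physical range; returns how many PFNs were written to 'out'.
 */
static size_t guest_pfn_to_hv_pfns(uint64_t guest_pfn, uint64_t *out)
{
	size_t i;

	for (i = 0; i < HV_PAGES_PER_GUEST_PAGE; i++)
		out[i] = (guest_pfn << (GUEST_PAGE_SHIFT - HV_PAGE_SHIFT)) + i;
	return HV_PAGES_PER_GUEST_PAGE;
}
```

Note that no such inverse exists for a host request covering, say, 4k
PFNs 33..40: those straddle two 64k guest pages and neither can be
partially unballooned, which is exactly the problem the commit message
describes.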