[please cc: me as I am not subscribed to either mailing list]

Hello,

I am writing to you because Jean is listed as the maintainer of DMI and the rest of you are listed as maintainers of the Hyper-V drivers. If I should have written elsewhere, please kindly point me to the correct place.

I am having trouble running 32-bit Debian (kernel 6.1.0) on Hyper-V on Windows 11 (10.0.22631.3447) when the virtual machine has more than one vCPU assigned. The kernel does not boot and no output is shown on screen. I was able to redirect early printk to the serial port and capture this panic:
early console in setup code
Probing EDD (edd=off to disable)... ok
[ 0.000000] Linux version 6.1.0-18-686-pae (debian-kernel@xxxxxxxxxxxxxxxx) (gcc-12 (Debian 12.2.0-14) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP PREEMPT_DYNAMIC Debian 6.1.76-1 (2024-02-01)
[ 0.000000] BIOS-provided physical RAM map:
[ 0.000000] BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
[ 0.000000] BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
[ 0.000000] BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
[ 0.000000] BIOS-e820: [mem 0x0000000000100000-0x000000007ffeffff] usable
[ 0.000000] BIOS-e820: [mem 0x000000007fff0000-0x000000007fffefff] ACPI data
[ 0.000000] BIOS-e820: [mem 0x000000007ffff000-0x000000007fffffff] ACPI NVS
[ 0.000000] printk: bootconsole [earlyser0] enabled
[ 0.000000] NX (Execute Disable) protection: active
[ 0.000000] BUG: unable to handle page fault for address: ffa45000
[ 0.000000] #PF: supervisor read access in kernel mode
[ 0.000000] #PF: error_code(0x0000) - not-present page
[ 0.000000] *pdpt = 000000000fe74001
[ 0.000000] Oops: 0000 [#1] PREEMPT SMP NOPTI
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 6.1.0-18-686-pae #1 Debian 6.1.76-1
[ 0.000000] EIP: dmi_decode+0x2e3/0x40e
[ 0.000000] Code: 10 53 e8 b8 f9 ff ff 83 c4 0c e9 3e 01 00 00 0f b6 7e 01 31 db 83 ef 04 d1 ef 39 df 0f 8e 2b 01 00 00 8a 4c 5e 04 84 c9 79 1e <0f> b6 54 5e 05 89 f0 88 4d f0 e8 c0 f7 ff ff 8a 4d f0 89 c2 89 c8
[ 0.000000] EAX: cff6d220 EBX: 000024bd ECX: cfd2caff EDX: cf9e942c
[ 0.000000] ESI: ffa40681 EDI: 7ffffffe EBP: cfc37e90 ESP: cfc37e80
[ 0.000000] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00210086
[ 0.000000] CR0: 80050033 CR2: ffa45000 CR3: 0fe78000 CR4: 00000020
[ 0.000000] Call Trace:
[ 0.000000] ? __die_body.cold+0x14/0x1a
[ 0.000000] ? __die+0x21/0x26
[ 0.000000] ? page_fault_oops+0x69/0x120
[ 0.000000] ? uuid_string+0x157/0x1a0
[ 0.000000] ? kernelmode_fixup_or_oops.constprop.0+0x80/0xe0
[ 0.000000] ? __bad_area_nosemaphore.constprop.0+0xfc/0x130
[ 0.000000] ? bad_area_nosemaphore+0xf/0x20
[ 0.000000] ? do_kern_addr_fault+0x79/0x90
[ 0.000000] ? exc_page_fault+0xbc/0x160
[ 0.000000] ? paravirt_BUG+0x10/0x10
[ 0.000000] ? handle_exception+0x133/0x133
[ 0.000000] ? dmi_disable_osi_vista+0x1/0x37
[ 0.000000] ? paravirt_BUG+0x10/0x10
[ 0.000000] ? dmi_decode+0x2e3/0x40e
[ 0.000000] ? dmi_disable_osi_vista+0x1/0x37
[ 0.000000] ? paravirt_BUG+0x10/0x10
[ 0.000000] ? dmi_decode+0x2e3/0x40e
[ 0.000000] ? dmi_smbios3_present+0xd8/0xd8
[ 0.000000] dmi_decode_table+0xa9/0xe0
[ 0.000000] ? dmi_smbios3_present+0xd8/0xd8
[ 0.000000] ? dmi_smbios3_present+0xd8/0xd8
[ 0.000000] dmi_walk_early+0x34/0x58
[ 0.000000] dmi_present+0x149/0x1b6
[ 0.000000] dmi_setup+0x18d/0x22e
[ 0.000000] setup_arch+0x676/0xd3f
[ 0.000000] ? lockdown_lsm_init+0x1c/0x20
[ 0.000000] ? initialize_lsm+0x33/0x4e
[ 0.000000] start_kernel+0x65/0x644
[ 0.000000] ? set_intr_gate+0x45/0x58
[ 0.000000] ? early_idt_handler_common+0x44/0x44
[ 0.000000] i386_start_kernel+0x48/0x4a
[ 0.000000] startup_32_smp+0x161/0x164
[ 0.000000] Modules linked in:
[ 0.000000] CR2: 00000000ffa45000
[ 0.000000] ---[ end trace 0000000000000000 ]---
[ 0.000000] EIP: dmi_decode+0x2e3/0x40e
[ 0.000000] Code: 10 53 e8 b8 f9 ff ff 83 c4 0c e9 3e 01 00 00 0f b6 7e 01 31 db 83 ef 04 d1 ef 39 df 0f 8e 2b 01 00 00 8a 4c 5e 04 84 c9 79 1e <0f> b6 54 5e 05 89 f0 88 4d f0 e8 c0 f7 ff ff 8a 4d f0 89 c2 89 c8
[ 0.000000] EAX: cff6d220 EBX: 000024bd ECX: cfd2caff EDX: cf9e942c
[ 0.000000] ESI: ffa40681 EDI: 7ffffffe EBP: cfc37e90 ESP: cfc37e80
[ 0.000000] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00210086
[ 0.000000] CR0: 80050033 CR2: ffa45000 CR3: 0fe78000 CR4: 00000020
[ 0.000000] Kernel panic - not syncing: Attempted to kill the idle task!
[ 0.000000] ---[ end Kernel panic - not syncing: Attempted to kill the idle task! ]---
The same panic can be reproduced with a vanilla 6.8.4 kernel.

After adding some (or rather a lot of) printk calls to dmi_scan.c, I believe the issue is caused by this line:
<https://github.com/torvalds/linux/blob/13a0ac816d22aa47d6c393f14a99f39e49b960df/drivers/firmware/dmi_scan.c#L295>
or, more precisely, by a dmi_header with dm->type == 10 and dm->length == 0. Because the length is zero, subtracting the (unsigned) header length wraps around, and after the division by two, count ends up just below the signed integer maximum (and stays there after being cast to signed). The loop then runs "forever" until it reaches unmapped memory, which produces the panic above.

I am not sure which side is at fault here: whether a DMI record is never supposed to have length zero, or whether Linux is supposed to parse such a record more gracefully. In any case, after adding an extra if clause to this function to return early when dm->length is zero, the system boots and appears to work fine at first glance (a sketch of that check is appended below my signature).

As I unfortunately have no idea what the kernel uses the DMI data for, I do not know whether there is anything else I should test, given that the "Onboard device information" is obviously missing with this workaround. If I should perform other tests, please tell me. Otherwise I hope that an update of either Hyper-V or the Linux kernel (or perhaps a kernel parameter I missed) will make 32-bit Linux bootable on Hyper-V again in the future.

[Slightly off-topic: since 64-bit kernels work fine, another option for my use case would be a way to run a 32-bit userland containerized or chrooted under a 64-bit kernel such that the userland (especially uname and autoconf) cannot tell it apart from a 32-bit kernel. Nested virtualization would of course also work, but its performance loss negates the benefit of being able to pass more than one of my laptop's cores (2 physical, 4 hyperthreaded) to the VM.]

Thanks for your help and best regards,
Michael
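
P.S. For reference, here is roughly the workaround I tested, as a minimal sketch against drivers/firmware/dmi_scan.c. I am assuming the linked line is the count computation in dmi_save_devices(); in my local build I literally only checked for dm->length == 0, while the variant below guards against any record shorter than the header, which should be equivalent for this crash. The loop body is elided; this is not meant as a proper patch, only to show where the check went:

static void __init dmi_save_devices(const struct dmi_header *dm)
{
	int i, count;

	/*
	 * Workaround: ignore truncated/empty records. With dm->length == 0
	 * the subtraction below is done in unsigned (size_t) arithmetic and
	 * wraps to 0xfffffffc on 32-bit, so count becomes 0x7ffffffe after
	 * the division (presumably the 7ffffffe visible in EDI in the oops
	 * above) and the loop walks far past the mapped SMBIOS table.
	 */
	if (dm->length < sizeof(struct dmi_header))
		return;

	count = (dm->length - sizeof(struct dmi_header)) / 2;

	for (i = 0; i < count; i++) {
		/* ... existing per-device handling, unchanged ... */
	}
}

With this in place the VM boots with multiple vCPUs; the zero-length type 10 record exposed by Hyper-V is simply skipped, which is why the onboard device information mentioned above is missing.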