From: Michael Schierl <schierlm@xxxxxx>
Sent: Monday, April 15, 2024 2:03 PM
>
> In any case, I see the same content for /sys/firmware/dmi/tables/DMI as
> well as /sys/firmware/dmi/tables/smbios_entry_point on 32-bit vs. 64-bit
> kernels. But I see different content when booted with 1 vs. 2 vCPU.
>
> So it is understandable to me why 1 vCPU behaves differently from 2 vCPU,
> but not clear why 32-bit behaves differently from 64-bit (assuming in both
> cases the same parts of the DMI "blob" are parsed).
>
> > If the DMI data is exactly the same, and a
> > 64-bit kernel works, then perhaps there's a bug in the
> > DMI parsing code when the kernel is compiled in 32-bit mode.
> >
> > Also, what is the output of "dmidecode | grep type", both on your
> > patched 32-bit kernel and a working 64-bit kernel?
>
> On 64-bit I see output on stderr as well as stdout.
>
> Invalid entry length (0). DMI table is broken! Stop.
>
> The output before that is the same when grepping for type:
>
> Handle 0x0000, DMI type 0, 20 bytes
> Handle 0x0001, DMI type 1, 25 bytes
> Handle 0x0002, DMI type 2, 8 bytes
> Handle 0x0003, DMI type 3, 17 bytes
> Handle 0x0004, DMI type 11, 5 bytes
>
> When not grepping for type, the only difference is the number of structures:
>
> 1core: 339 structures occupying 17307 bytes.
> 2core: 356 structures occupying 17307 bytes.
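[For readers following the thread: the table both the kernel and dmidecode are walking has a simple layout. Each SMBIOS structure starts with a 4-byte header (type, length, handle); since the length field counts the formatted area *including* that header, a value below 4 is exactly the corruption dmidecode reports as "Invalid entry length (0)". A minimal sketch of such a walk, using a synthetic blob rather than real Hyper-V data:]

```python
def walk_dmi(blob: bytes):
    """Return (handle, type, length) per SMBIOS structure; flag a bad length."""
    out = []
    off = 0
    while off + 4 <= len(blob):
        stype, slen = blob[off], blob[off + 1]
        handle = int.from_bytes(blob[off + 2:off + 4], "little")
        if slen < 4:  # the header itself is 4 bytes, so this entry is corrupt
            out.append(("BROKEN", stype, slen))
            break
        out.append((handle, stype, slen))
        if stype == 127:  # type 127 marks end of table
            break
        off += slen  # skip the formatted area
        # skip the string-set, which is terminated by a double NUL
        while off + 1 < len(blob) and blob[off:off + 2] != b"\x00\x00":
            off += 1
        off += 2
    return out

# Synthetic table: one minimal valid structure, then a zero-length header
# like the one dmidecode chokes on.
blob = bytes([0, 4, 0x00, 0x00]) + b"\x00\x00" + bytes([11, 0, 0x01, 0x00])
print(walk_dmi(blob))  # -> [(0, 0, 4), ('BROKEN', 11, 0)]
```

[A parser like this stops dead at the bad header, which would explain why dmidecode reports fewer structures than the entry point claims.]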
> I put everything (raw and hex) up at
> <https://gist.github.com/schierlm/4a1f38565856c49e4e4b534cf51961be>
>
> > root@mhkubun:~# dmidecode | grep type
> > Handle 0x0000, DMI type 0, 26 bytes
> > Handle 0x0001, DMI type 1, 27 bytes
> > Handle 0x0002, DMI type 3, 24 bytes
> > Handle 0x0003, DMI type 2, 17 bytes
> > Handle 0x0004, DMI type 4, 48 bytes
> > Handle 0x0005, DMI type 11, 5 bytes
> > Handle 0x0006, DMI type 16, 23 bytes
> > Handle 0x0007, DMI type 17, 92 bytes
> > Handle 0x0008, DMI type 19, 31 bytes
> > Handle 0x0009, DMI type 20, 35 bytes
> > Handle 0x000A, DMI type 17, 92 bytes
> > Handle 0x000B, DMI type 19, 31 bytes
> > Handle 0x000C, DMI type 20, 35 bytes
> > Handle 0x000D, DMI type 32, 11 bytes
> > Handle 0xFEFF, DMI type 127, 4 bytes
>
> That looks healthier than mine... Maybe it also depends on the host...?
>
> > Interestingly, there's no entry of type "10", though perhaps your
> > VM is configured differently from mine. Try also
> >
> >     dmidecode -u
> >
> > What details are provided for "type 10" (On Board Devices)? That
> > may help identify which device(s) are causing the problem. Then I
> > might be able to repro the problem and do some debugging myself.
>
> No type 10, but again the error on stderr (even with only 1 vCPU).

OK, good info. If the "dmidecode" program in user space is also complaining
about a bad entry, then Hyper-V probably really has created a bad entry.

Can you give me details of the Hyper-V VM configuration? Maybe a screenshot
of the Hyper-V Manager "Settings" for the VM would be a good starting point,
though some of the details are on sub-panels in the UI.
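[Since the gist holds raw dumps from the 1-vCPU and 2-vCPU boots, one quick way to narrow down where the two tables diverge is to find the first differing byte offset. A throwaway sketch; the toy byte strings below merely stand in for saved copies of /sys/firmware/dmi/tables/DMI:]

```python
def first_diff(a: bytes, b: bytes):
    """Return the first offset where two dumps differ, or None if identical."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return None if len(a) == len(b) else min(len(a), len(b))

# Toy stand-ins for the two saved table dumps:
one_vcpu = bytes.fromhex("00140000") + b"rest-of-table"
two_vcpu = bytes.fromhex("00140000") + b"rest-of-tablX"
print(first_diff(one_vcpu, two_vcpu))  # -> 16
```

[The offset can then be cross-checked against the structure handles that dmidecode reports, to see which entry the divergence lands in.]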
I'm guessing your 32-bit Linux VM is a Generation 1 VM. FWIW, my example was
a Generation 2 VM. When you ran a 64-bit Linux and did not have the problem,
was that with exactly the same Hyper-V VM configuration, or a different
config?

Perhaps something about the VM configuration tickles a bug in Hyper-V and it
builds a faulty DMI entry, so I'm focusing on that aspect. If we can figure
out what aspect of the VM config causes the bad DMI entry to be generated,
there might be an easy workaround for you in tweaking the VM config.

Michael Kelley