From: Bagas Sanjaya <bagasdotme@xxxxxxxxx>
Sent: Tuesday, January 7, 2025 8:07 PM
>
> On Tue, Jan 07, 2025 at 12:20:47PM -0800, mhkelley58@xxxxxxxxx wrote:
> > +VMBus devices are identified by class and instance GUID. (See section
> > +"VMBus device creation/deletion" in
> > +Documentation/virt/hyperv/vmbus.rst.) Upon resume from hibernation,
> > +the resume functions expect that the devices offered by Hyper-V have
> > +the same class/instance GUIDs as the devices present at the time of
> > +hibernation. Having the same class/instance GUIDs allows the offered
> > +devices to be matched to the primary VMBus channel data structures in
> > +the memory of the now resumed hibernation image. If any devices are
> > +offered that don't match primary VMBus channel data structures that
> > +already exist, they are processed normally as newly added devices. If
> > +primary VMBus channels that exist in the resumed hibernation image are
> > +not matched with a device offered in the resumed VM, the resume
> > +sequence waits for 10 seconds, then proceeds. But the unmatched device
> > +is likely to cause errors in the resumed VM.
>
> Did you mean for example, conflicting synthetic NICs?

In the resumed hibernation image, the unmatched device is in a weird
state where it exists and has a driver, but is no longer "open" in the
VMBus layer. Any attempt to do I/O to the device will fail, and
interrupts received from the device are ignored. Presumably there's
user space software or a network connection that has the device open
and expects to be able to interact with it. That software will error
out due to the I/O failure. I haven't thought through all the
implications of such a scenario, so I just left the documentation as
"likely to cause errors" without going into detail. It's an unsupported
scenario, so not likely something that will be improved.

I don't think the issue is necessarily conflicting NICs, though if a
NIC with a different instance GUID was offered, it would show up as a
new NIC in the resumed image, and that might cause conflicts/confusion
with the "dead" NIC.

>
> > +The Linux ends of Hyper-V sockets are forced closed at the time of
> > +hibernation. The guest can't force closing the host end of the socket,
> > +but any host-side actions on the host end will produce an error.
>
> Nothing can be done on host-side?

Not really. Whatever host-side software is using the Hyper-V socket
will just get an error the next time it tries to do I/O over the
socket. Is there something you had in mind that the host could/should
do?

>
> > +Virtual PCI devices are physical PCI devices that are mapped directly
> > +into the VM's physical address space so the VM can interact directly
> > +the hardware. vPCI devices include those accessed via what Hyper-V
>
> "... interact directly with the hardware."

Thanks for your careful reading. I'll add the missing "with". :-)

> > +calls "Discrete Device Assignment" (DDA), as well as SR-IOV NIC
> > +Virtual Functions (VF) devices. See Documentation/virt/hyperv/vpci.rst.
> > +
> > <snipped>...
> > +SR-IOV NIC VFs similarly have a VMBus identity as well as a PCI
> > +identity, and overall are processed similarly to DDA devices. A
> > +difference is that VFs are not offered to the VM during initial boot
> > +of the VM. Instead, the VMBus synthetic NIC driver first starts
> > +operating and communicates to Hyper-V that it is prepared to accept a
> > +VF, and then the VF offer is made. However, if the VMBus connection is
> > +unloaded and then re-established without the VM being rebooted (as
> > +happens in Steps 3 and 5 in the Detailed Hibernation Sequence above,
> > +and similarly in the Detailed Resume Sequence), VFs are already part
>
> "... that are already ..."

Right. I'll fix this wording problem as well.

Michael

> > +of the VM and are offered to the re-established VMBus connection
> > +without intervention by the synthetic NIC driver.
>
> Thanks.
>
> --
> An old man doll... just what I always wanted! - Clara
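
P.S. To make the matching step at the top of this mail concrete: an offer
received after resume is matched against an existing primary channel by
comparing both the class (interface type) GUID and the instance GUID. The
sketch below is illustrative only -- the struct and function names are
simplified stand-ins invented for this mail, not the actual kernel code
(the real logic lives in the VMBus channel management code,
drivers/hv/channel_mgmt.c):

/*
 * Illustrative sketch, not actual kernel code: match an incoming offer
 * to an existing primary channel by class and instance GUID.
 */
#include <linux/uuid.h>
#include <linux/list.h>

struct example_channel {
	struct list_head listentry;
	guid_t class_guid;	/* interface type GUID from the offer */
	guid_t instance_guid;	/* instance GUID from the offer */
};

static struct example_channel *
example_match_offer(struct list_head *channel_list,
		    const guid_t *class_guid, const guid_t *instance_guid)
{
	struct example_channel *chan;

	list_for_each_entry(chan, channel_list, listentry) {
		if (guid_equal(&chan->class_guid, class_guid) &&
		    guid_equal(&chan->instance_guid, instance_guid))
			return chan;	/* reuse the existing primary channel */
	}
	return NULL;	/* no match: handle as a newly added device */
}

If the loop finds a match, the primary channel from the hibernation image
(and the driver already bound to it) is reused; if it returns NULL, the
offer is processed as a brand-new device, which is the "newly added
devices" case described in the doc text above.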