From: Hamza Mahfooz <hamzamahfooz@xxxxxxxxxxxxxxxxxxx> Sent: Monday, January 27, 2025 1:43 PM
>
> On Mon, Jan 27, 2025 at 09:02:22PM +0000, Michael Kelley wrote:
> > From: Hamza Mahfooz <hamzamahfooz@xxxxxxxxxxxxxxxxxxx> Sent: Monday, January 27, 2025 10:10 AM
> > >
> > > We should select PCI_HYPERV here, otherwise it's possible for devices
> > > to not show up as expected, at least not in an orderly manner.
> >
> > The commit message needs more precision: What does "not show up"
> > mean, and what does "not in an orderly manner" mean? And "it's possible"
> > is vague -- can you be more specific about the conditions? Also, avoid
> > the use of personal pronouns like "we".
> >
> > But the commit message notwithstanding, I don't think this is a change
> > that should be made. CONFIG_PCI_HYPERV refers to the VMBus device
> > driver for handling vPCI (a.k.a. PCI pass-thru) devices. It's perfectly
> > possible and normal for a VM on Hyper-V to not have any such devices,
> > in which case the driver isn't needed and should not be forced to be
> > included. (See Documentation/virt/hyperv/vpci.rst for more on vPCI
> > devices.)
>
> Yeah, we ran into an issue where CONFIG_NVME_CORE=y and
> CONFIG_PCI_HYPERV=m caused the passed-through SSDs not to show up
> (i.e. they aren't visible to userspace). I guess that's because
> PCI_HYPERV has to load before the NVMe stuff for that workload. So, I
> thought it was reasonable to select PCI_HYPERV here to prevent someone
> else from shooting themselves in the foot. Though, I guess it's really
> on the distro guys to get that right.
>

Hmmm. By itself, the combination of CONFIG_NVME_CORE=y and
CONFIG_PCI_HYPERV=m should not cause a problem for an NVMe data disk.
If you are seeing a problem with that combo for NVMe data disks, then
maybe something else is going wrong.

However, things are trickier if the NVMe disk is the boot disk with the
OS. In that case, that CONFIG_* combination is still OK, but the
Hyper-V PCI driver module *must* be included in the initramfs image so
that it can be loaded and used when finding and mounting the root file
system. The same thing is true for Hyper-V storvsc when the boot disk
is a SCSI disk -- the storvsc driver and generic SCSI stack must either
be built in, or the modules must be included in the initramfs.

The need to have NVME_CORE and the Hyper-V PCI driver available to
mount an NVMe root disk is another case where different distros have
taken different approaches. Some build them into the kernel image so
they don't have to worry about the initramfs, while other distros make
them modules and include them in the initramfs.

Michael
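
For concreteness, the change being debated above amounts to a Kconfig
"select" along these lines. This is only a sketch of the idea, not the
actual patch (the patch itself isn't quoted in the thread), and placing
the select under the HYPERV symbol is an assumption:

    # drivers/hv/Kconfig -- hypothetical placement; the real patch may
    # hang the select off a different symbol
    config HYPERV
            tristate "Microsoft Hyper-V client drivers"
            # ... existing depends/select lines elided ...
            select PCI_HYPERV   # the proposed select: forces the vPCI
                                # driver in whenever HYPERV is enabled

One reason reviewers tend to be wary of this pattern is that "select"
ignores the selected symbol's own dependencies (PCI_HYPERV depends on
PCI_MSI, for example), so it can produce configurations that the
selected driver can't actually build or run in.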
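
On the initramfs point: for dracut-based distros, ensuring the Hyper-V
vPCI and NVMe modules land in the initramfs is a small drop-in config
plus a rebuild. A sketch, assuming dracut and the upstream module name
pci-hyperv (from drivers/pci/controller/pci-hyperv.c); the drop-in file
name here is made up:

    # /etc/dracut.conf.d/hyperv-vpci.conf  (hypothetical file name)
    # Pull the Hyper-V vPCI front end and the NVMe driver into the
    # initramfs so an NVMe root disk is reachable at early boot.
    add_drivers+=" pci-hyperv nvme "

Then regenerate with "dracut --force" and verify with
"lsinitrd | grep -E 'pci-hyperv|nvme'". Distros that instead build
these drivers into the kernel image (=y) skip this step entirely, which
is the other approach described above.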