Re: Unable to pass SATA controller to VM with intel_iommu=igfx_off


 



First of all, thank you very much!

On 08.01.2018 23:19, Alex Williamson wrote:

> We already have quirks to support various other versions of the Marvell
> chip, but the 9128 is missing, so it's just a couple lines to add it.
> This is against v4.9.75:
> 
> diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
> index 98eba9127a0b..19ca3c9fac3a 100644
> --- a/drivers/pci/quirks.c
> +++ b/drivers/pci/quirks.c
> @@ -3868,6 +3868,8 @@ static void quirk_dma_func1_alias(struct pci_dev *dev)
>  /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */
>  DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230,
>  			 quirk_dma_func1_alias);
> +DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9128,
> +			 quirk_dma_func1_alias);
>  DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0642,
>  			 quirk_dma_func1_alias);
>  /* https://bugs.gentoo.org/show_bug.cgi?id=497630 */

There is good news, and there is bad news:

The good news is that the patch works as expected. I applied it to
kernel 4.9 and recompiled the kernel (which was not that easy for me
because this machine boots from ZFS, so beware of forgetting to rebuild
the ZFS modules and to include them in the new kernel / initramfs ...).
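For anyone facing the same pitfall, here is a rough sketch of the post-build steps on a Debian system with ZFS on root; it assumes the ZFS modules come from zfs-dkms and the initramfs is built with initramfs-tools, and the version string is made up:

```shell
# Hypothetical version string of the freshly built kernel -- adjust.
KVER=4.9.75-patched

# Rebuild the out-of-tree ZFS/SPL modules against the new kernel:
dkms autoinstall -k "$KVER"

# Regenerate the initramfs so it actually contains the ZFS modules:
update-initramfs -u -k "$KVER"

# Verify the modules made it in before rebooting:
lsinitramfs /boot/initrd.img-"$KVER" | grep -E 'zfs|spl'
```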

I then booted the new kernel with intel_iommu=on. The boot process
went normally - the AHCI / SATA driver now behaves correctly when
initializing the controller in question.

I then configured my system so that the vfio_pci kernel driver grabs
that controller during boot, and made sure that vfio_pci is loaded
before the AHCI kernel driver.
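One common way to do this (not necessarily exactly what I used) is via modprobe configuration; this sketch assumes the 9128 shows up as vendor:device 1b4b:9128 - check the actual IDs with lspci -nn first:

```shell
# Tell vfio-pci to claim the Marvell controller by its PCI IDs, and
# make sure vfio-pci is considered before ahci can bind the device:
cat > /etc/modprobe.d/vfio.conf <<'EOF'
options vfio-pci ids=1b4b:9128
softdep ahci pre: vfio-pci
EOF

# Propagate the new configuration into the initramfs:
update-initramfs -u
```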

That also worked well; dmesg | grep vfio showed the expected output,
and lspci showed that the controller was indeed under the control of
vfio_pci.
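For reference, these are the kinds of checks I mean (02:00.0 being the controller's address in my setup):

```shell
# Confirm which driver is bound to the controller:
lspci -nnk -s 02:00.0        # should report "Kernel driver in use: vfio-pci"

# Look at the vfio-pci probe messages:
dmesg | grep -i vfio

# The controller's IOMMU group must only contain devices that can all
# be handed over to the guest:
ls /sys/bus/pci/devices/0000:02:00.0/iommu_group/devices/
```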

But I couldn't get any further, and this is the bad news:

I spent the rest of the day trying to actually pass the controller
through to the VM in question. I start this VM from the command line:

qemu_xxx <option>

My first step was to change the machine model from pc (the default) to
q35 because I thought it would be a good idea to use the default pcie.0
bus that model provides.

Since https://github.com/qemu/qemu/blob/master/docs/pcie.txt says that
we shouldn't connect PCIe devices directly to the pcie.0 bus, I then
added

-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1

to the command line and booted the VM. This went normally, but of
course the OS in the VM did not find the controller, because the line
above only adds a new PCIe root port and does not pass through the
controller. However, I consider it noteworthy that this much worked.

As the final step, I added

-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0

to the command line.

For the rest of the day, I tested every combination of vfio-pci and
ioh3420 options that I found in various tutorials / threads or that
came to my mind. With every one of these combinations, the VM shows
the same behavior:

The SeaBIOS boot screen hangs for about a minute or so. Then the OS
(W2K8 R2 Server, 64 bit) hangs forever at the first screen showing
the progress bar. By booting into safe mode, I found out that this
happens when it tries to load the classpnp.sys driver.

In some cases, when starting the VM, there was a message on the console
saying it was disabling IRQ 16.
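In case it helps with diagnosis, one could check whether other host devices share that legacy interrupt line with the assigned controller - a conflict on a shared INTx line is one known trigger for the "Disabling IRQ #16" message:

```shell
# Which host devices are currently wired to IRQ 16?
grep ' 16:' /proc/interrupts

# Which interrupt does the assigned controller report?
lspci -vv -s 02:00.0 | grep -i irq
```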

This is the point where I am lost (again).

I think I have done something very basic badly wrong; my interpretation
is that the guest does not find the bus topology it expects. What scares
me is that even SeaBIOS already hangs, although most of the articles out
there propose (more or less) exactly what I am doing.

Could my (Debian stretch's) qemu be too old (it is 2.8.0)?

Or does qemu / vfio_pci have the same requester ID problem as the kernel?

What else could be the reason?

An example of a command line I have used:

/usr/bin/qemu-system-x86_64
-machine q35,accel=kvm
-cpu host
-smp cores=2,threads=2,sockets=1
-rtc base=localtime,clock=host,driftfix=none
-drive file=/vm-image/dax.img,format=raw,if=virtio,cache=writeback,index=0
-drive file=/dev/sda,format=raw,if=virtio,cache=none,index=1
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1
-device vfio-pci,host=02:00.0,bus=root.1,addr=00.0
-boot c
-pidfile /root/qemu-kvm/qemu-dax.pid
-m 12288
-k de
-daemonize
-usb -usbdevice "tablet"
-name dax
-device virtio-net-pci,vlan=0,mac=02:01:01:01:02:01
-net tap,vlan=0,name=dax,ifname=dax0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown
-vnc :2

> Personally, I don't often assign storage controllers and they're mostly
> all terrible.  The Marvell controllers nearly all have this DMA
> aliasing issue afaik, but you can see in the code nearby the patch
> above that we already have quirks for many of them.  For instance you
> could buy a similarly priced Marvell 9230 card rather than a 9128 and
> it might have worked since we've already got a quirk for it.  Sorry I
> can't be more precise, even as the device assignment maintainer I
> generally use virtio for VM disks and find it to be sufficiently fast
> and feature-ful.  Thanks,

Thank you very much - no problem. Just a short explanation: my issue is
not performance; rather, I need to be able to dynamically mount and
unmount ("eject") disks from within the VM (via the famous Windows tray
icon "Safely remove hardware").

Some days ago, Paolo Bonzini explained to me on this list how I could
achieve clean removal of HDDs from a VM, either via SCSI hotplug or via
PCIe hotplug. Both suggestions worked at first sight.

However, I am not sure whether W2K8 R2 handles SCSI / PCIe hotplug
reliably every time, and both methods require commands in the VM as
well as on the host system.

For my use case (changing a disk twice a day without restarting the VM),
this is too complicated and error-prone; I really would like a solution
where I only need to eject the disk from within the Windows VM. If I
could finally pass that (or another) SATA controller through to the VM,
this problem would be solved in the most elegant way.
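To illustrate what I mean by "commands on the host system": the hotplug approach needs roughly the following kind of interaction with the QEMU monitor on every disk change (drive and device names here are made up, and the exact sequence may differ from what Paolo suggested):

```shell
# In the QEMU monitor (e.g. started with -monitor stdio):
#   (qemu) drive_add 0 if=none,file=/dev/sdb,format=raw,id=hotdisk
#   (qemu) device_add virtio-blk-pci,drive=hotdisk,id=hotdev
# ... and, after ejecting inside the guest, remove it again:
#   (qemu) device_del hotdev
```

With a passed-through SATA controller, all of this would be replaced by the single "Safely remove hardware" click inside Windows.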

Thank you very much again for any help,

Binarus


