Re: Intel VT-d and KVM

On Sat, Aug 7, 2010 at 7:05 PM, Nirmal Guhan <vavatutu@xxxxxxxxx> wrote:
> On Sat, Aug 7, 2010 at 8:29 AM, Alex Williamson
> <alex.williamson@xxxxxxxxxx> wrote:
>> On Fri, 2010-08-06 at 17:29 -0700, Nirmal Guhan wrote:
>>> On Thu, Aug 5, 2010 at 10:44 PM, Alex Williamson
>>> <alex.williamson@xxxxxxxxxx> wrote:
>>> > On Thu, Aug 5, 2010 at 12:53 PM, Nirmal Guhan <vavatutu@xxxxxxxxx> wrote:
>>> >> Hi,
>>> >>
>>> >> I am using Fedora 12 (2.6.32.10-90.fc12.i686) on both host and guest.
>>> >> I see that packets destined for a particular port (iperf/5001, if that
>>> >> matters) in the guest can be captured with tcpdump on the host, whereas
>>> >> the reverse is not true, i.e. if I run an iperf server on the host,
>>> >> tcpdump in the guest cannot see the packets sent to the host. Is this
>>> >> expected behavior?
>>> >
>>> > Yes.
>>>
>>> So all the packets are received by the host kernel and then sent to
>>> the guest? Is this the high-level flow?
>>
>> Yes, when you use bridged/tap networking, all packets first go to the
>> host, then to the bridge, and then to the guests.
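
(For context, a typical bridged/tap setup on the host side looks roughly
like the following; br0, eth0 and tap0 are just placeholder names here,
and the qemu-kvm options are abbreviated:

  # brctl addbr br0
  # ifconfig eth0 0.0.0.0 up
  # brctl addif br0 eth0
  # dhclient br0                # or give br0 a static address
  # tunctl -t tap0 -u root      # create the tap device
  # brctl addif br0 tap0 && ifconfig tap0 up
  # qemu-kvm ... -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no

so every frame destined for the guest really does pass through the host
bridge first, which is why tcpdump on the host sees it.)
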
>>
>>> >
>>> >> I have enabled VT-d (through intel_iommu=on) and so was thinking that
>>> >> the guest would read the packets directly. If this is true, I wonder
>>> >> how tcpdump on the host can read guest packets, or is my understanding
>>> >> wrong? Please clarify.
>>> >
>>> > Enabling VT-d on the host is only the first step, that doesn't
>>> > automatically change the behavior of the guest.  VT-d allows you to
>>> > make use of the -pcidevice (or preferably -device pci-assign) option
>>> > to kvm, which exposes a PCI device directly to the guest.  For
>>> > instance, if you have a NIC at PCI address 00:19.0 that you want to
>>> > dedicate to a guest, you can use "-device pci-assign,host=00:19.0", and
>>> > the device should show up (more than likely at a different PCI address)
>>> > in the guest.  (You'll have to unbind the device from host drivers, but
>>> > the error messages will tell you how to do that.)  In this model, packets
>>> > destined for the guest are only seen by the guest.
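
(The unbind step usually amounts to something like the following; the
"8086 10ef" vendor/device ID matches the lspci output quoted further
down, and the exact sysfs paths may differ slightly by kernel version:

  # modprobe pci-stub           # if pci-stub is built as a module
  # echo "8086 10ef" > /sys/bus/pci/drivers/pci-stub/new_id
  # echo 0000:00:19.0 > /sys/bus/pci/devices/0000:00:19.0/driver/unbind
  # echo 0000:00:19.0 > /sys/bus/pci/drivers/pci-stub/bind
  # qemu-kvm ... -device pci-assign,host=00:19.0

After that the NIC is no longer usable on the host and appears inside
the guest.)
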
>>>
>>> Thanks. This worked, but surprisingly with a performance penalty. The
>>> guest ethernet device (eth2 in my case) came up at 10Mb/s. I changed
>>> the speed to 100Mb/s using ethtool, but the performance (Mbits/sec
>>> measured with iperf) still did not improve. Any clues?
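
(The speed change was done with something along the lines of

  # ethtool -s eth2 speed 100 duplex full autoneg off

inside the guest, but forcing the link speed made no difference to the
iperf numbers below.)
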
>>
>> What's the device? (lspci -vvv from the host)  The negotiated link speed
>> shouldn't change just because the device is assigned to a VM.  What are
>> you using to measure performance?
>
> It is the Intel e1000e driver.
> # lspci -vvv
> 00:19.0 Ethernet controller: Intel Corporation Device 10ef (rev 06)
>        Subsystem: Intel Corporation Device 0000
>        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
>        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
>        Latency: 0
>        Interrupt: pin A routed to IRQ 41
>        Region 0: Memory at ff500000 (32-bit, non-prefetchable) [size=128K]
>        Region 1: Memory at ff570000 (32-bit, non-prefetchable) [size=4K]
>        Region 2: I/O ports at f040 [size=32]
>        Capabilities: [c8] Power Management version 2
>                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
>                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
>        Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
>                Address: 00000000fee0f00c  Data: 4182
>        Capabilities: [e0] PCI Advanced Features
>                AFCap: TP+ FLR+
>                AFCtrl: FLR-
>                AFStatus: TP-
>        Kernel driver in use: e1000e
>        Kernel modules: e1000e
>
> I am using iperf to measure performance, with an identical invocation for
> both the PCI passthrough (VT-d) case and the non-passthrough case.
> Command used: "iperf -c <addr> -w 16000" (the window size actually
> selected was 32K even though 16K was requested).
>  0.0-30.0 sec  33.6 MBytes  9.39 Mbits/sec  <-- guest with PCI passthrough + VT-d
>  0.0-30.0 sec   324 MBytes  90.6 Mbits/sec  <-- guest without PCI passthrough
>  0.0-30.0 sec   334 MBytes  93.3 Mbits/sec  <-- host
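
(For anyone wanting to reproduce the numbers: the matching server side
is simply started with

  # iperf -s -w 16000

on the receiving machine, with the client command above run against it;
the 30-second intervals suggest the client also had "-t 30" added.)
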
>
> --Nirmal
>
>> Thanks,
>>
>> Alex

Adding the kvm list back. Please help!

Thanks, Nirmal
--

