Hi all,
These days I'm researching SR-IOV and live migration. As we all know, there is a big problem: SR-IOV and live migration cannot coexist. I heard that KVM + SR-IOV + macvtap can work around this, so I want to try it.
My environment:
Host: Dell R610, OS: RHEL 6.4 (kernel 2.6.32)
NIC: Intel 82599
I am following a document from an Intel engineer (quoted at the end of this mail), which says I should write XML like the following:
============================
<network>
  <name>macvtap_passthrough</name>
  <forward mode='passthrough'>
    <interface dev='vf0'/>
    <interface dev='vf1'/>
    .. ..
  </forward>
</network>
============================
I guess vf0 and vf1 here should be the VFs of the Intel 82599.
What confuses me is that vf0 and vf1 are not visible directly on the host with "ifconfig"; that is to say, vf0 and vf1 are not real physical interface names.
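For reference, this is how I understand VFs to work on this platform: on RHEL 6 the ixgbe driver creates VFs via its max_vfs module option, and each VF only gets a host netdev once the ixgbevf driver binds to it. A hedged sketch (the PF is the 82599; the VF count of 2 is an assumption):

```shell
# Sketch, assuming an ixgbe PF and the RHEL 6-era "max_vfs" module option.
modprobe -r ixgbe
modprobe ixgbe max_vfs=2

# The VFs first appear only as PCI functions:
lspci | grep -i "virtual function"

# Once ixgbevf binds them, each VF gets its own host netdev, and those
# netdev names (not "vf0"/"vf1") should be what the network XML references:
ip link show
```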
I tried:
#: virsh net-define macvtap_passthrough.xml
#: virsh net-start macvtap_passthrough
When I tried to configure macvtap_passthrough for a VNIC of a VM, virt-manager reported: "Can't get vf 0, no such a device".
When I tried from virt-manager (Add Hardware -> Network -> host device (macvtap_passthrough: passthrough network)), I got an error like: "Error adding device: xmlParseDoc() failed".
So I guess I cannot write "<interface dev='vf0'/>" in the XML.
I tried changing it as below, but the result is the same.
============================
<network>
  <name>macvtap_passthrough</name>
  <forward mode='passthrough'>
    <pf dev='p2p1'/>  <!-- p2p1 is the Intel SR-IOV physical NIC -->
  </forward>
</network>
============================
I don't know how to write this correctly. Please help.
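Here is a sketch of what I plan to try next: listing the VF netdev names as the host actually reports them, instead of "vf0"/"vf1". The names eth6/eth7 are placeholders, not real devices on my host:

```shell
# Sketch: the VF netdev names (eth6/eth7 here) are placeholders -- substitute
# whatever names "ip link" reports for the ixgbevf interfaces on the host.
cat > macvtap_passthrough.xml <<'EOF'
<network>
  <name>macvtap_passthrough</name>
  <forward mode='passthrough'>
    <interface dev='eth6'/>
    <interface dev='eth7'/>
  </forward>
</network>
EOF
# Then: virsh net-define macvtap_passthrough.xml
#       virsh net-start macvtap_passthrough
```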
You can refer to the Intel document quoted below.
Many thanks.
========== document from Intel ========================
Linux/KVM VM Live Migration (SRIOV And MacVtap)
By Waseem Ahmad (waseem.ahmad@xxxxxxxxx)
In this scenario we are using 3 machines:
Server 1: DNS/NFS - nfs.vtt.priv
Server 2: HV1
Server 3: HV2
HV1 and HV2 are Linux/KVM machines. We will get to them in a minute; however, we first must address KVM and NFS.
NFS:
Create a storage area that both HV1 and HV2 can access. There are several methods available for this (FCoE/iSCSI/NFS). For this write-up we use NFS.
Configure NFS:
Create a directory on nfs.vtt.priv where you want your storage to be. In this case we used /home/vmstorage.
Edit /etc/exports and add the following
/home/vmstorage 172.0.0.0/1(rw,no_root_squash,sync)
Now to /etc/sysconfig/nfs
Uncomment RPCNFSDARGS="-N 4"
This disables NFSv4. If you don't do this, you will have issues accessing the share from within virt-manager.
Add all three machines' IP addresses to each machine's /etc/hosts file.
MIGRATION WILL NOT WORK WITHOUT FULLY QUALIFIED DOMAIN NAMES.
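The NFS steps above can be sketched as shell commands. The export line is taken verbatim from this write-up; the IP addresses in the /etc/hosts example are placeholders:

```shell
# On nfs.vtt.priv -- create and export the storage directory.
mkdir -p /home/vmstorage
echo '/home/vmstorage 172.0.0.0/1(rw,no_root_squash,sync)' >> /etc/exports

# In /etc/sysconfig/nfs, uncomment: RPCNFSDARGS="-N 4"   (disables NFSv4)

exportfs -ra
service nfs restart

# On all three machines, /etc/hosts must carry FQDNs (IPs are placeholders):
# 172.16.0.10  nfs.vtt.priv
# 172.16.0.11  hv1.vtt.priv  hv1
# 172.16.0.12  hv2.vtt.priv  hv2
```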
KVM:
On both HV1, and HV2 servers:
Edit /etc/selinux/config
SELINUX=disabled
Edit /etc/libvirt/qemu.conf
Set security_driver = "none"
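These two edits can be scripted; the sed/echo forms below are a sketch (a reboot, or setenforce 0 plus a libvirtd restart, is needed for them to take effect):

```shell
# Disable SELinux persistently (takes effect on reboot)...
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0                      # ...and immediately for this boot

# Tell libvirt/QEMU not to use a security driver.
echo 'security_driver = "none"' >> /etc/libvirt/qemu.conf
service libvirtd restart
```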
On HV1 and HV2 start Virtual Machine Manager
Double click on localhost(QEMU)
Then click on the storage tab at the top of the window that pops up
In the bottom left-hand corner is a button with a + sign in it; click on that. A new window will appear entitled Add a New Storage Pool.
In the name box type vmstorage, then click on the type box and select netfs: Network Exported Directory, now click next.
You will see the last step of the network Storage Pool Dialog. The first option is the target path. This is the path where we will mount our storage on the local server. I have chosen to leave this alone.
The next option is format, leave this set on auto:
Host name: nfs.vtt.priv
Source path: /home/vmstorage
Click on finish
Repeat the above steps on HV2 server
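The same pool can also be defined from the CLI on each hypervisor; a sketch using virsh (the target path is an assumption -- virt-manager picks its own default):

```shell
# netfs pool backed by the NFS export above.
virsh pool-define-as vmstorage netfs \
    --source-host nfs.vtt.priv \
    --source-path /home/vmstorage \
    --target /var/lib/libvirt/images/vmstorage
virsh pool-build vmstorage        # creates the local mount point
virsh pool-start vmstorage
virsh pool-autostart vmstorage
```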
Create vms
On the HV1 server, go back to the Connection Details screen (the one that showed up when you double-clicked on localhost (QEMU)) and click on the storage tab again.
Click on vmstorage then click on new volume at the bottom.
A new dialog will appear entitled add a storage volume.
In the Name box type vm1
In the Max Capacity box type 20000
And do the same in the allocation box then click finish.
Now you can close the Connection Details box by clicking on the x in the corner.
Now click on the new-VM button in the corner, right underneath File, and type the name of our VM, vm1, in the box entitled Name. Choose your installation media, probably Local install media, and click Forward. Click Use CDROM or DVD, and place a RHEL 6.2 DVD in the DVD drive on HV1. Select Linux for the OS type, and Red Hat Enterprise Linux 6 for the version. For memory I chose to leave the default of 1024, and assigned 1 CPU to the guest. Click Forward, select "Select managed or other existing storage", and click the Browse button. Click on vmstorage, select vm1.img, then click Forward. Then click Finish.
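For those who prefer the CLI, the volume and guest creation above can be sketched with virsh and virt-install (sizes match the GUI steps; the CDROM device path is an assumption):

```shell
# 20000 MB volume in the vmstorage pool, fully allocated as in the GUI steps.
virsh vol-create-as vmstorage vm1.img 20000M --allocation 20000M

# Guest matching the dialog choices: 1024 MB RAM, 1 vCPU, RHEL 6 from DVD.
virt-install --name vm1 --ram 1024 --vcpus 1 \
    --disk vol=vmstorage/vm1.img \
    --cdrom /dev/sr0 \
    --os-variant rhel6
```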
We will configure network after we make sure migration between the two servers works properly.
Now go ahead and install the operating system as you would normally.
Create networks
Create a file that looks like the following (there is no support for adding a macvtap interface from the GUI as of yet, so this is the only manual step in the process). Create a file named macvtap_passthrough.xml with the following contents:
<network>
  <name>macvtap_passthrough</name>
  <forward mode='passthrough'>
    <interface dev='vf0'/>
    <interface dev='vf1'/>
    .. ..
  </forward>
</network>

<network>
  <name>macvtap_bridge</name>
  <forward mode='bridge'>
    <interface dev='p3p1'/>
  </forward>
</network>
(Note: virsh net-define expects a single <network> element per file, so save each definition in its own file.)
Save it and run the following commands:
virsh net-define macvtap_passthrough.xml
virsh net-start macvtap_passthrough
Make sure all of your virtual interfaces that you used in the xml file are up.
for i in $(ifconfig -a | awk '/eth/ {print $1}'); do ifconfig $i up; done
Then double click on your vm and click on the big blue i
On the next screen click on add hardware, then on network, then select Virtual network “macvtap_passthrough”
Then click on finish.
Start your vm and make sure that the macvtap was created on the host by doing
ip link | grep 'macvtap'
In the vm configure the ip information for the virtio adapter.
In the virtual machine manager click on file, add connection.
Then check "Connect to remote host", fill in the username and hostname, then click Connect.
Right-click on your VM and select Migrate, select the host you want to migrate the machine to, then click on Advanced options, check the Address box, type the IP address of the machine you want to migrate to, and click the Migrate button.
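The GUI migration has a one-line virsh equivalent, which is handy for scripting; a sketch (the hostname is assumed to resolve via the /etc/hosts entries made earlier):

```shell
# Live-migrate vm1 from HV1 to HV2 over ssh; requires shared storage,
# which the NFS pool above provides.
virsh migrate --live vm1 qemu+ssh://hv2.vtt.priv/system
```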
--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list