Hey folks,
I am curious to understand a bit more about the core use of the emulatorpin CPUs with libvirt.
For example:
<vcpupin vcpu='0' cpuset='34'/>
<vcpupin vcpu='1' cpuset='14'/>
<vcpupin vcpu='2' cpuset='10'/>
<vcpupin vcpu='3' cpuset='30'/>
<emulatorpin cpuset='10,14,30,34'/>
In this case:
- The VM has 4 vCPUs.
- Each vCPU is pinned to its own physical CPU thread: 34, 14, 10, 30.
- The emulatorpin is attached to those same physical CPUs (see the annotated snippet below).
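For reference, this is how I currently read that configuration; it is just the same pinning restated inside its <cputune> parent, with comments on which threads each element pins (my understanding from the libvirt docs, so please correct me if it is wrong):

<cputune>
  <!-- Each vcpupin binds one guest vCPU thread to one host CPU -->
  <vcpupin vcpu='0' cpuset='34'/>
  <vcpupin vcpu='1' cpuset='14'/>
  <vcpupin vcpu='2' cpuset='10'/>
  <vcpupin vcpu='3' cpuset='30'/>
  <!-- emulatorpin binds the emulator threads, i.e. QEMU threads that are
       not vCPUs or iothreads (main event loop, migration, etc.) -->
  <emulatorpin cpuset='10,14,30,34'/>
</cputune>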
In our case:
- Same setup for cores.
- Running DPDK inside the virtual machine.
- The compute node (host/libvirt side) is using OVS, but without DPDK.
What we saw is the following:
- DPDK in the guest is polling two cores at 100% to process packets.
- From the compute metrics, we see %USER at 100% when the application starts.
- From the compute metrics, as traffic increases, we see %SYSTEM going up.
- As %SYSTEM goes up, %USER goes down.
We are not quite sure, but could this directly impact DPDK's ability to keep processing traffic at the same rate?
- We can move the emulator threads to other cores live (with virsh emulatorpin); a sketch of what we have in mind follows this list.
- This seems to be the recommended default setup for VNF/NFV deployments.
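To make that concrete, here is a rough sketch of the alternative layout we are considering. The core numbers 2 and 6 are made up for illustration; the real candidates would have to be free housekeeping cores on the same NUMA node as the vCPUs:

<cputune>
  <vcpupin vcpu='0' cpuset='34'/>
  <vcpupin vcpu='1' cpuset='14'/>
  <vcpupin vcpu='2' cpuset='10'/>
  <vcpupin vcpu='3' cpuset='30'/>
  <!-- Hypothetical: emulator threads moved off the vCPU cores onto
       spare housekeeping cores 2 and 6 -->
  <emulatorpin cpuset='2,6'/>
</cputune>

(For a running guest, the same change can be applied with "virsh emulatorpin <domain> 2,6 --live".)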
Feels like I am missing a piece of the puzzle, but gotta start somewhere!
Thanks!