Hi Yalan,
1)
For inbound, we can use `ovs-vsctl list qos` and `ovs-vsctl list queue` to check them from the openvswitch side; the values can be found
in other_config. Inbound is in kilobytes when QoS is set with `virsh domiftune ...`, while ovs stores it in bits. Therefore, when inbound.average
is set to 100, the corresponding value in ovs will be 819200 (100 × 1024 × 8).
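To make the arithmetic explicit, here is a minimal sketch of the inbound conversion. This is not libvirt's actual code; the function name is mine, it only illustrates the kilobyte-to-bit factor described above:

```python
def inbound_kbytes_to_ovs_bits(kbytes_per_sec):
    # libvirt inbound.* values are in kilobytes/s, while the ovs
    # QoS other_config rate is in bits/s, hence the factor 1024 * 8.
    return kbytes_per_sec * 1024 * 8

print(inbound_kbytes_to_ovs_bits(100))  # 819200, matching the ovs value above
```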
2)
For outbound, it is in kilobytes in libvirt, while the ingress_policing_* fields in the ovs interface table are in kilobits. That is why outbound.burst set to 256 shows up as ingress_policing_burst 2048 (256 × 8).
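The outbound conversion can be sketched the same way. Again, this is only an illustration of the unit factor, not libvirt's implementation, and the function name is mine:

```python
def outbound_kbytes_to_ovs_kbits(kbytes_per_sec):
    # libvirt outbound.* values are in kilobytes/s, while the ovs
    # ingress_policing_* fields are in kilobits/s, hence the factor 8.
    return kbytes_per_sec * 8

print(outbound_kbytes_to_ovs_kbits(100))  # 800  -> ingress_policing_rate
print(outbound_kbytes_to_ovs_kbits(256))  # 2048 -> ingress_policing_burst
```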
3)
OVS uses tc to set QoS, so the output you see from the tc command is expected.
This patch is to unify the qos control and query on ovs ports.
The conversion explanation is added in this patch:
https://listman.redhat.com/archives/libvir-list/2021-August/msg00422.html

And there are 6 follow-up patches to fix some bugs. See
https://listman.redhat.com/archives/libvir-list/2021-August/msg00423.html

-------
Best Regards,
Jinsheng Zhang

From: Yalan Zhang [mailto:yalzhang@xxxxxxxxxx]
Hi Jinsheng,

I have tested the patch and have some questions, could you please help to confirm?
1) For inbound, how to check it from the openvswitch side? tc will still show the statistics, is that expected?
2) For outbound, the peak is ignored. I just can not understand the "ingress_policing_burst: 2048", how can it come from the setting "outbound.burst : 256"?
3) Is the output from tc command expected?
Test inbound:
1. start vm with setting as below:
<interface type='bridge'>
  <bandwidth>
    <inbound average='100' peak='200' burst='256'/>
  </bandwidth>
  ...
</interface>

2. # virsh domiftune rhel vnet5
inbound.average: 100
inbound.peak   : 200
inbound.burst  : 256
inbound.floor  : 0
outbound.average: 0
outbound.peak  : 0
outbound.burst : 0

# ip l
17: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb master ovs-system state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:4d:43:5a brd ff:ff:ff:ff:ff:ff

# ovs-vsctl show interface
...
ingress_policing_burst: 0
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 0
...
name : vnet5

# tc -d class show dev vnet5
class htb 1:1 parent 1:fffe prio 0 quantum 10240 rate 819200bit ceil 1638Kbit linklayer ethernet burst 256Kb/1 mpu 0b cburst 256Kb/1 mpu 0b level 0
class htb 1:fffe root rate 1638Kbit ceil 1638Kbit linklayer ethernet burst 1499b/1 mpu 0b cburst 1499b/1 mpu 0b level 7

# tc -d filter show dev vnet5 parent ffff:
(no outputs)

For outbound:
# virsh dumpxml rhel | grep /bandwidth -B2
<bandwidth>
  <outbound average='100' peak='200' burst='256'/>
</bandwidth>

# virsh domiftune rhel vnet9
inbound.average: 0
inbound.peak   : 0
inbound.burst  : 0
inbound.floor  : 0
outbound.average: 100
outbound.peak  : 200
outbound.burst : 256

# ovs-vsctl list interface
ingress_policing_burst: 2048
ingress_policing_kpkts_burst: 0
ingress_policing_kpkts_rate: 0
ingress_policing_rate: 800
...

# tc -d filter show dev vnet9 parent ffff:
filter protocol all pref 49 basic chain 0
filter protocol all pref 49 basic chain 0 handle 0x1
 action order 1: police 0x1 rate 800Kbit burst 256Kb mtu 64Kb action drop/pipe overhead 0b linklayer unspec
 ref 1 bind 1

# tc -d class show dev vnet9
(no outputs)
On Mon, Jul 12, 2021 at 3:43 PM Michal Prívozník <mprivozn@xxxxxxxxxx> wrote:
On 7/9/21 3:31 PM, Jinsheng Zhang (张金生)-云服务集团 wrote: