I want to know some details about MSI queues.
I have looked into many online references for MSI, but they mostly point to introductory material.
I am looking for the rule/decision criteria used to distribute traffic across different MSI queues.
Can we configure MSI queue traffic distribution at runtime?
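From what I have read, this distribution is usually done by RSS (Receive Side Scaling): the NIC hashes packet headers and uses an indirection table to pick a queue. If the driver supports it, ethtool seems to be the runtime knob. This is just a sketch of what I believe should work (the 8-queue count is from my setup):

  ethtool -x eth0                # dump the current RSS indirection table
  ethtool -X eth0 equal 8        # spread hash buckets evenly across all 8 queues
  ethtool -X eth0 weight 2 1 1 1 1 1 1 1   # or bias more buckets toward queue 0

Is this the right mechanism, or is something else at play here?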
On my machine I can see the following data in /proc/interrupts:
          CPU0      CPU1      CPU2      CPU3
 64:    218601     13334     25585      7505   PCI-MSI-edge   eth0-0
 65:   5717754   6491556    501052    729091   PCI-MSI-edge   eth0-1
 66:    844740   9897919        96       230   PCI-MSI-edge   eth0-2
 67:    222750    436800       403   1205846   PCI-MSI-edge   eth0-3
 68:       777   1502281    314536    100125   PCI-MSI-edge   eth0-4
 69:    482616    431247   2164970   1627540   PCI-MSI-edge   eth0-5
 70:    323501   1433873     81970     18359   PCI-MSI-edge   eth0-6
 71:     37298     35844      8271     18516   PCI-MSI-edge   eth0-7
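To watch how the interrupts spread across the queues while I generate traffic, I use:

  watch -n 1 'grep eth0- /proc/interrupts'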
When I send UDP packets to IP X.X.X.100, the interrupts happen on eth0-1, and with IP X.X.X.101 they happen on eth0-5.
Similarly, for IPs X.X.X.102 and X.X.X.104 the interrupts happen on distinct MSI queues... A modulo-4 operation seems to be the criterion here.
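If RSS is what decides this, the header fields fed into the hash would explain the mapping. I believe this can be inspected with ethtool (assuming the driver supports it):

  ethtool -n eth0 rx-flow-hash udp4   # show which UDP/IPv4 header fields feed the RSS hash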
Can we have VLAN-based MSI queue rules?
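I came across ethtool's ntuple/flow-steering rules, which seem to allow matching on the VLAN tag, though whether this works depends on the NIC/driver. A sketch (VLAN 100 and queue 3 are just example values):

  ethtool -K eth0 ntuple on                         # enable ntuple flow steering
  ethtool -N eth0 flow-type ip4 vlan 100 action 3   # steer VLAN 100 IPv4 traffic to queue 3

Is this the intended way to get VLAN-based steering?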
When I set the core affinity with /proc/irq/<interrupt no>/smp_affinity, it changes on its own when I send a burst of UDP packets with the above IPs as destination. Is this expected?
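My guess is that irqbalance is rewriting the affinity behind my back. This is what I am trying in order to rule that out (the IRQ number 65 and the CPU mask are from my setup above):

  service irqbalance stop              # stop the daemon that periodically rewrites smp_affinity
  echo 2 > /proc/irq/65/smp_affinity   # CPU bitmask: pin eth0-1's IRQ to CPU1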
Thanks
Mukesh