On Thu, Aug 18, 2011 at 7:58 PM, Gilboa Davara <gilboad@xxxxxxxxx> wrote:
> On Thursday, August 18, 2011, Dan Track <dan.track@xxxxxxxxx> wrote:
>> Hi,
>>
>> I've been given a mask of "4040" to set as the smp_affinity for the
>> eth0 (irq 83), eth1 (irq 91), eth6 (irq 99) and eth8 (irq 131) cards in
>> my box. I'm struggling to understand how the mask "4040" relates to a
>> cpu, i.e. the range of cpus I have is CPU0-CPU15 in /proc/interrupts.
>> Can someone please describe the process to map this value to a
>> processor?
>>
>> Currently I have 16 processors, i.e. 4 cores on each cpu.
>>
>> Thanks for your help.

Sorry for the empty reply. Pressed send too soon.

Say you have eth0, which uses IRQ 121, and you want to limit the IRQ
handling for this device to CPU0. In this case you light up bit 0, i.e.
an smp_affinity of 0x01.

Say you want to limit the IRQ handling for the same device to CPU cores
0 and 7: you light up the first and 8th bits, i.e. an smp_affinity value
of 0x81.

In general you'd want to *reduce* the number of CPUs servicing IRQs from
each device, and use multi-queue (whenever possible) to distribute the
load (or IRQs) generated by a single card across multiple CPUs.

In general, I dedicate a single CPU core per 1GbE link and 4 CPU cores
(using multi-queue) per 10GbE link. However, this greatly depends on the
type of traffic / packet size and application used.

Hope I helped,

- Gilboa
--
users mailing list
users@xxxxxxxxxxxxxxxxxxxxxxx
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
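Following the bit-mapping rule described above (bit N set means CPU N may
service the IRQ), the original poster's mask "4040" is hex 0x4040, which
has bits 6 and 14 set, so it pins the IRQ to CPU6 and CPU14. A minimal
shell sketch of that decoding (the `decode_mask` helper is just for
illustration, not a standard tool):

```shell
#!/bin/sh
# Decode an smp_affinity hex mask into the list of CPUs it allows.
# Bit N set in the mask => CPU N may service the interrupt.
decode_mask() {
    mask=$((0x$1))   # interpret the argument as hexadecimal
    cpu=0
    cpus=""
    while [ "$mask" -ne 0 ]; do
        # If the low bit is set, this CPU number is in the mask.
        if [ $((mask & 1)) -eq 1 ]; then
            cpus="$cpus $cpu"
        fi
        mask=$((mask >> 1))
        cpu=$((cpu + 1))
    done
    echo "CPUs:$cpus"
}

decode_mask 4040   # -> CPUs: 6 14
decode_mask 81     # -> CPUs: 0 7
decode_mask 01     # -> CPUs: 0
```

To apply such a mask you would write it (as hex, no 0x prefix) to the
per-IRQ file, e.g. `echo 4040 > /proc/irq/83/smp_affinity` as root for
the eth0 example above.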