> -----Original Message-----
> From: Sreekanth Reddy [mailto:sreekanth.reddy@xxxxxxxxxxxx]
> Sent: Friday, August 19, 2016 6:45 AM
> To: Elliott, Robert (Persistent Memory) <elliott@xxxxxxx>
> Subject: Re: Observing Softlockup's while running heavy IOs
> ...
> Yes I am also observing that all the interrupts are routed to one
> CPU. But still I observing softlockups (sometime hardlockups)
> even when I set rq_affinity to 2.

That'll ensure the block layer's completion handling is done there, but
not your driver's interrupt handler (which precedes the block layer
completion handling).

> Is their any way to route the interrupts the same CPUs which has
> submitted the corresponding IOs?
> or
> Is their any way/option in the irqbalance/kernel which can route
> interrupts to CPUs (enabled in affinity_hint) in round robin manner
> after specific time period.

Ensure your driver creates one MSI-X interrupt per CPU core, uses that
interrupt for all submissions from that core, and reports that it would
like that interrupt to be serviced by that core in
/proc/irq/nnn/affinity_hint (a rough sketch of this setup is appended
below).

Even with hyperthreading, this needs to be based on the logical CPU
cores, not just the physical core or the physical socket. You can swamp
a logical CPU core as easily as a physical CPU core.

Then, provide an irqbalance policy script that honors the affinity_hint
for your driver, or turn off irqbalance and manually set
/proc/irq/nnn/smp_affinity to match the affinity_hint (a small helper
doing this is also sketched below). Some versions of irqbalance honor
the hints; some purposely don't and need to be overridden with a policy
script.

---
Robert Elliott, HPE Persistent Memory
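
A minimal sketch of the driver-side setup described above, assuming a
PCI driver built against a ~4.x-era kernel. The names here (struct
mydev, mydev_isr, MYDEV_MAX_VECTORS, the queues array) are illustrative
only, not taken from any particular driver:

	/*
	 * Rough sketch: allocate one MSI-X vector per online logical CPU
	 * and publish an affinity hint for each vector.  Hypothetical
	 * driver types and names; error handling kept minimal.
	 */
	#include <linux/kernel.h>
	#include <linux/pci.h>
	#include <linux/interrupt.h>
	#include <linux/cpumask.h>

	#define MYDEV_MAX_VECTORS	128	/* hypothetical hardware limit */

	struct mydev_queue {
		struct mydev *dev;
		int index;
	};

	struct mydev {
		struct pci_dev *pdev;
		struct msix_entry msix_entries[MYDEV_MAX_VECTORS];
		struct mydev_queue queues[MYDEV_MAX_VECTORS];
		int nvec;
	};

	static irqreturn_t mydev_isr(int irq, void *data)
	{
		/* per-queue completion handling would go here */
		return IRQ_HANDLED;
	}

	static int mydev_setup_msix(struct mydev *dev)
	{
		int nvec = min_t(int, num_online_cpus(), MYDEV_MAX_VECTORS);
		int i, rc;

		for (i = 0; i < nvec; i++)
			dev->msix_entries[i].entry = i;

		/* one MSI-X vector per online logical CPU, if the HW allows it */
		rc = pci_enable_msix_range(dev->pdev, dev->msix_entries, 1, nvec);
		if (rc < 0)
			return rc;
		nvec = rc;

		for (i = 0; i < nvec; i++) {
			dev->queues[i].dev = dev;
			dev->queues[i].index = i;

			rc = request_irq(dev->msix_entries[i].vector, mydev_isr,
					 0, "mydev", &dev->queues[i]);
			if (rc)
				goto fail;

			/*
			 * Publish the preferred CPU in /proc/irq/<nnn>/affinity_hint
			 * so irqbalance (or an admin script) can steer this vector
			 * to the logical CPU whose submissions it services.  The
			 * submission path is then expected to pick queue
			 * smp_processor_id() so I/O issued on CPU i completes on
			 * CPU i's vector.
			 */
			irq_set_affinity_hint(dev->msix_entries[i].vector,
					      cpumask_of(i));
		}
		dev->nvec = nvec;
		return 0;

	fail:
		while (--i >= 0) {
			irq_set_affinity_hint(dev->msix_entries[i].vector, NULL);
			free_irq(dev->msix_entries[i].vector, &dev->queues[i]);
		}
		pci_disable_msix(dev->pdev);
		return rc;
	}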
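
For the "turn off irqbalance and set smp_affinity by hand" option, a
minimal user-space sketch (a hypothetical standalone helper, not part
of any existing tool) that copies one IRQ's affinity_hint into its
smp_affinity, equivalent to
"cat /proc/irq/N/affinity_hint > /proc/irq/N/smp_affinity":

	/* Usage: ./set_affinity <irq-number>   (requires root) */
	#include <stdio.h>

	int main(int argc, char **argv)
	{
		char path[64], mask[256];
		FILE *f;

		if (argc != 2) {
			fprintf(stderr, "usage: %s <irq-number>\n", argv[0]);
			return 1;
		}

		/* read the hint the driver published */
		snprintf(path, sizeof(path), "/proc/irq/%s/affinity_hint", argv[1]);
		f = fopen(path, "r");
		if (!f || !fgets(mask, sizeof(mask), f)) {
			perror(path);
			return 1;
		}
		fclose(f);

		/* apply it as the actual affinity mask */
		snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", argv[1]);
		f = fopen(path, "w");
		if (!f || fputs(mask, f) == EOF) {
			perror(path);
			return 1;
		}
		fclose(f);
		return 0;
	}

It would need to be run once per vector (and re-run after CPU hotplug
or driver reload), which is the same coverage an irqbalance policy
script would give you automatically.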