I think I've found some suboptimal behaviour in the iSCSI target code, but I'd like another opinion.
Just as a caveat, this behaviour was first seen on a CentOS 7 kernel, but looking at the code I think it'll behave the same in master.
Basically, the issue is that the iSCSI target code creates a pair of kernel threads (one for tx, one for rx) for each connection. Each pair gets affined to the same logical CPU.
The problem is that this affinity does not respect kernel boot parameters such as "isolcpus", "rcu_nocbs", or "irqaffinity". Instead, placement appears to start at cpu0 and increment by one for each successive pair of kernel threads.
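To make the pattern concrete, here is a toy userspace sketch of the placement policy as I understand it. This is not the actual kernel code; the function name and the connection numbering are illustrative assumptions, and the point is only that a simple round-robin over online CPUs never consults the isolation mask:

```python
# Toy model of the placement policy described above: each new iSCSI
# connection's tx/rx thread pair is pinned to the next online CPU in
# sequence, with no awareness of isolcpus/rcu_nocbs/irqaffinity.
# NOT kernel code; names and structure are illustrative only.

def assign_cpu(conn_id, online_cpus):
    """Round-robin: connection N lands on online_cpus[N % len]."""
    return online_cpus[conn_id % len(online_cpus)]

online = list(range(8))   # cpus 0-7 online
isolated = {2, 3}         # sysadmin passed isolcpus=2,3

placements = {c: assign_cpu(c, online) for c in range(8)}

# Connections 2 and 3 still land on the isolated CPUs 2 and 3,
# because the policy never looks at the isolation mask.
clobbered = [c for c, cpu in placements.items() if cpu in isolated]
print(placements)
print("pairs placed on isolated CPUs:", clobbered)
```

With 8 connections and 8 online CPUs, every CPU gets a thread pair, including the ones the sysadmin tried to isolate.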
This seems less than ideal. If the sysadmin has tried to ensure certain CPUs are available for high-performance/low-latency work, it seems odd for the kernel to arbitrarily stick a pair of I/O threads on them.
Am I missing something? Is there a way to set limits on where these threads are placed?
Thanks, Chris