Hi James,
On 1/11/22 11:03 AM, Finlayson, James M CIV (USA) wrote:
Hi,
Sorry, this is a long read. If you want to get the gist of it, look for "<KEY>" for the key points. I'm having trouble finding information for troubleshooting mdraid performance issues. The latest "rathole" I'm going down: I have two identically configured mdraids, one per NUMA node, on a dual-socket AMD Rome with "NUMAs per socket" set to 1 in the BIOS. Things are cranking along at a 64K block size, but there is a substantial disparity between NUMA0's mdraid and NUMA1's.
[trim /]
Is there any chance your NVMe devices are installed asymmetrically on
your PCIe bus(es)? Try:
# lspci -tv
That might be illuminating.
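Another quick check (a sketch only; the exact sysfs layout can vary by
kernel, and a value of -1 just means the platform didn't report a node)
is to ask sysfs which NUMA node each NVMe controller is attached to:
# grep . /sys/class/nvme/nvme*/device/numa_node
If all of the drives behind the slower array report the "wrong" node,
that alone would explain a lot of the disparity.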
In my office server, each PCIe slot is routed through one of the two
sockets. The slots routed through socket 1 simply don't work when the
second processor is not installed, and devices in a socket 0 slot have
to route through that CPU whenever the other CPU talks to them, and
vice versa.
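If the controllers do turn out to be spread evenly across the sockets,
another way to narrow it down (just a sketch; /dev/md0, /dev/md1, and
the fio options here are placeholders for your actual arrays and
workload) is to pin the benchmark to the node that owns each array and
compare:
# numactl --cpunodebind=0 --membind=0 fio --name=node0 --filename=/dev/md0 \
    --rw=read --bs=64k --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=30 --time_based --group_reporting
# numactl --cpunodebind=1 --membind=1 fio --name=node1 --filename=/dev/md1 \
    (same options)
If the gap shrinks when each test runs on its own node, you're looking
at cross-socket traffic rather than the arrays themselves.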
Phil