[root@hornet04 ~]# lstopo -v | egrep -i 'numa|pci|bridge'
NUMANode L#0 (P#0 local=263873404KB total=263873404KB)
  HostBridge L#0 (buses=0000:[00-06])
    PCIBridge L#1 (busid=0000:00:01.3 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[03-03])
      PCI L#0 (busid=0000:03:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=15)
    PCIBridge L#2 (busid=0000:00:01.4 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[04-04])
      PCI L#1 (busid=0000:04:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=16)
    PCIBridge L#3 (busid=0000:00:01.5 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[05-05])
      PCI L#2 (busid=0000:05:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=17)
    PCIBridge L#4 (busid=0000:00:01.6 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[06-06])
      PCI L#3 (busid=0000:06:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=18)
  HostBridge L#5 (buses=0000:[20-27])
    PCIBridge L#6 (busid=0000:20:01.1 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[23-23])
      PCI L#4 (busid=0000:23:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=7)
    PCIBridge L#7 (busid=0000:20:01.2 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[24-24])
      PCI L#5 (busid=0000:24:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=8-1)
    PCIBridge L#8 (busid=0000:20:01.3 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[25-25])
      PCI L#6 (busid=0000:25:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=9)
    PCIBridge L#9 (busid=0000:20:01.4 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[26-26])
      PCI L#7 (busid=0000:26:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=10-1)
    PCIBridge L#10 (busid=0000:20:03.1 id=1022:1483 class=0604(PCIBridge) link=15.75GB/s buses=0000:[27-27])
      PCI L#8 (busid=0000:27:00.0 id=15b3:1017 class=0200(Ethernet) link=15.75GB/s PCISlot=1)
      PCI L#9 (busid=0000:27:00.1 id=15b3:1017 class=0200(Ethernet) link=15.75GB/s PCISlot=1)
  HostBridge L#11 (buses=0000:[40-45])
    PCIBridge L#12 (busid=0000:40:01.1 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[43-43])
      PCI L#10 (busid=0000:43:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=3)
    PCIBridge L#13 (busid=0000:40:01.2 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[44-44])
      PCI L#11 (busid=0000:44:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=4)
    PCIBridge L#14 (busid=0000:40:01.3 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[45-45])
      PCI L#12 (busid=0000:45:00.0 id=15b3:1017 class=0200(Ethernet) link=7.88GB/s PCISlot=10)
      PCI L#13 (busid=0000:45:00.1 id=15b3:1017 class=0200(Ethernet) link=7.88GB/s PCISlot=10)
  HostBridge L#15 (buses=0000:[60-65])
    PCIBridge L#16 (busid=0000:60:03.2 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[64-64])
      PCI L#14 (busid=0000:64:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=1-1)
    PCIBridge L#17 (busid=0000:60:03.3 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[65-65])
      PCI L#15 (busid=0000:65:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=2)
    PCIBridge L#18 (busid=0000:60:05.2 id=1022:1483 class=0604(PCIBridge) link=0.50GB/s buses=0000:[61-61])
      PCI L#16 (busid=0000:61:00.1 id=102b:0538 class=0300(VGA) link=0.50GB/s)
NUMANode L#1 (P#1 local=264165280KB total=264165280KB)
  HostBridge L#19 (buses=0000:[a0-a6])
    PCIBridge L#20 (busid=0000:a0:03.3 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[a3-a3])
      PCI L#17 (busid=0000:a3:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=31)
    PCIBridge L#21 (busid=0000:a0:03.4 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[a4-a4])
      PCI L#18 (busid=0000:a4:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=32)
    PCIBridge L#22 (busid=0000:a0:03.5 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[a5-a5])
      PCI L#19 (busid=0000:a5:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=33)
    PCIBridge L#23 (busid=0000:a0:03.6 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[a6-a6])
      PCI L#20 (busid=0000:a6:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=34)
  HostBridge L#24 (buses=0000:[c0-c8])
    PCIBridge L#25 (busid=0000:c0:01.1 id=1022:1483 class=0604(PCIBridge) link=3.94GB/s buses=0000:[c3-c3])
      PCI L#21 (busid=0000:c3:00.0 id=1b4b:2241 class=0108(NVMExp) link=3.94GB/s PCISlot=8)
    PCIBridge L#26 (busid=0000:c0:03.1 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[c5-c5])
      PCI L#22 (busid=0000:c5:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=23)
    PCIBridge L#27 (busid=0000:c0:03.2 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[c6-c6])
      PCI L#23 (busid=0000:c6:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=24)
    PCIBridge L#28 (busid=0000:c0:03.3 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[c7-c7])
      PCI L#24 (busid=0000:c7:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=25)
    PCIBridge L#29 (busid=0000:c0:03.4 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[c8-c8])
      PCI L#25 (busid=0000:c8:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=26)
  HostBridge L#30 (buses=0000:[e0-e6])
    PCIBridge L#31 (busid=0000:e0:03.1 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[e5-e5])
      PCI L#26 (busid=0000:e5:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=21)
    PCIBridge L#32 (busid=0000:e0:03.2 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[e6-e6])
      PCI L#27 (busid=0000:e6:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=22)
    PCIBridge L#33 (busid=0000:e0:03.3 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[e3-e3])
      PCI L#28 (busid=0000:e3:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=19)
    PCIBridge L#34 (busid=0000:e0:03.4 id=1022:1483 class=0604(PCIBridge) link=7.88GB/s buses=0000:[e4-e4])
      PCI L#29 (busid=0000:e4:00.0 id=144d:a824 class=0108(NVMExp) link=7.88GB/s PCISlot=20)
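
To map each NVMe controller straight to its NUMA node without eyeballing the whole tree, something along these lines should work on a stock sysfs layout (treat it as a sketch; the controller names are just whatever enumerated on this box):

# for c in /sys/class/nvme/nvme*; do echo "$(basename $c): $(basename $(readlink -f $c/device)) numa_node=$(cat $c/device/numa_node)"; done

Each busid in the tree also exposes the same attribute directly, e.g. "cat /sys/bus/pci/devices/0000:03:00.0/numa_node" should report 0 for the drives hanging off NUMA node 0 and 1 for the ones under the a0/c0/e0 host bridges.
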
-----Original Message-----
From: Phil Turmel <philip@xxxxxxxxxx>
Sent: Tuesday, January 11, 2022 3:35 PM
To: Finlayson, James M CIV (USA) <james.m.finlayson4.civ@xxxxxxxx>; linux-raid@xxxxxxxxxxxxxxx
Subject: [Non-DoD Source] Re: MDRAID NVMe performance question, but I don't know what I don't know

Hi James,

On 1/11/22 11:03 AM, Finlayson, James M CIV (USA) wrote:
> Hi,
> Sorry this is a long read.  If you want to get to the gist of it, look for "<KEY>" for key points.  I'm having some issues with where to
> find information to troubleshoot mdraid performance issues.  The latest "rathole" I'm going down is that I have two identically configured
> mdraids, 1 per NUMA node on a dual socket AMD Rome with "numas per socket" set to 1 in the BIOS.  Things are cranking with a 64K blocksize
> but I have a substantial disparity between NUMA0's mdraid and NUMA1's.

[trim /]

Is there any chance your NVMe devices are installed asymmetrically on your PCIe bus(ses)?

try:

# lspci -tv

Might be illuminating.

In my office server, the PCIe slots are routed through one of the two sockets.  The slots routed through socket 1 simply don't work when the second processor is not installed.
Devices in a socket 0 slot have to route through that CPU when the other CPU talks to them, and vice versa.

Phil
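
One way to see whether the cross-socket routing Phil describes accounts for the NUMA0 vs NUMA1 gap is to run the identical job pinned to each node in turn and compare. A rough sketch, with a placeholder md device and fio parameters rather than the exact job used here:

# numactl --cpunodebind=0 --membind=0 fio --name=node0 --filename=/dev/md0 --rw=randread --bs=64k --direct=1 --ioengine=libaio --iodepth=32 --numjobs=8 --runtime=60 --time_based --group_reporting
# numactl --cpunodebind=1 --membind=1 fio --name=node1 --filename=/dev/md0 --rw=randread --bs=64k --direct=1 --ioengine=libaio --iodepth=32 --numjobs=8 --runtime=60 --time_based --group_reporting

If the node-local run is consistently faster against the same array, at least part of the disparity is inter-socket traffic rather than md itself.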