Re: MDRAID NVMe performance question, but I don't know what I don't know

Hi James,

On 1/11/22 11:03 AM, Finlayson, James M CIV (USA) wrote:
Hi,
Sorry this is a long read. If you want to get to the gist of it, look for "<KEY>" for key points. I'm having some trouble finding information on troubleshooting mdraid performance issues.

The latest "rathole" I'm going down is that I have two identically configured mdraids, one per NUMA node, on a dual-socket AMD Rome with "NUMAs per socket" set to 1 in the BIOS. Things are cranking with a 64K blocksize, but I have a substantial disparity between NUMA0's mdraid and NUMA1's.

[trim /]

Is there any chance your NVMe devices are installed asymmetrically across your PCIe buses?

try:

# lspci -tv

That might be illuminating. In my office server, each PCIe slot is routed through one of the two sockets. The slots routed through socket 1 simply don't work when the second processor isn't installed, and devices in a socket 0 slot have to route through that CPU whenever the other CPU talks to them, and vice versa.
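If you want to cross-check the topology from software, the per-device numa_node attribute in sysfs is handy. This is just a sketch, assuming your controllers show up as /sys/class/nvme/nvme0, nvme1, ... as on recent kernels; it prints each controller's PCI address and the NUMA node it hangs off of (no root needed to read sysfs):

    for c in /sys/class/nvme/nvme[0-9]*; do
        echo "$(basename "$c"): pci=$(basename "$(readlink -f "$c/device")") numa_node=$(cat "$c/device/numa_node")"
    done

If the members of the slower array all report the remote node (or -1), that asymmetry would line up with the disparity you're seeing. lstopo from hwloc will draw the same picture graphically if you have it installed.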

Phil


