Hello,

On Fri, 07 Mar 2025 18:36:13 +0000
David Hajes <d.hajes29a@xxxxx> wrote:

> I have issues with RAID5 running on post-2020 14TB drives.
>
> I am getting max writing speeds of 220 MB/s.

What about read speeds? Do you get much more, or are they clamped to
the same ballpark? To avoid waiting for a full resync just to check this
(or various other settings), you can create the array with
--assume-clean.

In case reads are also limited to the same value, I'd suspect a PCIe
bandwidth problem, such as the HBA getting choked by a 2.5 GT/s x1 link
for whatever reason. Check the link status in "lspci -vvv".

> I have played with chunk size... default 512k-2MB... no difference
>
> "Read-ahead" set for the md0 virtual disk
>
> NCQ disabled - queue depth set to 1 for all physical drives
>
> I have basically tried every suggestion on the famous ArchWiki.

Do you use a write-intent bitmap, and what is its chunk size? Try
without one briefly, to see whether that is the issue. For production
use, increase the bitmap chunk size instead and see if that helps.

> Initial resync drops to 130 MB/s.

Are your drives SMR or CMR? It is common for SMR drives to write
quickly for a short while and then slow down, as they have to do their
housekeeping while new writes keep arriving. SMR drives are not
recommended for RAID.

> Is it possible this weird issue is linked to the HDD timeout described
> here:
> https://archive.kernel.org/oldwiki/raid.wiki.kernel.org/index.php/Timeout_Mismatch.html

No.

--
With respect,
Roman
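
P.S. In case concrete commands are useful, here is roughly the test I
had in mind. The device names, member count and chunk size below are
only examples; substitute your own.

  # Throw-away array without the initial resync, for benchmarking only
  # (parity is left inconsistent, so do not keep real data on it):
  mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=512 \
        --assume-clean /dev/sd[bcde]

  # Raw sequential read from the array, bypassing the page cache:
  dd if=/dev/md0 of=/dev/null bs=1M count=16384 iflag=direct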
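
For the PCIe check, the interesting fields in "lspci -vvv" are LnkCap
(what the HBA and slot can do) versus LnkSta (what was actually
negotiated); the bus address below is an example.

  # Run as root for the full dump; replace 01:00.0 with your HBA:
  lspci -vvv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'

If LnkSta reports something like "Speed 2.5GT/s, Width x1" while LnkCap
allows more, the controller link, not the drives, is the bottleneck.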
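
For the write-intent bitmap, the relevant mdadm knobs look roughly like
this (again with example device names; 128M is just a starting point):

  # See whether a bitmap is in use and how it is configured:
  mdadm --detail /dev/md0
  cat /proc/mdstat

  # Drop the bitmap temporarily for a test run:
  mdadm --grow /dev/md0 --bitmap=none

  # For production, re-add it with a larger chunk instead:
  mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=128M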