Hi,
On 2023/09/02 4:23, CoolCold wrote:
Good day!
I have 4 NVMe new drives which are planned to replace 2 current NVMe
drives, serving primarily as MYSQL storage, Hetzner dedicated server
AX161 if it matters. Drives are SAMSUNG MZQL23T8HCLS-00A07, 3.8TB .
System - Ubuntu 20.04 / 5.4.0-153-generic #170-Ubuntu
So the strange thing I observe is its initial RAID sync speed.
Created with:
mdadm --create /dev/md3 --run -b none --level=10 --layout=f2
--chunk=16 --raid-devices=4 /dev/nvme0n1 /dev/nvme4n1 /dev/nvme3n1
/dev/nvme5n1
sync speed:
md3 : active raid10 nvme5n1[3] nvme3n1[2] nvme4n1[1] nvme0n1[0]
7501212288 blocks super 1.2 16K chunks 2 far-copies [4/4] [UUUU]
[=>...................] resync = 6.2% (466905632/7501212288)
finish=207.7min speed=564418K/sec
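As an aside, the progress and speed fields can be scraped from
/proc/mdstat for logging; a tiny sketch (the parse_resync helper name
is my own):

```shell
# Print the "resync = X%" and "speed=NK/sec" fields from mdstat-style
# text on stdin, joined onto one line.
parse_resync() {
    grep -oE 'resync = +[0-9.]+%|speed=[0-9]+K/sec' | tr '\n' ' '
}

# Usage: parse_resync < /proc/mdstat
```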
Is there any read/write going to the array? For raid10, normal I/O
can't run concurrently with sync I/O, so bandwidth will be poor if
both exist, especially on old kernels.
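To check for that, a quick sketch (assuming iostat from the sysstat
package is installed; device names taken from your mail):

```shell
# Watch per-device I/O while the resync runs; sustained writes (w/s)
# beyond the resync's own pattern indicate competing normal I/O.
iostat -x /dev/nvme0n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 1 5

# Under competing I/O, md throttles the resync toward this floor:
sysctl dev.raid.speed_limit_min
```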
Thanks,
Kuai
If I try to create a RAID1 with just two drives, the sync speed is
around 3.2 GByte per second; sysctl is tuned, of course:
dev.raid.speed_limit_max = 8000000
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
[raid4] [raid10]
md70 : active raid1 nvme4n1[1] nvme5n1[0]
3750606144 blocks super 1.2 [2/2] [UU]
[>....................] resync = 1.5% (58270272/3750606144)
finish=19.0min speed=3237244K/sec
From iostat, drives are basically doing just READs, no writes.
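For completeness, besides the global sysctls there are per-array
overrides in sysfs; a sketch for inspecting and raising them (md3 as
above, values in KB/s; the 2000000 floor is just an example value):

```shell
# Global throttles: md slows toward the min when normal I/O is present
# and never exceeds the max (both in KB/s).
sysctl -w dev.raid.speed_limit_min=2000000
sysctl -w dev.raid.speed_limit_max=8000000

# Per-array overrides, if set, take precedence over the globals:
cat /sys/block/md3/md/sync_speed_min
cat /sys/block/md3/md/sync_speed_max
echo 2000000 > /sys/block/md3/md/sync_speed_min
```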
Quick tests with fio on a single mounted drive show it can do around
30k IOPS with 16 KB blocks ( fio --rw=write --ioengine=sync
--fdatasync=1 --directory=test-data --size=8200m --bs=16k
--name=mytest ), so the issue is likely not the drives themselves.
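To get closer to what the resync reader actually does (large
sequential I/O at depth, no per-write fdatasync), a raw-device fio
sketch might be more telling; read-only here to stay non-destructive,
and the device name is only an example:

```shell
# Sequential 16k reads at queue depth 32 straight from the raw device;
# --readonly guards against accidental writes to the block device.
fio --name=seqread --filename=/dev/nvme0n1 --rw=read \
    --ioengine=libaio --direct=1 --iodepth=32 --bs=16k \
    --runtime=30 --time_based --readonly
```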
Not sure where to look further, please advise.