Re: raid10, far layout initial sync slow + XFS question

For statistics, here is the same setup with everything identical except the layout:

offset: around 700-780 MB/sec
created: mdadm --create /dev/md3 --run -b none --level=10 --layout=o2
--chunk=16 --raid-devices=4 /dev/nvme0n1 /dev/nvme4n1 /dev/nvme3n1
/dev/nvme5n1

md3 : active raid10 nvme5n1[3] nvme3n1[2] nvme4n1[1] nvme0n1[0]
      7501212288 blocks super 1.2 16K chunks 2 offset-copies [4/4] [UUUU]
      [>....................]  resync =  1.5% (119689152/7501212288)
finish=156.3min speed=786749K/sec

near: around 700 MB/sec
created: mdadm --create /dev/md3 --run -b none --level=10 --layout=n2
--chunk=16 --raid-devices=4 /dev/nvme0n1 /dev/nvme4n1 /dev/nvme3n1
/dev/nvme5n1

md3 : active raid10 nvme5n1[3] nvme3n1[2] nvme4n1[1] nvme0n1[0]
      7501212320 blocks super 1.2 16K chunks 2 near-copies [4/4] [UUUU]
      [>....................]  resync =  0.5% (42373104/7501212320)
finish=175.7min speed=707262K/sec
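For anyone collecting these numbers across runs, the speed figure can be pulled out of /proc/mdstat programmatically; a minimal sketch (the grep pattern assumes the standard "speed=NNNNNNK/sec" field shown in the outputs above):

```shell
# Extract the current resync speed (in K/sec) from mdstat-style text.
mdstat_speed() {
  printf '%s\n' "$1" | grep -oE 'speed=[0-9]+K/sec' | grep -oE '[0-9]+'
}

sample='      [>....................]  resync =  1.5% (119689152/7501212288)
finish=156.3min speed=786749K/sec'
mdstat_speed "$sample"   # → 786749
```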

On Sat, Sep 2, 2023 at 3:23 AM CoolCold <coolthecold@xxxxxxxxx> wrote:
>
> Good day!
>
> I have 4 NVMe new drives which are planned to replace 2 current NVMe
> drives, serving primarily as MYSQL storage, Hetzner dedicated server
> AX161 if it matters. Drives are SAMSUNG MZQL23T8HCLS-00A07, 3.8TB .
> System - Ubuntu 20.04 / 5.4.0-153-generic #170-Ubuntu
>
> The strange thing I observe is the initial RAID sync speed.
> Created with:
> mdadm --create /dev/md3 --run -b none --level=10 --layout=f2
> --chunk=16 --raid-devices=4 /dev/nvme0n1 /dev/nvme4n1 /dev/nvme3n1
> /dev/nvme5n1
>
> sync speed:
>
> md3 : active raid10 nvme5n1[3] nvme3n1[2] nvme4n1[1] nvme0n1[0]
>       7501212288 blocks super 1.2 16K chunks 2 far-copies [4/4] [UUUU]
>       [=>...................]  resync =  6.2% (466905632/7501212288)
> finish=207.7min speed=564418K/sec
>
> If I create a RAID1 with just two drives, the sync speed is around
> 3.2 GByte per second; the sysctl is tuned, of course:
> dev.raid.speed_limit_max = 8000000
>
> Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5]
> [raid4] [raid10]
> md70 : active raid1 nvme4n1[1] nvme5n1[0]
>       3750606144 blocks super 1.2 [2/2] [UU]
>       [>....................]  resync =  1.5% (58270272/3750606144)
> finish=19.0min speed=3237244K/sec
>
> From iostat, the drives are basically doing just reads, no writes.
> A quick test with fio on a single mounted drive shows it can do around
> 30k IOPS with 16 KB blocks ( fio --rw=write --ioengine=sync --fdatasync=1
> --directory=test-data --size=8200m --bs=16k --name=mytest ), so the
> issue is likely not the drives themselves.
>
> Not sure where to look further, please advise.
>
> --
> Best regards,
> [COOLCOLD-RIPN]
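As a sanity check, the finish estimates in the mdstat outputs above follow directly from (total blocks - done blocks) / speed; a minimal sketch using the far-layout numbers from the original message (blocks are 1K units, speed in K/sec):

```shell
# Remaining resync time in minutes, from /proc/mdstat figures.
remaining_min() {
  awk -v done="$1" -v total="$2" -v speed="$3" \
    'BEGIN { printf "%.1f\n", (total - done) / speed / 60 }'
}

remaining_min 466905632 7501212288 564418   # → 207.7, matching finish=207.7min
```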



-- 
Best regards,
[COOLCOLD-RIPN]


