Re: RAID 5, 10 modern post 2020 drives, slow speeds

I put an external bitmap on a RAID1 SSD array and that seemed to speed
up my writes.  I am not sure whether external bitmaps will continue to
be supported, as I have seen notes about them that I don't fully
understand.  I also have to reapply the external bitmap on each reboot,
which carries some risk of data loss if the machine crashes while the
bitmap is dirty.

This is the command I used to set it up:
mdadm --grow --force --bitmap=/mdraid-bitmaps/md15-bitmap.img /dev/md15
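
For reference, here is a rough, untested sketch of how the bitmap could
be reattached automatically at boot via a oneshot systemd unit (the unit
name and ordering are assumptions; the paths match my setup above):

  # /etc/systemd/system/md15-bitmap.service (hypothetical unit name)
  [Unit]
  Description=Reattach external write-intent bitmap for /dev/md15
  # The bitmap file lives on a mounted filesystem, so wait for local mounts
  Requires=local-fs.target
  After=local-fs.target mdmonitor.service

  [Service]
  Type=oneshot
  ExecStart=/usr/sbin/mdadm --grow --force --bitmap=/mdraid-bitmaps/md15-bitmap.img /dev/md15

  [Install]
  WantedBy=multi-user.target

Afterwards you can confirm the bitmap is attached with
"mdadm --examine-bitmap /mdraid-bitmaps/md15-bitmap.img" or by looking
for the "bitmap:" line for md15 in /proc/mdstat.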

On Fri, Mar 7, 2025 at 1:25 PM David Hajes <d.hajes29a@xxxxx> wrote:
>
> Hi Roman,
>
> Thanks for the reply.
>
> All drives are SATA3 CMR.
>
> Link speed for the LSI SAS2308 is 8 GT/s x8.
>
> Intel C224 chipset SATA controller
>
> Write speed is always the same, no matter the RAID level, whether on the C224 chipset or the SAS connection.
>
> Read test for RAID10 is 414 MB/s.
>
> I was hoping for higher write speeds. What is interesting is that RAID5 with default settings does 220 MB/s while RAID10 struggles at 170 MB/s.
>
> There is something horribly wrong :o)
>
> So the bitmap seems to be on. Mdstat says "bitmap: 0/204 pages, 65M chunk".
>
> With --bitmap=none, write speed is 170 MB/s on RAID10 with a 1 MB chunk.
>
> With an internal bitmap and a 128M bitmap chunk, write speed is also 170 MB/s.
>
>
> -------- Original Message --------
> On 07/03/2025 19:47, Roman Mamedov <rm@xxxxxxxxxxx> wrote:
>
> >  Hello,
> >
> >  On Fri, 07 Mar 2025 18:36:13 +0000
> >  David Hajes <d.hajes29a@xxxxx> wrote:
> >
> >  > I have issues with RAID5 running on post-2020 14TB drives.
> >  >
> >  > I am getting max write speeds of 220 MB/s.
> >
> >  What about read speeds: do you get much more, or are they clamped in the same ballpark?
> >
> >  To not wait for a full resync just to check this (or various other settings),
> >  you can create the array with --assume-clean.
> >
> >  In case reads are also limited to the same value, I'd suspect PCIe bandwidth
> >  issues, such as the HBA getting choked by 2.5 GT/s x1 for whatever reason.
> >  Check the bandwidth in "lspci -vvv".
> >
> >  > I have played with chunk size... default 512k up to 2MB... no difference
> >  >
> >  > "Read-ahead" set for md0 virtual disk
> >  >
> >  > NCQ disabled - queue depth set to 1 for all physical drives
> >  >
> >  > I have basically tried every suggestion on famous ArchWiki.
> >
> >  Do you use the Write-Intent bitmap, and what is its chunk size?
> >  Try without one briefly, to see if this was the issue.
> >  For production use, increase the bitmap chunk size and see if that helps.
> >
> >  > Initial resync drops to 130 MB/s
> >
> >  Are your drives SMR or CMR? For SMR drives it is common to write quickly
> >  at first and then slow down, as they need to do their housekeeping at the
> >  same time as new writes arrive. SMR drives are not recommended for RAID.
> >
> >  > Is it possible this weird issue is linked to HDD timeout described there >>> https://archive.kernel.org/oldwiki/raid.wiki.kernel.org/index.php/Timeout_Mismatch.html
> >
> >  No.
> >
> >  --
> >  With respect,
> >  Roman
> >
>




