Fw: Re: RAID 5, 10 modern post 2020 drives, slow speeds

Of course, but not by 150%.

A 4-drive RAID10 is theoretically supposed to run at 300-500 MB/s, not 120 MB/s.

Most modern drives do 150-250 MB/s from the inner tracks to the outer tracks.
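
A quick way to see where a given drive falls in that range is to read at both ends of the device; a minimal sketch, assuming /dev/sdX is the drive under test and is roughly 8 TB (the skip offset is illustrative, and iflag=direct bypasses the page cache):

dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct                # outer tracks (start of disk)
dd if=/dev/sdX of=/dev/null bs=1M count=1024 skip=7000000 iflag=direct   # inner tracks (near the end)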


-------- Original Message -------- 
On 13/03/2025 03:54, Dragan Milivojević wrote: 
Speed drops as you approach the end of the disk. 

On Wed, 12 Mar 2025 at 22:10, David Hajes <d.hajes29a@xxxxx> wrote: 
Update on the issue. I came across the "mismatch_cnt" stat after the initial resync and after an ordinary check/scrub. 

mismatch_cnt is supposed to be 0. My counter reached millions. I haven't found a definitive answer on whether a non-zero "mismatch_cnt" is bad or should be ignored. 

Allegedly, a high mismatch_cnt suggests HW or SW issues. I ran SMART tests; they reported no drive problems. 

I have tried running "repair". The repair ran at 400-500 MB/s, and mismatch_cnt is now 0. 
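
For anyone hitting the same thing: the counter and the scrub actions live in sysfs; a minimal sketch, assuming the array is /dev/md0 (substitute your own md device):

cat /sys/block/md0/md/mismatch_cnt            # should read 0 on a healthy array
echo check  > /sys/block/md0/md/sync_action   # read-only scrub; updates mismatch_cnt
echo repair > /sys/block/md0/md/sync_action   # rewrites inconsistent stripes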

The initial resync started at 500 MB/s; 20 minutes later it dropped to 200 MB/s. Three days later the speed was at 120 MB/s. 

I will run some real-world tests to see whether the array is still fast in practice as well. 
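
Probably something like fio; a minimal sketch, assuming the array is mounted at /mnt/array (mount point and sizes are illustrative):

fio --name=seqwrite --directory=/mnt/array --rw=write --bs=1M \
    --size=10G --direct=1 --numjobs=1 --group_reporting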

Hajes 


-------- Original Message -------- 
On 08/03/2025 18:59, David Hajes <d.hajes29a@xxxxx> wrote: 

>  
>  In case someone wonders in the future why SW RAID5 or RAID10 is slow: 
>  
>  Unless two or more processes write in parallel, ARRAY SPEED WILL ALWAYS BE LIMITED TO THAT OF A SINGLE DRIVE. 
>  
>  Basically, any single-user operation that stores data on the array will behave like writing to a single drive. 
>  
>  In the case of modern SATA HDDs, that would be 120-220 MB/s. 
>  
>  Only HW RAID controllers are allegedly capable of writing to more than one drive in parallel, thus achieving the logical/envisioned/intuitive speed. 
>  
>  This is based on the theory that at least two chunks are written at once to two different drives, thus doubling the write speed, as confusingly described in all the RAID wikis. 
>  
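
One way to test the claim above is to compare a single sequential writer against two running in parallel; a minimal sketch, assuming the array is mounted at /mnt/array (paths are illustrative, and oflag=direct bypasses the page cache):

dd if=/dev/zero of=/mnt/array/f1 bs=1M count=4096 oflag=direct     # one writer
dd if=/dev/zero of=/mnt/array/f1 bs=1M count=4096 oflag=direct &   # two writers in parallel
dd if=/dev/zero of=/mnt/array/f2 bs=1M count=4096 oflag=direct &
wait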
>  
>  -------- Original Message -------- 
>  On 07/03/2025 21:47, Roman Mamedov <rm@xxxxxxxxxxx> wrote: 
>  
>  >  On Fri, 7 Mar 2025 14:42:24 -0600 
>  >  Roger Heflin <rogerheflin@xxxxxxxxx> wrote: 
>  >  
>  >  > I put an external bitmap (stored on an SSD) on a RAID1 and that seemed 
>  >  > to speed up my writes. I am not sure whether external bitmaps will 
>  >  > continue to be supported, as I have seen notes about them that I don't 
>  >  > fully understand, and I have to reapply the external bitmap on each 
>  >  > reboot for my arrays, which carries some risk of data loss if the 
>  >  > system crashes with a dirty bitmap. 
>  >  > 
>  >  > This is the command I used to set it up: 
>  >  > mdadm --grow --force --bitmap=/mdraid-bitmaps/md15-bitmap.img /dev/md15 
>  >  
>  >  In this case the result cited seems to have shown the bitmap is not the issue. 
>  >  
>  >  I remember seeing patches or discussions about removing external bitmap support, too. 
>  >  
>  >  In my experience the internal bitmap with a large enough chunk size does not 
>  >  slow down the write speed that much. Try a chunk size of 256M. Not sure how 
>  >  high it's worth going before the benefits diminish. 
>  >  
>  >  -- 
>  >  With respect, 
>  >  Roman 
>  >  
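
For reference, switching to an internal bitmap with the 256M chunk Roman suggests would look roughly like this; a minimal sketch, assuming /dev/md15 from Roger's example (an existing bitmap has to be removed before a new one can be added):

mdadm --grow --bitmap=none /dev/md15
mdadm --grow --bitmap=internal --bitmap-chunk=256M /dev/md15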






