Re: mdadm/raid5, spare disk or spare space

On 01/02/2025 at 20:45, Anton Gavriliuk wrote:

Here is raid5 setup during build time

     Number   Major   Minor   RaidDevice State
        0     259        3        0      active sync   /dev/nvme3n1
        1     259        4        1      active sync   /dev/nvme4n1
        3     259        5        2      spare rebuilding   /dev/nvme5n1

        4     259        6        -      spare   /dev/nvme6n1
        5     259        7        -      spare   /dev/nvme7n1

I'm asking because during build time, according to iostat or sar, writes
occur only on /dev/nvme5n1 and never on /dev/nvme3n1 or /dev/nvme4n1.

So if parity is distributed in mdadm/raid5, why does mdadm write only
to /dev/nvme5n1?

Because, as you can see, nvme5n1 is being rebuilt from the two others.
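
If you want to confirm that while it is running, a quick check (assuming the array is /dev/md0; substitute your actual md device):

    # Shows rebuild progress; during this phase md reports "recovery",
    # not "resync", because the spare is being reconstructed.
    cat /proc/mdstat

    # Per-member state, including "spare rebuilding" and the percent complete.
    mdadm --detail /dev/md0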

From mdadm(1) man page:
"When creating a RAID5 array, mdadm will automatically create a degraded array with an extra spare drive. This is because building the spare into a degraded array is in general faster than resyncing the parity on a non-degraded, but not clean, array. This feature can be overridden with the --force option."

I guess this way is faster on hard disks because it implies sequential reads on N-1 drives and a sequential write on the remaining drive; sequential read and write speeds are similar, and sequential writes do not cause significant wear. But it may differ on SSDs, where spreading reads and writes among all drives may be more efficient.
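
If you want to compare the two on your NVMe drives, watching the per-device pattern during the initial sync should make the difference visible (sysstat's iostat, device names from your setup):

    # Extended per-device statistics, refreshed every second. During the
    # default spare rebuild you should see reads on nvme3n1/nvme4n1 and
    # writes only on nvme5n1, matching what you observed.
    iostat -dxm 1 nvme3n1 nvme4n1 nvme5n1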



