Re: how do i fix these RAID5 arrays?

On 27.11.22 at 15:10, piergiorgio.sartor@xxxxxxxx wrote:
November 27, 2022 at 12:46 PM, "Reindl Harald" <h.reindl@xxxxxxxxxxxxx> wrote:


On 26.11.22 at 21:02, John Stoffel wrote:


I call it a failure of the layering model. If you want RAID, use MD.
If you want logical volumes, then put LVM on top. Then put
filesystems into logical volumes.
So much simpler...


have you ever replaced a 6 TB drive and waited through the mdadm resync, hoping that in all those hours no other drive goes down?

when your array is 10% used it's braindead
when your array is new and empty it's braindead

ZFS/BTRFS don't need to mirror/restore 90% nulls


You cannot consider the amount of data in the
array as a parameter for reliability.

If the array is 99% full, MD and ZFS/BTRFS have
the same behaviour in terms of reliability.
The same holds if the array is 0% full.

you completely miss the point!

if your mdadm array is built with 6 TB drives, then when you replace a drive you need to sync 6 TB, no matter whether 10 MB or 5 TB are actually used
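
The gap is easy to quantify with a back-of-the-envelope sketch (the sustained throughput figure below is an assumption for illustration, not from the thread): a classic MD rebuild must copy every sector of the replacement drive, while a ZFS resilver or btrfs replace walks only allocated blocks.

```python
# Hypothetical rebuild-time comparison; SPEED is an assumed sustained rate.

def rebuild_hours(bytes_to_sync: float, throughput_mb_s: float) -> float:
    """Hours needed to sync the given number of bytes at a sustained rate."""
    return bytes_to_sync / (throughput_mb_s * 1e6) / 3600

DRIVE = 6e12   # 6 TB replacement drive
USED = 10e6    # only 10 MB actually allocated (the example above)
SPEED = 150    # assumed sustained rebuild throughput, MB/s

md_hours = rebuild_hours(DRIVE, SPEED)          # MD syncs the whole device
resilver_s = rebuild_hours(USED, SPEED) * 3600  # resilver touches only used blocks

print(f"MD full-device resync: ~{md_hours:.1f} h")   # ~11.1 h
print(f"ZFS/BTRFS resilver:    ~{resilver_s:.2f} s")
```

So on a nearly empty array the used-block approach finishes in seconds while the full-device resync still takes half a day of exposure to a second failure, which is the point being argued here.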


