Re: how do i fix these RAID5 arrays?

November 27, 2022 at 12:46 PM, "Reindl Harald" <h.reindl@xxxxxxxxxxxxx> wrote:


> 
> Am 26.11.22 um 21:02 schrieb John Stoffel:
> 
> > 
> > I call it a failure of the layering model. If you want RAID, use MD.
> >  If you want logical volumes, then put LVM on top. Then put
> >  filesystems into logical volumes.
> >  So much simpler...
> > 
> 
> have you ever replaced a 6 TB drive and waited for the mdadm resync, hoping that in all those hours no other drive goes down?
> 
> when your array is 10% used it's braindead
> when your array is new and empty it's braindead
> 
> ZFS/BTRFS don't need to mirror/restore 90% nulls
>

You cannot take the amount of data in the
array as a parameter for reliability.

If the array is 99% full, MD and ZFS/BTRFS
behave the same, in terms of reliability.
The same holds if the array is 0% full.

The only advantage is that you wait less
when less data is present (for ZFS/BTRFS).

Because the day the ZFS/BTRFS array is 99%
full and you hit a failure during the resync,
you get the same double damage: a lost array
and 99% of the data.
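To put some rough numbers on that: a small arithmetic sketch comparing the rebuild "exposure window" (the time during which a further failure is fatal) for a whole-disk md resync versus a used-blocks-only resilver. The rebuild rate and sizes are hypothetical assumptions, not measurements.

```python
DISK_TB = 6.0        # replaced drive capacity (hypothetical)
RATE_TB_PER_H = 0.5  # assumed rebuild rate, ~140 MB/s sequential

def md_resync_hours(disk_tb=DISK_TB, rate=RATE_TB_PER_H):
    # md rebuilds every block, regardless of how much is stored
    return disk_tb / rate

def resilver_hours(used_fraction, disk_tb=DISK_TB, rate=RATE_TB_PER_H):
    # ZFS/BTRFS only walk allocated blocks
    return disk_tb * used_fraction / rate

for used in (0.10, 0.99):
    print(f"{used:4.0%} full: md {md_resync_hours():5.1f} h, "
          f"resilver {resilver_hours(used):5.1f} h")
```

At 10% occupancy the resilver window is an order of magnitude shorter; at 99% occupancy the two are essentially equal, which is the point above: the advantage shrinks exactly when the potential loss is largest.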

Furthermore, non-layered systems, like those
two, tend to have dependent failures when it
comes to software bugs.

Layered systems have more isolation, so bug
propagation is less likely.

This means that in non-layered systems a
software bug is both more likely to happen
and more likely to have catastrophic effects.

bye,

pg

-- 

piergiorgio sartor





