Re: Filesystem-based raid vs. device-based raid

On 20.09.18 at 21:52, David F wrote:
> I can't imagine that this isn't a frequently asked question, but with my
> poor search skills, I've come up completely empty on this.
> 
> I'm trying to understand why the newer "sophisticated" filesystems (e.g.
> btrfs) are implementing raid redundancy as part of the filesystem rather
> than the traditional approach of a separate virtual-block-device layer
> such as md, with a filesystem on top of it as a distinct layer.  In
> addition to the duplication of effort/code [again and again for each new
> filesystem implementation that comes along], it seems to mix too much
> functionality into one monolithic layer, increasing complexity and,
> inevitably, the number of bugs and the difficulty of debugging.
> 
> Of course, the people working on these filesystems aren't idiots, so I
> assume that there _are_ reasons

Simple: compare a rebuild after a single-drive failure:

* rebuild a 4x6 TB mdraid with 10 GB used
* rebuild a 4x6 TB zfs/btrfs with 10 GB used

Case 1 takes ages: md sits below the filesystem and cannot tell used
blocks from free ones, so it has to resync the entire 6 TB member disk.

Case 2 is done within seconds, because the filesystem knows exactly
which extents are allocated and copies only those 10 GB.

(if only btrfs would finally become reliable, or ZFS had a
GPL-compatible license)
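To put rough numbers on the comparison above, here is a minimal sketch. The 150 MB/s sustained rebuild throughput is a hypothetical figure I am assuming for illustration; only the 6 TB / 10 GB sizes come from the example:

```python
# Rough rebuild-time comparison after a single-drive failure.
# The throughput figure is an illustrative assumption, not a measurement.

DISK_SIZE = 6 * 10**12    # 6 TB member disk, in bytes
USED_DATA = 10 * 10**9    # 10 GB of allocated data, in bytes
THROUGHPUT = 150 * 10**6  # assumed 150 MB/s sustained rebuild speed

def rebuild_seconds(bytes_to_copy: int) -> float:
    """Time to copy the given amount of data at the assumed throughput."""
    return bytes_to_copy / THROUGHPUT

# md cannot tell used blocks from free ones: it resyncs the whole disk.
md_time = rebuild_seconds(DISK_SIZE)

# btrfs/ZFS know which extents are allocated and copy only those.
fs_time = rebuild_seconds(USED_DATA)

print(f"mdraid resync: {md_time / 3600:.1f} hours")  # ~11.1 hours
print(f"btrfs/ZFS:     {fs_time:.0f} seconds")       # ~67 seconds
```

In practice a btrfs/ZFS rebuild also walks metadata and seeks across the disk, so the gap is less clean than this, but the point stands: md copies the device, the filesystem copies only the data.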


