Re: Recommended filesystem for RAID 6

On Tue, 11 Aug 2020 20:57:15 +0200
Reindl Harald <h.reindl@xxxxxxxxxxxxx> wrote:

> > Whichever filesystem you choose, you will end up with a huge single point of
> > failure, and any trouble with that FS or the underlying array put all your
> > data instantly at risk. 
> 
> calling an array where you can lose *two* disks as
> single-point-of-failure is absurd

As noted before, it is not just *disks* that can fail; a storage server has
plenty of other things to fail, and they can easily take down, say, half of
all drives, or random portions of them in increments of four. Even if only
temporarily -- that will surely be "unexpected" to that single precious 20TB
filesystem. How will it behave? Who knows. Do you? For added fun, reconnect
the drives 30 seconds later. Then it's off to linux-raid to ask how to bring
back half of a RAID6 whose members are stuck in the Spare (S) status. Or to
some HOWTO suggesting a random --create without --assume-clean. And if the FS
gets corrupted, you suddenly need *all* your backups, not just one drive's
worth of them.
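For the record, when members drop out due to a transient failure, the usual
recovery path is a forced re-assembly, never re-creating the array. A rough
sketch, assuming the dropped members still carry valid superblocks (the
device names /dev/md0 and /dev/sd[b-q]1 are just examples):

```shell
# Half the members dropped out due to a transient failure (cable,
# controller, power), but the data on them is still intact.
# Stop the half-dead array first:
mdadm --stop /dev/md0

# Force re-assembly from the existing superblocks -- this bumps the
# event counters instead of rewriting the metadata:
mdadm --assemble --force /dev/md0 /dev/sd[b-q]1

# What you must NOT do is `mdadm --create` over the members: without
# --assume-clean (and the exact original layout and device order) it
# starts a resync that overwrites parity and destroys the data.
```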

> no raid can replace backups anyways

All too often I've seen RAID being used as an implicit excuse to be lenient
about backups. Heck, I know from personal experience how enticing that can be.

> > Most likely you do not. And the RAID's main purpose in that case is to just
> > have a unified storage pool, for the convenience of not having to manage free
> > space across so many drives. But given the above, I would suggest leaving the
> > drives with their individual FSes, and just running MergerFS on top: 
> > https://www.teknophiles.com/2018/02/19/disk-pooling-in-linux-with-mergerfs/
> 
> you just move the complexity to something not used by many people for
> what exactly to gain? the drives are still in the same machine

To gain total independence of drives from each other, you can pull any drive
out of the machine, plug it in somewhere else, and it will have a proper
filesystem and readable files on it. Writable, even.
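As a concrete sketch of that setup (the mount points and option choices
below are examples, not a recommendation), each drive keeps its own
filesystem and MergerFS just unions them into one namespace:

```shell
# Each drive has its own independent filesystem, mounted separately:
#   /mnt/disk1, /mnt/disk2, /mnt/disk3, ...

# Pool them; category.create=mfs places each new file on whichever
# branch currently has the most free space:
mergerfs -o defaults,allow_other,category.create=mfs \
    /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/pool

# Or the /etc/fstab equivalent, with a glob for the branches:
# /mnt/disk*  /mnt/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0
```

Losing one drive then loses only the files that happened to live on it;
everything on the other branches stays readable.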

Compare that to a 16-drive RAID6, where you either have at least a whopping
14 disks present, connected, powered, spinning, healthy, and online, or you
have useless binary junk instead of any data.

Of course I do not insist my way is the best for everyone, but I hope now you
can better understand the concerns and reasons for choosing it. :)

-- 
With respect,
Roman


