Re: raid 5 install




On Jul 1, 2019, at 9:10 AM, mark <m.roth@xxxxxxxxx> wrote:
> 
> ZFS with a zpoolZ2

You mean raidz2.

> which we set up using the LSI card set to JBOD

Some LSI cards require a complete firmware re-flash to get them into “IT mode”, which does away with the RAID logic entirely and turns them into dumb SATA controllers. Consequently, you usually do this on the lowest-end models: there’s no point paying for expensive RAID features on a higher-end card just to flash them away.

I point this out because there’s another path: leave the RAID firmware in place and export each disk as a single-drive “JBOD” volume. That’s less efficient, since each disk is then addressed indirectly through the RAID chipset rather than as a plain SATA disk.

You took the first path, I hope?
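If you’re not sure which mode your card’s firmware is in, LSI’s sas2flash utility will tell you; a quick check, assuming a SAS2-generation card (the utility name varies by controller generation):

    sas2flash -list

Look at the “Firmware Product ID” line: IT firmware reports “(IT)”, the stock integrated-RAID firmware reports “(IR)”.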

We gave up on IT-mode LSI cards when motherboards with two SFF-8087 connectors became readily available, giving easy 8-drive arrays.  No need for the extra board any more.

> took about 3 days and
> 8 hours for backing up a large project, while the same o/s, but with xfs
> on an LSI-hardware RAID 6, took about 10 hours less. Hardware RAID is
> faster.

I doubt the speed difference is due to hardware vs software.  The real difference you tested there is ZFS vs XFS, and you should absolutely expect to pay some performance cost with ZFS.  You’re getting a lot of features in trade.

I wouldn’t expect the difference to be quite that wide, by the way.  That brings me back to my guess about IT mode vs RAID JBOD mode on your card.

Anyway, one of those compensating benefits is snapshot-based backups.

Before starting the first backup, take a ZFS snapshot.  Do the backup with a “zfs send” of the snapshot, rather than whatever file-level backup tool you were using before.  When that completes, create another snapshot and send *that* one.  The second send completes much faster, because ZFS uses the pair of snapshots to compute the set of blocks that changed between them and sends only those blocks.
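In shell terms, the first cycle looks something like this; “tank/proj” and “backuphost” are placeholder names:

    # One-time full send to seed the backup server.
    zfs snapshot tank/proj@backup-1
    zfs send tank/proj@backup-1 | ssh backuphost zfs receive backup/proj

    # Every later backup is incremental: only blocks changed
    # since the previous snapshot go over the wire.
    zfs snapshot tank/proj@backup-2
    zfs send -i @backup-1 tank/proj@backup-2 | ssh backuphost zfs receive backup/proj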

This is a sub-file level backup: if a 1 kB header changes in a 2 GB data file, you send only one block’s worth of data to the backup server, since you’ll be using a block size bigger than 1 kB, and that header, being a *header*, won’t straddle two blocks.  This is excellent for filesystems with large files that change in small areas, like databases.
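The “block size” here is the dataset’s recordsize property, 128 kB by default.  For a database you might shrink it to match the DB’s page size; a sketch with a hypothetical dataset name (the setting affects newly written files only):

    zfs get recordsize tank/db      # defaults to 128K
    zfs set recordsize=16K tank/db  # e.g. to match InnoDB's 16 kB pages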

You might say, “I can do that with rsync already,” but with rsync, you have to compute this delta on each backup, which means reading all of the blocks on *both* sides of the backup.  ZFS snapshots keep that information continuously as the filesystem runs, so there is nothing to compute at the beginning of the backup.

rsync’s delta compression saves time only when the link between the two machines is much slower than the disks on either side, so that the delta-computation overhead is swamped by the bottleneck’s delays.

With ZFS, the inter-snapshot delta computation is so fast that you can use it even when you’ve got two servers sitting side by side with a high-bandwidth link between them.

Once you’ve got a scheme like this rolling, you can do backups very quickly, possibly even sub-minute.

And you don’t have to script all of this yourself.  There are numerous pre-built tools to automate it.  We’ve been happy users of Sanoid, which handles the automatic-snapshot side and, via its companion tool syncoid, the automatic-replication side:

    https://github.com/jimsalterjrs/sanoid
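To give the flavor of it, a minimal /etc/sanoid/sanoid.conf, modeled on the project’s shipped examples, with a hypothetical dataset name:

    [tank/proj]
        use_template = production

    [template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes

syncoid then drives the send/receive pairs, e.g. “syncoid -r tank/proj root@backuphost:backup/proj”.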

Another nice thing about snapshot-based backups is that they’re always consistent: just as you can reboot a ZFS-based system at any time and have it come back up in a consistent state, you can take a snapshot and send it to another machine, and the result will be just as consistent.

Contrast something like rsync, which makes its decisions about what to send on a per-file basis, so it simply cannot be consistent unless you stop every app that can write to the data store you’re backing up.

Snapshot-based backups can happen while the system is under a heavy workload.  A ZFS snapshot costs almost nothing to create, and once taken, it freezes the data blocks in a consistent state.  That benefit falls out naturally from a copy-on-write filesystem.

Now that you’re doing snapshot-based backups, you’re immune to crypto malware, as long as you keep your snapshots long enough to cover your maximum detection window. Someone just encrypted all your stuff?  Fine, roll it back.  You don’t even have to go to the backup server.
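Recovery is a one-liner, and you can also pull individual files out of the hidden, read-only .zfs/snapshot directory without rolling anything back.  Snapshot and dataset names below are placeholders:

    # Revert the whole dataset to the last known-good snapshot.
    # (-r also discards any snapshots newer than the target.)
    zfs rollback -r tank/proj@backup-42

    # Or restore a single file from a snapshot's read-only view.
    cp /tank/proj/.zfs/snapshot/backup-42/path/to/file /tank/proj/path/to/file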

> when one fails, "identify" rarely works, which means use smartctl
> or MegaCli64 (or the lsi script) to find the s/n of the drive, then
> guess…

It’s really nice when you get a disk status report and the missing disk is clear from the labels:

   left-1:  OK
   left-2:  OK
   left-4:  OK
   right-1: OK
   right-2: OK
   right-3: OK
   right-4: OK

Hmmm, which disk died, I wonder?  Gotta be left-3!  No need to guess, the system just told you in human terms, rather than in abstract hardware terms.
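On Linux, one way to get labels like that is /etc/zfs/vdev_id.conf, which maps physical slots to names you pick; build the pool on the aliases, and “zpool status” reports them ever after.  A sketch; the by-path values depend on your controller and enclosure:

    # /etc/zfs/vdev_id.conf
    alias left-1  /dev/disk/by-path/pci-0000:03:00.0-sas-phy0-lun-0
    alias left-2  /dev/disk/by-path/pci-0000:03:00.0-sas-phy1-lun-0
    # ...one line per bay; run "udevadm trigger", then the names
    # appear under /dev/disk/by-vdev/ for use in "zpool create".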