Hi,

On 21.09.18 at 03:28, ST Wong (ITSC) wrote:
> Hi,
>
>>> Will the RAID 6 be mirrored to another storage in a remote site for DR purposes?
>>
>> Not yet. Our goal is to have the backup Ceph to which we will replicate spread across three different buildings, with 3 replicas.
>
> May I ask whether the backup Ceph is a single Ceph cluster spanning 3 different buildings, or composed of 3 Ceph clusters in 3 different buildings? Thanks.
>

This will be a single Ceph cluster with a failure domain corresponding to the building and three replicas.
To test updates before rolling them out to the full cluster, we will also instantiate a small test cluster separately, but we try to keep the number of production clusters down and rather let Ceph handle failover and replication than doing that ourselves, which also allows us to grow/shrink the cluster more easily as needed ;-).

All the best,
Oliver

> Thanks again for your help.
> Best Regards,
> /ST Wong
>
> -----Original Message-----
> From: Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx>
> Sent: Thursday, September 20, 2018 2:10 AM
> To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
> Cc: Peter Wienemann <wienemann@xxxxxxxxxxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
> Subject: Re: backup ceph
>
> Hi,
>
> On 19.09.18 at 18:32, ST Wong (ITSC) wrote:
>> Thanks for your help.
>
> You're welcome!
> I should also add that we don't have very long-term experience with this yet - Benji is pretty modern.
>
>>> For the moment, we use Benji to back up to a classic RAID 6.
>> Will the RAID 6 be mirrored to another storage in a remote site for DR purposes?
>
> Not yet. Our goal is to have the backup Ceph to which we will replicate spread across three different buildings, with 3 replicas.
>
>>
>>> For RBD mirroring, you do indeed need another running Ceph cluster, but we plan to use that in the long run (on separate hardware, of course).
>> It seems this is the way to go, regardless of the additional resources required? :)
>> Btw, RBD mirroring looks like a DR copy rather than a daily backup from which we can restore an image from a particular date?
>
> We would still perform daily snapshots, and keep those both in the RBD mirror and in the Benji backup. Even when fading out the current RAID 6 machine at some point,
> we'd probably keep Benji and direct its output to a CephFS pool on our backup Ceph cluster. If anything goes wrong with the mirroring, this still leaves us
> with an independent backup approach. We also keep several days of snapshots in the production RBD pool to be able to quickly roll back a VM if anything goes wrong.
> With Benji, you can also mount any of these daily snapshots via NBD in case it is needed, or restore from a specific date.
>
> All the best,
> Oliver
>
>>
>> Thanks again.
>> /st wong
>>
>> -----Original Message-----
>> From: Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx>
>> Sent: Wednesday, September 19, 2018 5:28 PM
>> To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
>> Cc: Peter Wienemann <wienemann@xxxxxxxxxxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
>> Subject: Re: backup ceph
>>
>> Hi,
>>
>> On 19.09.18 at 03:24, ST Wong (ITSC) wrote:
>>> Hi,
>>>
>>> Thanks for your information.
>>> May I know more about the backup destination to use? As the size of the cluster will be a bit large (~70 TB to start with), we're looking for an efficient method to do that backup. It seems RBD mirroring or incremental snapshots with RBD (https://ceph.com/geen-categorie/incremental-snapshots-with-rbd/) are some ways to go, but they require another running Ceph cluster. Is my understanding correct?
>>> Thanks.
>>
>> For the moment, we use Benji to back up to a classic RAID 6. With Benji, only the changed chunks are backed up, and it learns that by asking Ceph for a diff of the RBD snapshots.
>> So that's really fast after the first backup, especially if you do trimming (e.g. via the guest agent if you run VMs) of the RBD volumes before backing them up.
>> The same is true for Backy2, but it does not support compression (which really helps, by several factors(!), in saving I/O, and with zstd it does not use much CPU).
>>
>> For RBD mirroring, you do indeed need another running Ceph cluster, but we plan to use that in the long run (on separate hardware, of course).
>>
>>> Btw, is this one (https://benji-backup.me/) the Benji you're referring to? Thanks a lot.
>>
>> Exactly :-).
>>
>> Cheers,
>> Oliver
>>
>>>
>>> Cheers,
>>> /ST Wong
>>>
>>> -----Original Message-----
>>> From: Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx>
>>> Sent: Tuesday, September 18, 2018 6:09 PM
>>> To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
>>> Cc: Peter Wienemann <wienemann@xxxxxxxxxxxxxxxxxx>
>>> Subject: Re: backup ceph
>>>
>>> Hi,
>>>
>>> we're also just starting to collect experiences, so we have nothing to share (yet). However, we are evaluating using Benji (a well-maintained fork of Backy2 which can also compress), trimming and fsfreezing the VM disks shortly before the backup,
>>> and additionally keeping a few daily and weekly snapshots.
>>> We may add RBD mirroring to a backup system in the future.
>>>
>>> Since our I/O requirements are not too high, I guess we will be fine either way, but any shared experience is very welcome.
>>>
>>> Cheers,
>>> Oliver
>>>
>>> On 18.09.18 at 11:54, ST Wong (ITSC) wrote:
>>>> Hi,
>>>>
>>>> We're newbies to Ceph. Besides using incremental snapshots with RBD to back up data from one Ceph cluster to another running Ceph cluster, or using backup tools like backy2, is there any recommended way to back up Ceph data? Someone here suggested taking a snapshot of each RBD image daily and keeping 30 days of snapshots instead of a backup. I wonder whether this is practical and whether performance will be impacted.
>>>>
>>>> Thanks a lot.
>>>>
>>>> Regards
>>>>
>>>> /st wong
>>>>
>>>> _______________________________________________
>>>> ceph-users mailing list
>>>> ceph-users@xxxxxxxxxxxxxx
>>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
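
As an illustration of the "daily RBD snapshot, keep 30 days" idea from the original question: a minimal sketch of such a rotation job using the python-rbd bindings. The pool, image and snapshot names are made-up examples, and guest fsfreeze/trim, locking and error handling are left out.

    #!/usr/bin/env python
    # Minimal sketch (made-up names, no error handling): take a dated RBD
    # snapshot of one image and prune snapshots older than 30 days.
    import datetime

    import rados
    import rbd

    POOL = 'rbd'            # hypothetical pool name
    IMAGE = 'vm-disk-1'     # hypothetical image name
    PREFIX = 'daily-'       # prefix for snapshots managed by this job
    KEEP_DAYS = 30          # retention suggested in the thread

    today = datetime.date.today()
    cutoff = today - datetime.timedelta(days=KEEP_DAYS)

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        image = rbd.Image(ioctx, IMAGE)
        try:
            # Today's snapshot, e.g. 'daily-2018-09-21'.
            image.create_snap(PREFIX + today.isoformat())

            # Drop managed snapshots that fell out of the retention window.
            for snap in image.list_snaps():
                name = snap['name']
                if not name.startswith(PREFIX):
                    continue   # leave other snapshots alone
                snap_date = datetime.datetime.strptime(
                    name[len(PREFIX):], '%Y-%m-%d').date()
                if snap_date < cutoff:
                    image.remove_snap(name)
        finally:
            image.close()
            ioctx.close()
    finally:
        cluster.shutdown()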
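
On "only the changed chunks are backed up, and it learns that by asking Ceph for a diff of the RBD snapshots": the primitive behind this is the RBD diff between two snapshots. Below is a minimal sketch with the python-rbd bindings that merely counts the changed extents; the names are made-up examples, and real tools such as Benji or Backy2 add chunk storage, deduplication and compression on top.

    #!/usr/bin/env python
    # Minimal sketch (made-up names): ask Ceph which extents of an RBD image
    # changed between yesterday's and today's snapshot. An incremental backup
    # tool would read and store exactly these extents.
    import rados
    import rbd

    POOL = 'rbd'                     # hypothetical pool name
    IMAGE = 'vm-disk-1'              # hypothetical image name
    FROM_SNAP = 'daily-2018-09-20'   # snapshot already backed up
    TO_SNAP = 'daily-2018-09-21'     # snapshot to back up now

    changed = []

    def record(offset, length, exists):
        # exists=False marks extents that were discarded/trimmed in between.
        changed.append((offset, length, exists))

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        # Open the image read-only at the newer snapshot ...
        image = rbd.Image(ioctx, IMAGE, snapshot=TO_SNAP, read_only=True)
        try:
            # ... and diff it against the older one.
            image.diff_iterate(0, image.size(), FROM_SNAP, record)
        finally:
            image.close()
            ioctx.close()
    finally:
        cluster.shutdown()

    total = sum(length for _, length, _ in changed)
    print('%d changed extents, %d bytes to copy' % (len(changed), total))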
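
For the "single cluster with a failure domain corresponding to the building" setup at the top of this mail, the essential part is a CRUSH rule that picks each replica from a different building-level bucket. A rough sketch via the python-rados mon_command interface follows; the rule and pool names are made up, 'datacenter' stands in for a per-building bucket type, the command argument names are taken from the CLI help and should be treated as assumptions, and it requires Luminous or newer.

    #!/usr/bin/env python
    # Rough sketch (assumptions, not a verified production config): create a
    # replicated CRUSH rule that spreads replicas over 'datacenter' buckets
    # (one bucket per building) and use it, with size 3, for a pool.
    import json

    import rados

    RULE = 'replicate-per-building'   # hypothetical rule name
    POOL = 'backup-rbd'               # hypothetical, already existing pool

    COMMANDS = [
        # One replica per 'datacenter' bucket under the 'default' root.
        {'prefix': 'osd crush rule create-replicated',
         'name': RULE, 'root': 'default', 'type': 'datacenter'},
        # Let the pool place its data according to that rule ...
        {'prefix': 'osd pool set', 'pool': POOL, 'var': 'crush_rule', 'val': RULE},
        # ... and keep three replicas, i.e. one per building.
        {'prefix': 'osd pool set', 'pool': POOL, 'var': 'size', 'val': '3'},
    ]

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        for cmd in COMMANDS:
            ret, out, err = cluster.mon_command(json.dumps(cmd), b'')
            if ret != 0:
                raise RuntimeError('%s failed: %s' % (cmd['prefix'], err))
    finally:
        cluster.shutdown()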