Re: OSD backups and recovery

Jarett;

It is and it isn't.  Replication can be thought of as continuous backups.

Backups, especially as SpiderFox is suggesting, are point-in-time, immutable copies of data.  Until they are written over, they don't change, even if the data does.

In Ceph's RadosGW (RGW) multi-site replication, changes, even "bad" changes, are pushed to the peer as quickly as the system can manage.  Even the "replication" occurring within a cluster can be considered a "backup," sort of.
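
If you want to see how far behind the peer is at any given moment, the sync status commands are handy (a quick sketch; the bucket name is a placeholder):

    # Overall replication state between zones, run on either site
    radosgw-admin sync status

    # Per-bucket view of what has and hasn't shipped to the peer yet
    radosgw-admin bucket sync status --bucket=<bucket-name>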

As SpiderFox suggested, if malware is able to delete or encrypt the files, whether through RGW, RADOS, or on the underlying block device, you've got problems.  Note, though: if they bypass RGW, then (AFAIK) the changes won't be replicated to the peer.

That's why I talk about disaster recovery.  Backing up is one disaster recovery technique, and is still perfectly valid.  Perform Air maintains backups of our Active Directory domain controllers, for instance.

Clustering, and off-site replication, are other disaster recovery paradigms.  Each has advantages and disadvantages.

Ultimately, as long as everything works, I believe the only wrong disaster recovery plan is doing nothing.

Thank you,

Dominic L. Hilsbos, MBA 
Director – Information Technology 
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com



-----Original Message-----
From: Jarett DeAngelis [mailto:jarett@xxxxxxxxxxxx] 
Sent: Friday, May 29, 2020 5:02 PM
To: Dominic Hilsbos
Cc: ludek.navratil@xxxxxxxxxxx; ceph-users@xxxxxxx
Subject: Re:  OSD backups and recovery

For some reason I’d thought replication between clusters was an “official” method of backing up.

> On May 29, 2020, at 4:31 PM, <DHilsbos@xxxxxxxxxxxxxx> wrote:
> 
> Ludek;
> 
> As a cluster system, Ceph isn't really intended to be backed up.  It's designed to take quite a beating, and preserve your data.
> 
> From a broader disaster recovery perspective, here's how I architected my clusters:
> Our primary cluster is laid out in such a way that an entire rack can fail without read / write being impacted, much less data integrity.  On top of that, our RadosGW is a multi-site setup which automatically sends a copy of every object to a second cluster at a different location.
> 
> Thus my disaster recovery looks like this:
> 1 rack or less: no user impact, rebuild rack
> 2 racks: users are unable to add objects, but existing data is safe, rebuild cluster (or as below)
> Whole site: switch second site to master and continue (sketched below)
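> 
> Promoting the second site is roughly the following (a sketch only; the zone name is a placeholder, and the full failover procedure is in the multi-site docs linked below):
> 
>     # On the surviving site: make its zone the new master
>     radosgw-admin zone modify --rgw-zone=<secondary-zone> --master --default
>     radosgw-admin period update --commit
> 
>     # Restart the gateways so they pick up the new period
>     systemctl restart ceph-radosgw.target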
> 
> No backup or recovery necessary.
> 
> You might look at the multi-site documentation: 
> https://docs.ceph.com/docs/master/radosgw/multisite/
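> 
> For reference, joining a second cluster as a secondary zone is roughly this, run on the second site (a rough sketch; zonegroup, zone, endpoints, and keys are placeholders, and the docs above have the full procedure):
> 
>     # Pull the realm (and current period) from the master site
>     radosgw-admin realm pull --url=http://rgw-master.example.com:80 \
>         --access-key=<system-access-key> --secret=<system-secret>
> 
>     # Create the secondary zone in the existing zonegroup
>     radosgw-admin zone create --rgw-zonegroup=<zonegroup> --rgw-zone=<secondary-zone> \
>         --endpoints=http://rgw-secondary.example.com:80 \
>         --access-key=<system-access-key> --secret=<system-secret>
> 
>     # Commit the change and restart the gateways
>     radosgw-admin period update --commit
>     systemctl restart ceph-radosgw.target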
> 
> I had a long conversation with our owner on this same topic, and how the organization would have to move from a "Backup & Recover" mindset to a "Disaster Recovery" mindset.  It worked well for us, as we were looking to move more towards Risk Analysis based approaches anyway.
> 
> Thank you,
> 
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International, Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
> 
> 
> 
> -----Original Message-----
> From: Ludek Navratil [mailto:ludek.navratil@xxxxxxxxxxx] 
> Sent: Wednesday, February 5, 2020 6:57 AM
> To: ceph-users@xxxxxxx
> Subject:  OSD backups and recovery
> 
> Hi all,
> what is the best approach for OSD backups and recovery? We use only Radosgw with the S3 API, and I need to back up the content of S3 buckets. Currently I sync S3 buckets to a local filesystem and back up the content using Amanda.
> I believe there must be a better way to do this, but I couldn't find it in the docs. 
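> 
> For context, the sync step is essentially a bucket-by-bucket mirror into the directory that Amanda backs up. With awscli it looks roughly like this (bucket name, endpoint, and path are just examples):
> 
>     aws s3 sync s3://<bucket> /backup/s3/<bucket> \
>         --endpoint-url http://rgw.example.com:8080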
> 
> I know that one option is to setup an archive zone, but it requires an additional ceph cluster that needs to be maintained and looked after. I would rather avoid that.
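> 
> As I understand it, the archive zone would just be another zone created with the archive tier type, roughly like this (zonegroup, zone, and endpoint are placeholders):
> 
>     radosgw-admin zone create --rgw-zonegroup=<zonegroup> --rgw-zone=<archive-zone> \
>         --endpoints=http://rgw-archive.example.com:8080 --tier-type=archive
>     radosgw-admin period update --commit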
> 
> How can I back up an entire Ceph cluster? Or individual OSDs, in a way that will allow me to recover the data correctly?
> 
> Many thanks,
> Ludek
> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



