Re: OSD backups and recovery

SpiderFox;

If you're concerned about ransomware (and you should be), then you should:
a) protect the cluster from the internet AND from USERS.
b) place another technology between your cluster and your users (I use Nextcloud backed by RadosGW through S3 buckets)
c) turn on versioning in your buckets
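
For (c), versioning is turned on through the ordinary S3 API, so any S3 client works against RadosGW. Below is a minimal sketch using Python and boto3; the endpoint URL, credentials, bucket name, and object key are placeholders to substitute with your own:

    import boto3

    # Placeholder RadosGW endpoint and credentials -- substitute your own.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    bucket = "my-bucket"  # hypothetical bucket name

    # Enable object versioning so overwritten (e.g. ransomware-encrypted)
    # objects keep their earlier versions.
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Older versions remain retrievable; listing them is how you would find
    # the copy that existed before an object was overwritten.
    resp = s3.list_object_versions(Bucket=bucket, Prefix="important.doc")
    for v in resp.get("Versions", []):
        print(v["Key"], v["VersionId"], v["IsLatest"], v["LastModified"])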

If you have a backup solution that can handle petabytes of data reliably, then certainly use it.  Everything I've tried fell over dead at a couple dozen terabytes.

Nothing is foolproof, not even the vaunted offline backup (ever try to do a restore and find the tape can't be read?).

Thank you,

Dominic L. Hilsbos, MBA 
Director – Information Technology 

DHilsbos@xxxxxxxxxxxxxx 
300 S. Hamilton Pl. 
Gilbert, AZ 85233 
Phone: (480) 610-3500 
Fax: (480) 610-3501 
www.PerformAir.com



-----Original Message-----
From: Coding SpiderFox [mailto:codingspiderfox@xxxxxxxxx] 
Sent: Friday, May 29, 2020 2:45 PM
To: ceph-users@xxxxxxx
Subject:  Re: OSD backups and recovery

On Fri., May 29, 2020 at 11:32 PM <DHilsbos@xxxxxxxxxxxxxx> wrote:

> Ludek;
>
> As a cluster system, Ceph isn't really intended to be backed up.  It's 
> designed to take quite a beating, and preserve your data.
>
>
But that does not save me when a crypto trojan encrypts all my data. There should always be an offline backup that can be restored in case of a crypto trojan.



> From a broader disaster recovery perspective, here's how I architected my
> clusters:
> Our primary cluster is laid out in such a way that an entire rack can 
> fail without read / write being impacted, much less data integrity.  
> On top of that, our RadosGW was a multi-site setup which automatically 
> sends a copy of every object to a second cluster at a different location.
>
> Thus my disaster recovery looks like this:
> 1 rack or less: no user impact, rebuild rack
> 2 racks: users are unable to add objects, but existing data is safe,
>   rebuild cluster (or as below)
> Whole site: switch second site to master and continue
>
> No backup or recovery necessary.
>
> You might look at the multi-site documentation:
> https://docs.ceph.com/docs/master/radosgw/multisite/
>
> I had a long conversation with our owner on this same topic, and how 
> the organization would have to move from a "Backup & Recover" mindset 
> to a "Disaster Recovery" mindset.  It worked well for us, as we were 
> looking to move more towards Risk Analysis based approaches anyway.
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director – Information Technology
> Perform Air International, Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
>
>
> -----Original Message-----
> From: Ludek Navratil [mailto:ludek.navratil@xxxxxxxxxxx]
> Sent: Wednesday, February 5, 2020 6:57 AM
> To: ceph-users@xxxxxxx
> Subject:  OSD backups and recovery
>
> Hi all,
> what is the best approach for OSD backups and recovery? We use only 
> Radosgw with S3 API and I need to backup the content of S3 buckets.
> Currently I sync s3 buckets to local filesystem and backup the content 
> using Amanda.
> I believe that there must be a better way to do this, but I couldn't find
> it in the docs.
>
> I know that one option is to setup an archive zone, but it requires an 
> additional ceph cluster that needs to be maintained and looked after. 
> I would rather avoid that.
>
> How can I backup an entire Ceph cluster? Or individual OSDs in the way 
> that will allow me to recover the data correctly?
>
> Many thanks,
> Ludek
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



