Re: backup ceph

Hi,

On 19.09.18 at 03:24, ST Wong (ITSC) wrote:
> Hi,
> 
> Thanks for your information.
> May I know more about the backup destination to use?  As the size of the cluster will be a bit large (~70TB to start with), we're looking for an efficient method to do the backup.  It seems RBD mirroring or incremental snapshots with RBD (https://ceph.com/geen-categorie/incremental-snapshots-with-rbd/) are possible ways to go, but both require another running Ceph cluster.  Is my understanding correct?  Thanks.

For the moment, we use Benji to back up to a classic RAID 6. With Benji, only the changed chunks are backed up; it learns which chunks changed by asking Ceph for a diff between RBD snapshots.
So after the first backup it is really fast, especially if you trim the RBD volumes (e.g. via the guest agent if you run VMs) before backing them up.
The same is true for Backy2, but Backy2 does not support compression (which really helps, by several factors(!), in saving I/O, and with zstd it does not cost much CPU).
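
If it helps, the mechanism Benji builds on is roughly the following (an untested sketch; the pool, image and snapshot names are made up, not our real setup):

#!/usr/bin/env python3
# Ask Ceph which extents changed between two RBD snapshots and sum them up.
# This is the "rbd diff" that Benji/Backy2 use to back up only changed chunks.
import json
import subprocess

POOL, IMAGE = "vms", "vm-disk-1"                    # placeholder names
OLD_SNAP, NEW_SNAP = "daily-2018-09-18", "daily-2018-09-19"

out = subprocess.check_output([
    "rbd", "diff",
    "--from-snap", OLD_SNAP,
    "{}/{}@{}".format(POOL, IMAGE, NEW_SNAP),
    "--format", "json",
])
extents = json.loads(out.decode("utf-8"))

# "exists" is reported as true for written extents and false for
# discarded/trimmed ones (some versions emit it as a string).
def written(e):
    return str(e.get("exists")).lower() == "true"

changed = sum(e["length"] for e in extents if written(e))
trimmed = sum(e["length"] for e in extents if not written(e))
print("{} extents, {} bytes to back up, {} bytes trimmed since {}".format(
    len(extents), changed, trimmed, OLD_SNAP))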

For RBD mirroring, you do indeed need another running Ceph cluster, but we plan to use that in the long run (on separate hardware, of course).
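
And in case it is useful, enabling journal-based mirroring for a single image boils down to something like the sketch below (again untested, with placeholder names; the peer cluster additionally needs its config/keyring distributed and an rbd-mirror daemon running):

#!/usr/bin/env python3
# Sketch: enable journal-based RBD mirroring for one image and register a
# peer cluster. Pool/image names and the peer spec are placeholders.
import subprocess

POOL, IMAGE = "vms", "vm-disk-1"          # placeholder names
PEER = "client.mirror@backup"             # placeholder "client@cluster" peer spec

def rbd(*args):
    subprocess.check_call(["rbd"] + list(args))

# Mirror selected images only ("image" mode instead of whole-pool mode).
rbd("mirror", "pool", "enable", POOL, "image")
# Journal-based mirroring needs the journaling feature (which in turn
# requires exclusive-lock, enabled by default on current images).
rbd("feature", "enable", "{}/{}".format(POOL, IMAGE), "journaling")
rbd("mirror", "image", "enable", "{}/{}".format(POOL, IMAGE))
# Tell this cluster about the peer; the same has to be done the other way
# around on the backup cluster.
rbd("mirror", "pool", "peer", "add", POOL, PEER)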

> Btw, is this one (https://benji-backup.me/) the Benji you're referring to?  Thanks a lot.

Exactly :-). 

Cheers,
	Oliver

> 
> 
> 
> Cheers,
> /ST Wong
> 
> 
> 
> -----Original Message-----
> From: Oliver Freyermuth <freyermuth@xxxxxxxxxxxxxxxxxx> 
> Sent: Tuesday, September 18, 2018 6:09 PM
> To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
> Cc: Peter Wienemann <wienemann@xxxxxxxxxxxxxxxxxx>
> Subject: Re: backup ceph
> 
> Hi,
> 
> we're also just starting to collect experiences, so we have nothing to share (yet). However, we are evaluating Benji (a well-maintained fork of Backy2 which can also compress), trimming and fs-freezing the VM disks shortly before the backup,
> and additionally keeping a few daily and weekly snapshots. 
> We may add RBD mirroring to a backup system in the future. 
> 
> Since our I/O requirements are not too high, I guess we will be fine either way, but any shared experience is very welcome. 
> 
> Cheers,
> 	Oliver
> 
> On 18.09.18 at 11:54, ST Wong (ITSC) wrote:
>> Hi,
>>
>>  
>>
>> We're newbies to Ceph.  Besides using incremental snapshots with RBD to back up data from one Ceph cluster to another running Ceph cluster, or using backup tools like backy2, is there any other recommended way to back up Ceph data?  Someone here suggested taking a daily snapshot of each RBD image and keeping 30 days' worth of snapshots in place of backups.  I wonder if this is practical and whether performance will be impacted.
>>
>>  
>>
>> Thanks a lot.
>>
>> Regards
>>
>> /st wong
>>
>>
>>
> 
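
P.S. Regarding the "daily snapshot, kept for 30 days" idea from the original question: combined with the trim/freeze step mentioned above, a rotation could look roughly like this (untested sketch; the libvirt domain, pool and image names are placeholders):

#!/usr/bin/env python3
# Sketch: trim + freeze a VM via the QEMU guest agent, take a dated RBD
# snapshot, thaw, and prune snapshots older than 30 days.
# Domain, pool and image names are placeholders.
import datetime
import json
import subprocess

DOMAIN = "vm01"                    # placeholder libvirt domain
POOL, IMAGE = "vms", "vm-disk-1"   # placeholder RBD pool/image
PREFIX = "daily-"
KEEP_DAYS = 30

def run(*cmd):
    return subprocess.check_output(list(cmd))

today = datetime.date.today()
snap = PREFIX + today.isoformat()

# Discard unused blocks and freeze the guest filesystems (needs the
# qemu-guest-agent inside the VM) so the snapshot is consistent.
run("virsh", "domfstrim", DOMAIN)
run("virsh", "domfsfreeze", DOMAIN)
try:
    run("rbd", "snap", "create", "{}/{}@{}".format(POOL, IMAGE, snap))
finally:
    run("virsh", "domfsthaw", DOMAIN)

# Prune snapshots older than KEEP_DAYS, based on the date in the name.
out = run("rbd", "snap", "ls", "{}/{}".format(POOL, IMAGE), "--format", "json")
for s in json.loads(out.decode("utf-8")):
    name = s["name"]
    if not name.startswith(PREFIX):
        continue
    stamp = datetime.datetime.strptime(name[len(PREFIX):], "%Y-%m-%d").date()
    if (today - stamp).days > KEEP_DAYS:
        run("rbd", "snap", "rm", "{}/{}@{}".format(POOL, IMAGE, name))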

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
