Re: backup ceph

On 09/19/2018 06:26 PM, ST Wong (ITSC) wrote:
> Hi,  thanks for your help.
> 
>> Snapshots are exported remotely, thus they are really backups 
>> One or more snapshots are kept on the live cluster, for faster recovery: if a user broke his disk, you can restore it really fast
>> Backups can be inspected on the backup cluster
> 
> For "Snapshots are exported remotely",  means using "rbd export-diff ..." on live cluster, and "rbd import-diff ..." on remote cluster, and we've to prepare script/cronjobs to do that regularly and automatically?  

This is what backurne does.
It is basically an orchestrator around those commands, with logic for
retention policies, optional Proxmox integration and other duct tape :)
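
If you want to script it yourself instead, the core of it is one snapshot per day plus an incremental export piped to the backup cluster. A rough sketch (pool, image and snapshot names are only examples):

# on the live cluster: create today's snapshot
rbd snap create rbd/my-image@backup-2018-09-20
# send only the delta since yesterday's snapshot to the backup cluster
rbd export-diff --from-snap backup-2018-09-19 rbd/my-image@backup-2018-09-20 - \
  | ssh backup1 rbd import-diff - rbd/my-image

The very first run has to seed the image on the backup cluster (a full export/import, or an export-diff without --from-snap into an existing image), and old snapshots must be rotated on both sides: that is exactly the retention logic backurne handles for you.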



> 
> Thanks again.
> /st wong
> -----Original Message-----
> From: ceph@xxxxxxxxxxxxxx <ceph@xxxxxxxxxxxxxx> 
> Sent: Wednesday, September 19, 2018 4:16 PM
> To: ST Wong (ITSC) <ST@xxxxxxxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  backup ceph
> 
> For cephfs & rgw, it all depends on your needs, as with rbd You may want to trust blindly Ceph Or you may backup all your data, just in case (better safe than sorry, as he said)
> 
> To my knowledge, there is little or no impact from keeping a large number of snapshots on a cluster.
> 
> With rbd, you can indeed "map" a rbd volume (or snapshot): this will get you a block device, whose filesystem can be mounted freely:
> root@backup1:~# rbd map 'my-image' --snap 'super-snapshot'
> /dev/rbd1
> root@backup1:~# mkdir /tmp/snapshot
> root@backup1:~# mount /dev/rbd1 /tmp/snapshot
> # here, you can access your files
> root@backup1:~# umount /dev/rbd1
> root@backup1:~# rbd unmap 'my-image' --snap 'super-snapshot'
> 
> (note that this works because the filesystem directly uses the block
> device: there is no partition table or the like. If there is one, you must use kpartx between the 'map' and the 'mount', to map the partitions too; see the sketch below)
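> 
> For the partitioned case, it could look like this (device and partition-mapping names are only examples; check the kpartx output for the real names):
> root@backup1:~# rbd map 'my-image' --snap 'super-snapshot'
> /dev/rbd1
> root@backup1:~# kpartx -av /dev/rbd1
> root@backup1:~# mount /dev/mapper/rbd1p1 /tmp/snapshot
> # inspect the files, then clean up:
> root@backup1:~# umount /tmp/snapshot
> root@backup1:~# kpartx -dv /dev/rbd1
> root@backup1:~# rbd unmap /dev/rbd1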
> 
> FYI, at work, we are using this tool¹ to back up our Proxmox VMs:
> https://github.com/JackSlateur/backurne
> 
> -> Snapshots are exported remotely, thus they are really backups
> -> One or more snapshots are kept on the live cluster, for faster recovery: if a user broke his disk, you can restore it really fast
> -> Backups can be inspected on the backup cluster
> 
> Using rbd, you can also do a "duplicate-and-restore" kind of thing. Let's say, for instance, that you have a VM with a single disk. The user removed a lot of files by mistake and wants them back, but he does not want to fully restore the disk, because some changes must be kept. And, even more, he does not know exactly which files have been removed. In such a scenario, you can add a new disk to that VM, where that new disk is the backup of the first disk. You can then mount that disk to, say, /backup, and allow the user to inspect it freely (just for you to understand what can be done using rbd); see the sketch below.
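> 
> On the rbd side, that extra disk can simply be a clone of the backup snapshot, so the backup itself stays untouched. A rough sketch (pool, image and snapshot names are only examples):
> root@backup1:~# rbd snap protect rbd/vm-disk@backup-2018-09-19
> root@backup1:~# rbd clone rbd/vm-disk@backup-2018-09-19 rbd/vm-disk-restore
> # attach rbd/vm-disk-restore to the VM as a second disk and mount it on /backup
> # once the user is done, drop the clone and unprotect the snapshot:
> root@backup1:~# rbd rm rbd/vm-disk-restore
> root@backup1:~# rbd snap unprotect rbd/vm-disk@backup-2018-09-19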
> 
> Regards,
> 
> [¹] I made dis
> 
> On 09/19/2018 03:40 AM, ST Wong (ITSC) wrote:
>> Hi,
>>
>> Thanks for your help.
>>
>>> I assume that you are speaking of rbd only
>> Yes, as we just started studying Ceph, we are only aware of backing up RBD. Are there other areas that need backup? Sorry for my ignorance.
>>
>>> Taking snapshots of rbd volumes and keeping all of them on the cluster is fine.
>>> However, this is no backup: a snapshot is only a backup if it is exported off-site.
>> Will this scheme (e.g. keeping 30 daily snapshots) impact performance?
>> Besides, can we somehow "mount" the snapshot of the nth day to get back a particular file? Sorry, we're still thinking in traditional SAN snapshot concepts.
>>
>> Sorry to bother, and thanks a lot.
>>
>> Rgds,
>> /st wong
>>
>> -----Original Message-----
>> From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> On Behalf Of 
>> ceph@xxxxxxxxxxxxxx
>> Sent: Tuesday, September 18, 2018 8:04 PM
>> To: ceph-users@xxxxxxxxxxxxxx
>> Subject: Re:  backup ceph
>>
>> Hi,
>>
>> I assume that you are speaking of rbd only
>>
>> Taking snapshots of rbd volumes and keeping all of them on the cluster is fine.
>> However, this is no backup: a snapshot is only a backup if it is exported off-site.
>>
>> On 09/18/2018 11:54 AM, ST Wong (ITSC) wrote:
>>> Hi,
>>>
>>> We're newbies to Ceph. Besides using incremental snapshots with RBD to back up data from one Ceph cluster to another running Ceph cluster, or using backup tools like backy2, is there any recommended way to back up Ceph data? Someone here suggested taking daily snapshots of RBD and keeping 30 days' worth to replace backups. I wonder if this is practical and if performance will be impacted...
>>>
>>> Thanks a lot.
>>> Regards
>>> /st wong
>>>
>>>
>>>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



