Hardware failures are just one possible cause. If you value your data you will have a backup, preferably to some sort of removable media that can be taken offsite, like those things everybody keeps saying are dead... what are they called... oh yeah, tapes. :) An online copy of your data on some sort of large JBOD or a 2nd Ceph cluster is a good idea if you need faster access, but I wouldn't rely on it as my only backup.

There are many things that can cause data loss; failing hardware is just one. As can be seen from many posts on this list, bugs in Ceph or user error are a much more common cause of data loss, and triple replication won't protect you from them. Thought should also be given to malicious actions by internal staff with grievances or by external hackers (e.g. ransomware). In these cases even online backups like rsync etc. might not protect you, as that data can be accessed and deleted at the same time as the live data. I predict these sorts of incidents will become more common in the near future.

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of ????????????, ????????

Hello!

The answer pretty much depends on your fears. If you are afraid of hardware failures, you could keep more than the standard 3 copies, configure your failure domain properly, and so on. If you are afraid of some big disaster that could hurt all of your hardware, you could consider making an async replica to a cluster in another datacenter, on another continent. If you are afraid of some kind of cluster software issue, then you can build another cluster and use third-party tools to back up data there, but as you correctly noticed it will not be too convenient. As a common solution I would suggest using the same cluster for backups as well (maybe just a different pool/OSD tree with less expensive drives) - in most cases it's enough.
Vladimir

2017-02-14 14:15 GMT+05:00 Götz Reinicke <goetz.reinicke@xxxxxxxxxxxxxxx>:
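For what it's worth, the per-pool knobs Vladimir mentions (raising the replica count, and keeping backups in a separate pool that a CRUSH rule pins to cheaper drives) can also be driven from the python-rados bindings rather than the CLI. A rough, untested sketch - the pool names, PG count and CRUSH rule id below are only placeholders, and the equivalent "ceph osd pool ..." commands do the same thing:

import json
import rados

# Connect with the standard admin config/keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

def mon_cmd(**kwargs):
    # mon_command() takes a JSON-encoded command and returns
    # (return code, output buffer, status string).
    ret, out, err = cluster.mon_command(json.dumps(kwargs), b'')
    if ret != 0:
        raise RuntimeError(err)
    return out

# More than the standard 3 copies on an existing pool ("rbd" is a placeholder).
mon_cmd(prefix='osd pool set', pool='rbd', var='size', val='4')
mon_cmd(prefix='osd pool set', pool='rbd', var='min_size', val='2')

# A separate pool for backups; 64 PGs is a placeholder, size it for your cluster.
mon_cmd(prefix='osd pool create', pool='backups', pg_num=64)

# Pin the backup pool to a CRUSH rule that selects the cheaper drives.
# Rule id 1 is a placeholder; the variable is crush_ruleset on Jewel-era
# releases and crush_rule on Luminous and later.
mon_cmd(prefix='osd pool set', pool='backups', var='crush_ruleset', val='1')

cluster.shutdown()

The async replica to another datacenter is a different mechanism (rbd-mirror for RBD, multisite for RGW) rather than a pool setting, so it isn't shown here.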
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com