Effects of restoring a cluster's mon from an older backup

I'm experimenting with single-host Ceph use cases, where HA is not
important but data durability is.

How does a Ceph cluster react to its (sole) mon being rolled back to an
earlier state? The idea here is that the mon storage may not be
redundant, but would be backed up atomically (e.g. via an LVM snapshot
and dump), say, daily. If the cluster goes down and is then brought back
up with a mon backup that is several hours to days old, while the OSDs
are up to date, what are the potential consequences?
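
Concretely, what I have in mind is something like the sketch below. The
VG/LV names, mount point and backup destination are just placeholders
for wherever the mon store actually lives; nothing here is
Ceph-specific beyond the path being backed up.

import subprocess
from datetime import date

VG = "vg0"                    # volume group holding the mon store
LV = "ceph-mon"               # LV containing /var/lib/ceph/mon
SNAP = "ceph-mon-backup"      # temporary snapshot name
MNT = "/mnt/ceph-mon-backup"  # where the snapshot gets mounted
DEST = f"/backup/mon-store-{date.today()}.tar.gz"

def run(*cmd):
    subprocess.run(cmd, check=True)

# Take a point-in-time snapshot so the mon's store is captured atomically.
run("lvcreate", "--snapshot", "--size", "1G", "--name", SNAP, f"{VG}/{LV}")
try:
    run("mkdir", "-p", MNT)
    run("mount", "-o", "ro", f"/dev/{VG}/{SNAP}", MNT)
    try:
        # Dump the frozen copy of the mon store to the backup target.
        run("tar", "czf", DEST, "-C", MNT, ".")
    finally:
        run("umount", MNT)
finally:
    # Drop the snapshot so it doesn't fill up and invalidate itself.
    run("lvremove", "--force", f"{VG}/{SNAP}")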

Of course I expect maintenance operations to be affected (obviously any
OSDs added or removed since the backup would likely get confused). But
what about regular operation, things like snapshots and snapshot
ranges? Is this likely to cause data loss, or would the OSDs and
clients be largely unaffected as long as the cluster configuration has
not changed?

There's a documented way of rebuilding the mon store from data held by
the OSDs:

http://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
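
For reference, on a single host that procedure essentially boils down
to harvesting the cluster maps from each (stopped) OSD into a scratch
store and then rebuilding the mon store from them; roughly something
like the following, where the paths and keyring location are just
assumptions and the linked page is authoritative:

import glob
import subprocess

MON_STORE = "/root/mon-store"                     # scratch store to build up
KEYRING = "/etc/ceph/ceph.client.admin.keyring"   # assumed keyring path

def run(*cmd):
    subprocess.run(cmd, check=True)

run("mkdir", "-p", MON_STORE)

# Pull the latest maps out of each OSD's store; the OSDs must be
# stopped while this runs.
for osd_path in sorted(glob.glob("/var/lib/ceph/osd/ceph-*")):
    run("ceph-objectstore-tool",
        "--data-path", osd_path,
        "--op", "update-mon-db",
        "--mon-store-path", MON_STORE)

# Rebuild the monitor store from the collected maps; auth entries not
# in the supplied keyring have to be recreated afterwards.
run("ceph-monstore-tool", MON_STORE, "rebuild", "--",
    "--keyring", KEYRING)

Presumably the advantage would be that the rebuilt store reflects the
OSDs' current maps rather than whatever epoch the backup happened to
capture.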

Would that be preferable to just restoring the mon from a backup? What
about the MDS map?

-- 
Hector Martin (hector@xxxxxxxxxxxxxx)
Public Key: https://mrcn.st/pub


