Soft removal of RBD images

Hi,

Since Ceph Hammer we can protect pools from being removed from the
cluster, but we can't protect against this:

$ rbd ls|xargs -n 1 rbd rm

That would remove every RBD image that is not currently open from the cluster.

This requires direct access to your Ceph cluster and keys with the
proper permissions, but it could also be that somebody gains access to
an OpenStack or CloudStack API with the proper credentials and issues a
removal for all volumes.

The *Stack platform will then remove the RBD images, and you have either
lost the data or face a very long restore procedure.

What about a soft delete for RBD images? I don't know how it should
work, since if you gain native RADOS access you can still remove all
objects directly:

$ rados -p rbd ls|xargs -n 1 rados -p rbd rm

I don't have a design idea yet, but it's something that came to mind.
I'd personally like an extra safeguard before Ceph actually removes
the data.

But for example:

When an RBD image is removed we set a "removed" bit in the RBD header,
and every backing RADOS object also gets a "removed" bit set.

After a grace period X, the OSD which is primary for a PG starts to
remove all objects which have that bit set.

In the meantime you can still get the RBD image back by reverting it in
a special way, for example with a special cephx capability.
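To make the idea concrete, here is a minimal sketch of that lifecycle in Python. Everything here is invented for illustration (the class, the grace period, the "rbd-revert" capability name); nothing like this exists in Ceph today, it just models the mark/purge/revert flow described above.

```python
import time

# Assumed grace period ("X" above) before the primary OSD purges
# objects with the "removed" bit set. Entirely hypothetical.
GRACE_PERIOD = 14 * 24 * 3600


class SoftDeleteStore:
    """Toy model of soft-deleted RBD images; not a real Ceph API."""

    def __init__(self):
        # name -> {"removed": bool, "removed_at": float or None}
        self.images = {}

    def create(self, name):
        self.images[name] = {"removed": False, "removed_at": None}

    def remove(self, name):
        # Instead of deleting, only set the "removed" bit and
        # record when it was set.
        img = self.images[name]
        img["removed"] = True
        img["removed_at"] = time.time()

    def revert(self, name, caps):
        # Reverting requires a special capability, e.g. a cephx cap
        # (the name "rbd-revert" is made up for this sketch).
        if "rbd-revert" not in caps:
            raise PermissionError("missing revert capability")
        img = self.images[name]
        img["removed"] = False
        img["removed_at"] = None

    def purge(self, now=None):
        # What the primary OSD would do periodically: actually delete
        # objects whose "removed" bit has been set for longer than X.
        now = time.time() if now is None else now
        for name in list(self.images):
            img = self.images[name]
            if img["removed"] and now - img["removed_at"] >= GRACE_PERIOD:
                del self.images[name]


store = SoftDeleteStore()
store.create("vm-disk-1")
store.remove("vm-disk-1")
store.purge()  # within the grace period: the image survives
print("vm-disk-1" in store.images)
store.revert("vm-disk-1", caps={"rbd-revert"})
print(store.images["vm-disk-1"]["removed"])
```

The point of the sketch is only that removal becomes a two-phase operation: a cheap, reversible "mark" followed by a delayed, irreversible purge, with the revert path gated behind a capability the normal client keys don't have.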

This goes a bit in the direction of soft pool removals as well; the two
might be combined.

Comments?

-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
