Re: Soft removal of RBD images

On Fri, Nov 6, 2015 at 2:03 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
> Hi,
>
> Since Ceph Hammer we can protect pools from being removed from the
> cluster, but we can't protect against this:
>
> $ rbd ls|xargs -n 1 rbd rm
>
> That would remove all RBD images from the cluster that are not
> currently open.
>
> This requires direct access to your Ceph cluster and keys with the
> proper permissions, but it could also be that somebody gains access to
> an OpenStack or CloudStack API with the proper credentials and issues
> a removal of all volumes.
>
> *Stack will then remove the RBD image and you have just lost the data,
> or you face a very long restore procedure.
>
> What about a soft-delete for RBD images? I don't know how it should
> work, since if you gain native RADOS access you can still remove all
> objects:
>
> $ rados -p rbd ls|xargs -n 1 rados -p rbd rm
>
> I don't have a design idea yet, but it's something that came to mind.
> I'd personally like a double check before Ceph decides to remove the
> data.
>
> But for example:
>
> When an RBD image is removed we set the "removed" bit in the RBD
> header, and every RADOS object also gets a "removed" bit set.
>
> After a period X, the OSD which is primary for a PG starts to remove
> all objects which have that bit set.
>
> In the meantime you can still get the RBD image back by reverting it
> in a special way, for example with a special cephx capability.
>
> This goes a bit in the direction of soft pool removals as well; the
> two might be combined.
>
> Comments?
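The proposed flow (flag on removal, delayed purge after a grace period, privileged revert in between) could be sketched roughly as below. This is a hypothetical illustration only: `SoftDeleteStore`, `GRACE_PERIOD`, and `purge_expired` are made-up names for this sketch, not part of any Ceph or librbd API.

```python
import time

# Illustrative grace period "X" after which flagged objects are purged.
GRACE_PERIOD = 7 * 24 * 3600  # e.g. one week

class SoftDeleteStore:
    """Hypothetical sketch of the soft-delete idea from the thread."""

    def __init__(self):
        # name -> {"removed_at": timestamp or None}
        self.images = {}

    def create(self, name):
        self.images[name] = {"removed_at": None}

    def remove(self, name):
        # Instead of deleting, set the "removed" bit with a timestamp.
        self.images[name]["removed_at"] = time.time()

    def revert(self, name):
        # A privileged revert clears the flag before the purge runs.
        self.images[name]["removed_at"] = None

    def purge_expired(self, now=None):
        # What the primary OSD would do after period X: actually delete
        # everything whose "removed" flag is older than the grace period.
        now = time.time() if now is None else now
        expired = [n for n, meta in self.images.items()
                   if meta["removed_at"] is not None
                   and now - meta["removed_at"] >= GRACE_PERIOD]
        for n in expired:
            del self.images[n]
        return expired
```

The key trade-off Greg raises below applies directly here: a flagged-but-unpurged image still consumes space until `purge_expired` runs, which is a problem on a full cluster.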

Besides the work of implementing lazy object deletes, I'm not sure
it's a good idea — when somebody's cluster fills up (and there's
always somebody!) we need a way to do deletes, and for that data to go
away immediately. We have enough trouble with people testing cache
pools and finding out there isn't instant deletion of the underlying
data. ;)
-Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com