Re: possibility to delete all zeros

The following advice assumes these images have no associated snapshots (retained non-sparse snapshots would continue to consume the storage space):

Depending on how your images are set up, you could snapshot and clone each image, flatten the newly created clone, and delete the original image -- this results in a new, sparse image. You will have to power the guest VM down for a short period (long enough to rename the original image to a temporary name, snapshot it, and clone it back to the original image name).
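A sketch of that sequence with the `rbd` CLI is below. The pool and image names (`mypool/vm-disk`, the `resparse` snapshot name) are hypothetical placeholders; substitute your own, and run the rename/snapshot/clone steps only while the guest is powered off.

```shell
# 1. With the guest VM stopped, rename the original image aside:
rbd rename mypool/vm-disk mypool/vm-disk-old

# 2. Snapshot the image and protect the snapshot (clones require
#    a protected snapshot):
rbd snap create mypool/vm-disk-old@resparse
rbd snap protect mypool/vm-disk-old@resparse

# 3. Clone it back to the original name, then restart the guest:
rbd clone mypool/vm-disk-old@resparse mypool/vm-disk

# 4. Flatten the clone so it no longer depends on the parent;
#    only allocated (non-zero) data is copied, yielding a sparse image:
rbd flatten mypool/vm-disk

# 5. Once flattened, remove the snapshot and the old image:
rbd snap unprotect mypool/vm-disk-old@resparse
rbd snap rm mypool/vm-disk-old@resparse
rbd rm mypool/vm-disk-old
```

Note that cloning requires format 2 RBD images; the flatten in step 4 can run while the guest is back up, since the clone is already serving I/O.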

If your images don't support cloning (cloning requires format 2 images), the next best approach is to copy each original image to a new image, which again produces a new, sparse image. Your VM downtime will be much longer with this approach, since the guest must stay down for the entire copy.
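For the copy approach, a minimal sketch using `rbd cp` follows; again the names are hypothetical, and the guest must remain powered off for the full duration of the copy:

```shell
# Copy the image; zeroed regions are not re-allocated in the destination:
rbd cp mypool/vm-disk mypool/vm-disk-sparse

# Swap the new image into place, keeping the original until verified:
rbd rename mypool/vm-disk mypool/vm-disk-old
rbd rename mypool/vm-disk-sparse mypool/vm-disk

# After confirming the guest boots cleanly from the new image:
rbd rm mypool/vm-disk-old
```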

If you can hold out, a future version of Ceph will include a new compare-extent operation, which would let you safely combine a "compare the full object extent to zero" check with a remove operation for each RBD image object. That would allow you to perform this type of cleanup while guests are still running, since it avoids the potential data race between the guest and your cleanup operation.

--

Jason

----- Original Message ----- 

> From: "Stefan Priebe - Profihost AG" <s.priebe@xxxxxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Sent: Friday, October 2, 2015 8:16:52 AM
> Subject:  possibility to delete all zeros

> Hi,

> we accidentally wrote zeros to all our rbd images, so the images are no
> longer thin provisioned. Since we do not have access to the qemu guests
> running those images, is there any other option to trim them again?

> Greets,
> Stefan

> Excuse my typos; sent from my mobile phone.

> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


