Re: Objects not removed (completely) when removing a rbd image

Hi,

I just looked through the rbd driver of OpenStack Cinder. It seems there is no additional clear_volume step implemented for the rbd driver. In my case, the objects of this rbd image were only partially deleted, so I suspect the issue is on the Ceph side rather than in the Cinder driver.
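
If the image itself is gone but its data objects remain, one possible interim cleanup is removing the orphaned objects directly with rados. A minimal sketch, assuming a pool named "volumes" and the rbd_data.<id> prefix recorded from "rbd info" before the image was deleted (both are placeholders, not values from this thread):

  # CAUTION: only run this against a prefix you have verified belongs
  # to the already-deleted image.
  for obj in $(rados -p volumes ls | grep '^rbd_data\.<id>\.'); do
      rados -p volumes rm "$obj"
  done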

br,
Xu Yun

> On Jan 15, 2020, at 19:36, EDH - Manuel Rios <mriosfer@xxxxxxxxxxxxxxxx> wrote:
> 
> Hi
> 
> For huge volumes in OpenStack and Ceph, set this parameter in your Cinder configuration:
> 
> volume_clear_size = 50
> 
> That will wipe only the first 50 MB of the volume and then ask Ceph to delete it fully, instead of wiping the whole disk with zeros, which on huge volumes sometimes causes timeouts.
> 
> In our deployment that was the solution (OpenStack Queens here).
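> 
> For reference, this is roughly how that setting might look in cinder.conf; the backend section name is illustrative, and the value is in MB:
> 
>   [rbd-backend]              # illustrative section name
>   volume_clear_size = 50     # wipe only the first 50 MB of a deleted volume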
> 
> 
> -----Original Message-----
> From: Eugen Block <eblock@xxxxxx> 
> Sent: Wednesday, January 15, 2020 8:51
> To: ceph-users@xxxxxxx
> Subject:  Re: Objects not removed (completely) when removing a rbd image
> 
> Hi,
> 
> this might happen if you try to delete images/instances/volumes in OpenStack that are somehow linked, e.g. if there are snapshots. I have experienced this in Ocata, too: deleting a base image appeared to succeed, but existing clones still depended on it, so essentially only the OpenStack database was updated while the base image continued to exist within Ceph.
> 
> Try to figure out whether that is the case here, too. If it's something else, check the logs in your OpenStack environment; maybe they reveal something. Also check the Ceph logs.
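> 
> To check whether a deleted base image is still referenced, something like the following may help (pool, image, and snapshot names are placeholders):
> 
>   rbd -p images snap ls <image>        # any snapshots left?
>   rbd children images/<image>@<snap>   # clones depending on a snapshot
>   rbd status images/<image>            # watchers still attached?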
> 
> Regards,
> Eugen
> 
> 
> Quoting 徐蕴 <yunxu@xxxxxx>:
> 
>> Hello,
>> 
>> My setup is Ceph working with OpenStack Pike. When I deleted an image,
>> I found that the space was not reclaimed. I checked with rbd ls and
>> confirmed that the image had disappeared. But when I checked the
>> objects with rados ls, most of the objects named rbd_data.xxx still
>> existed in my cluster. The rbd_object_map and rbd_header objects were
>> already deleted. I waited for several hours and no further deletion
>> happened. Is this a known issue, or is something wrong with my configuration?
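>> 
>> A sketch of that check, with "volumes" and the image name as placeholders: record the data-object prefix before deletion, then count matching objects afterwards:
>> 
>>   # before deletion: note the prefix of the image's data objects
>>   rbd info volumes/<image> | grep block_name_prefix
>>   #   block_name_prefix: rbd_data.<id>
>> 
>>   # after deletion: how many rbd_data objects are left behind?
>>   rados -p volumes ls | grep -c '^rbd_data\.<id>\.'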
>> 
>> br,
>> Xu Yun
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



