Re: Ceph - reclaim free space - aka trim rbd image

I'll refer you to the man page for blkdiscard [1]. Since it operates
on the block device, it doesn't know about filesystem holes and
instead will discard all data specified (i.e. it will delete all your
data).

[1] http://man7.org/linux/man-pages/man8/blkdiscard.8.html
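
To make that concrete -- the device name below is hypothetical, assuming
the image were mapped on the hypervisor as /dev/rbd0:

        # DANGER: discards every sector of the device, i.e. deletes all data
        blkdiscard /dev/rbd0

        # restricting the range only limits which data is destroyed;
        # blkdiscard still cannot tell live data from free space
        blkdiscard --offset 0 --length 1073741824 /dev/rbd0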

On Thu, Mar 2, 2017 at 9:54 AM, Massimiliano Cuttini <max@xxxxxxxxxxxxx> wrote:
>
>
> On 02/03/2017 14:11, Jason Dillaman wrote:
>>
>> On Thu, Mar 2, 2017 at 8:09 AM, Massimiliano Cuttini <max@xxxxxxxxxxxxx>
>> wrote:
>>>
>>> Ok,
>>>
>>> then, if the command comes from the hypervisor that holds the image, is it
>>> safe?
>>
>> No, it needs to be issued from within the guest VM -- not from the
>> hypervisor that is running it. The reason is that the image is a black
>> box to the hypervisor, which cannot know what sectors can be safely
>> discarded.
>
> This is true if we are talking about the filesystem.
> So for the command
>
>         fstrim
>
> that would certainly be the case.
> But if we are talking about the block device, the command
>
>         blkdiscard
>
> cannot be run in a VM that sees its images as local disks without any
> thin provisioning.
> That command would have to be issued by the hypervisor, not the guest.
>
> ... or not?
>
>
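
A minimal sketch of the distinction, with hypothetical paths -- what
matters is not which host runs the command but whether the caller can see
the filesystem's free-space map:

        # inside the guest VM: trims only the blocks the mounted
        # filesystem reports as free -- safe
        fstrim -v /

        # on the hypervisor: the mapped image is an opaque block device,
        # so this discards everything, live data included
        blkdiscard /dev/rbd/rbd/myimage
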
>>> But if a guest VM on the same hypervisor tries to use the image, what
>>> happens?
>>
>> If you trim from outside the guest, I would expect it could corrupt the
>> image (if the fstrim tool doesn't stop you first, since the filesystem
>> isn't mounted).
>
>
> OK, that makes sense for fstrim, but what about blkdiscard?
>
>
>>> Are these tools safe? (i.e. do they exit safely with an error instead
>>> of attempting the command and ruining the image?)
>>> Should I take a snapshot before going ahead?
>>>
>> As I mentioned, the only safe way to proceed would be to run the trim
>> from within the guest VM or wait until Ceph adds the rbd CLI tooling
>> to safely sparsify an image.
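
For what it's worth, once that tooling exists it would look something like
the following (the command name and image spec are assumptions about a
future release, shown only for illustration):

        # deallocate extents of the image that are already all zeroes;
        # unlike blkdiscard, this would not touch live data
        rbd sparsify rbd/myimage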



-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


