Hi,

The doc at http://ceph.com/docs/next/rbd/qemu-rbd/ says that the discard operation uses the IDE driver, and that the virtio driver does not support it. As far as I know, the virtio driver improves the efficiency of a KVM guest OS on a local disk, so I wonder whether virtio has the same advantage with a Ceph block disk.

About my confusion over the usage increasing on writes: assume we do not use the discard option in the KVM config. What really worries me is that a user with a VM backed by a Ceph disk could fill the cluster with useless data, especially if the user does so intentionally. For example, if a user runs dd against a Ceph disk many times (I mean create, delete, create, delete, ...), this would be a problem. (Sketches of the discard config and a guest-side test loop follow the quoted thread below.)

------------------
lyz_pro
2014-01-11

-------------------------------------------------------------
From: Bradley Kite <bradley.kite@xxxxxxxxx>
Date: 2014-01-10 17:50
To: lyz_pro
Cc: ceph-users
Subject: Re: A confuse of using rbd image disk with kvm

Hi,

Ceph uses thin provisioning, so it does not allocate the full block device when you create it with qemu-img; it only allocates blocks as you write data to them.

However, you can enable TRIM/DISCARD in the VM as per the documentation here: http://ceph.com/docs/next/rbd/qemu-rbd/ - this should allow Ceph to reclaim the deleted space.

Regards
--
Brad.

On 10 January 2014 09:21, lyz_pro <lyz_pro@xxxxxxx> wrote:
> Hi,
> I have a question:
> I used qemu-img to create an RBD disk and attached it to a VM.
> Then I formatted the disk as ext3 and mounted it in the VM.
> After the above steps, I dd'd a 1 GB file onto the RBD disk in the VM,
> then removed the file, dd'd another 1 GB file, and removed it again.
> I did this several times.
>
> I ran `watch ceph -s` during the dd operations and found that the used
> space in the Ceph cluster only ever increases; it never decreases.
>
> This confuses me. Could anyone kindly explain why this happens?
> Is there something I missed?
>
> Additional info:
> The qemu and libvirt versions I use are:
> Compiled against library: libvirt 0.10.2
> Using library: libvirt 0.10.2
> Using API: QEMU 0.10.2
> Running hypervisor: QEMU 0.12.1
>
> ceph-0.72.2-0.el6.x86_64
> qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64
> qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
>
> --------------
> lyz_pro
> 2014-01-10
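
For reference, the discard setup on that doc page amounts to attaching the RBD image through an IDE device with a discard granularity set. A rough sketch (rbd:data/squeeze is just a placeholder pool/image name; if I remember the page correctly, it lists Ceph >= 0.46 and QEMU >= 1.1 as requirements for discard):

    # attach the RBD image via an IDE device so the guest can issue TRIM/DISCARD
    qemu -m 1024 -drive format=raw,file=rbd:data/squeeze,id=drive1,if=none \
         -device driver=ide-hd,drive=drive1,discard_granularity=512

If that version requirement is right, then the QEMU 0.12.1 reported in the quoted message is too old to issue discards at all, which by itself would explain why the used space never goes down.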
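
Once discard is wired up, a guest-side test like the following should show cluster usage levelling off instead of growing without bound. This is only a sketch: it assumes the RBD image shows up as /dev/sdb inside the guest, and it uses ext4 because ext4's "-o discard" mount option issues TRIM when files are deleted (discard support on ext3 depends heavily on the kernel):

    # inside the guest
    mkfs.ext4 /dev/sdb
    mount -o discard /dev/sdb /mnt

    # create and delete a 1 GB file a few times
    for i in 1 2 3; do
        dd if=/dev/zero of=/mnt/testfile bs=1M count=1024
        rm -f /mnt/testfile
        sync
    done

    # alternatively, without "-o discard", reclaim deleted space in batches
    fstrim /mnt

    # on a ceph node, watch the used space settle
    watch ceph -s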