Re: Ceph + Libvirt + QEMU-KVM

In order to be fast, the 'rbd du' command counts any existing data object as fully used on disk.  Therefore, if you have a 4MB object size, writing 1 byte to each of two objects results in 'rbd du' reporting 8MB in use.  You can simulate the same result via 'rbd diff':

rbd diff --whole-object <IMAGE-SPEC> | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'  (should be similar to rbd du output -- let me know if it's not)

-- vs --

rbd diff <IMAGE-SPEC> | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }' (removes sparse space)
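
For illustration only (a hypothetical sketch, not from the original mail -- the pool/image names are invented and it assumes the krbd kernel client can map the image), touching one byte in each of two 4MB objects shows the whole-object accounting:

rbd create rbdpool/sparse-demo --size 1024
DEV=$(sudo rbd map rbdpool/sparse-demo)
echo -n x | sudo dd of="$DEV" bs=1 count=1 conv=notrunc               # 1 byte in object 0
echo -n x | sudo dd of="$DEV" bs=1 count=1 seek=4194304 conv=notrunc  # 1 byte in object 1 (4MiB offset)
sync
sudo rbd unmap "$DEV"
rbd du rbdpool/sparse-demo     # reports ~8M used (two whole 4MB objects)
rbd diff rbdpool/sparse-demo | awk '{ SUM += $2 } END { print SUM " bytes" }'   # far less than 8MB
rbd rm rbdpool/sparse-demo     # clean up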

Another possibility: if you wrote a bunch of zeros to the RBD image, it will no longer be sparsely allocated -- but converting it to qcow2 will re-sparsify the resulting image.
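
For example (a sketch only -- it assumes qemu-img was built with rbd support, and the output path is just a placeholder), the conversion can read straight from RBD:

qemu-img convert -f raw -O qcow2 rbd:storage1/CentOS3 /var/tmp/CentOS3.qcow2
qemu-img info /var/tmp/CentOS3.qcow2   # 'disk size' should now be well below the 20G virtual size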

-- 

Jason Dillaman 


----- Original Message ----- 

> From: "Bill WONG" <wongahshuen@xxxxxxxxx>
> To: "Jason Dillaman" <dillaman@xxxxxxxxxx>
> Cc: "Mihai Gheorghe" <mcapsali@xxxxxxxxx>, "ceph-users"
> <ceph-users@xxxxxxxxxxxxxx>
> Sent: Thursday, January 28, 2016 12:34:36 PM
> Subject: Re:  Ceph + Libvirt + QEMU-KVM

> Hi Jason,

> I got it, thank you!
> I have a question about thin provisioning. Ceph should be using thin
> provisioning by default, so how come the 'rbd du' command shows almost 19G
> of disk usage for a 20G allocation? When I export the image from Ceph to a
> .img file on an external local disk it is only about 4G, and the OS itself
> is only about 1-2G when provisioned on a local disk using the qcow2 format.
> Any comments or ideas?

> Thank you!

> On Thu, Jan 28, 2016 at 11:51 PM, Jason Dillaman <dillaman@xxxxxxxxxx> wrote:

> > The way to interpret that output is that the HEAD revision of "CentOS3"
> > has about a 700MB delta from the previous snapshot (i.e. 700MB + 18G are
> > used by this image and its snapshot). There probably should be an option
> > in the rbd CLI to generate the full usage for a particular image and all
> > of its snapshots. Right now the only way to do that is to perform a du
> > on the whole pool.

> > --

> > Jason Dillaman

> > ----- Original Message -----

> > > From: "Bill WONG" <wongahshuen@xxxxxxxxx>
> > > To: "Mihai Gheorghe" <mcapsali@xxxxxxxxx>
> > > Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> > > Sent: Thursday, January 28, 2016 6:17:05 AM
> > > Subject: Re: Ceph + Libvirt + QEMU-KVM

> > > Hi Mihai,

> > > The Ceph snapshot looks rather strange: the snapshot is bigger than the
> > > original image.
> > > The original image's actual used size is 684M with 20G provisioned,
> > > but the snapshot's actual used size is ~18G with 20G provisioned.

> > > Any ideas?

> > > ==
> > > [root@compute2 ~]# rbd du CentOS3 -p storage1
> > > warning: fast-diff map is not enabled for CentOS3. operation may be slow.
> > > NAME     PROVISIONED   USED
> > > CentOS3       20480M   684M

> > > [root@compute2 ~]# rbd du CentOS3@snap1 -p storage1
> > > warning: fast-diff map is not enabled for CentOS3. operation may be slow.
> > > NAME           PROVISIONED     USED
> > > CentOS3@snap1       20480M   18124M
> > > ==

> > > qemu-img info rbd:storage1/CentOS3
> > > image: rbd:storage1/CentOS3
> > > file format: raw
> > > virtual size: 20G (21474836480 bytes)
> > > disk size: unavailable
> > > cluster_size: 4194304
> > > Snapshot list:
> > > ID     TAG     VM SIZE   DATE                  VM CLOCK
> > > snap1  snap1       20G   1970-01-01 08:00:00   00:00:00.000

> > > On Thu, Jan 28, 2016 at 7:09 PM, Mihai Gheorghe <mcapsali@xxxxxxxxx> wrote:

> > > > As far as I know, snapshotting with qemu will download a copy of the
> > > > image to local storage and then upload it into Ceph. At least this is
> > > > the default behaviour when taking a snapshot of a running instance in
> > > > OpenStack, and I don't see why it would be any different with
> > > > qemu-kvm. You must use the rbd snap feature to make a copy-on-write
> > > > clone of the image.

> > > > On 28 Jan 2016 12:59, "Bill WONG" <wongahshuen@xxxxxxxxx> wrote:

> > > > > Hi Simon,

> > > > > How did you manage to perform snapshots with the raw format in
> > > > > qemu-kvm VMs?

> > > > > I also found some issues with the libvirt virsh commands and Ceph:
> > > > > --
> > > > > 1) create a storage pool backed by Ceph via virsh
> > > > > 2) create a volume via virsh - virsh vol-create-as rbdpool VM1 100G

> > > > > The problem is here: if we directly create a volume via qemu-img
> > > > > create -f rbd rbd:rbdpool/VM1 100G, then virsh is unable to find the
> > > > > volume - the virsh vol-list rbdpool command cannot list it. It looks
> > > > > like the rbd, virsh and qemu-img image-creation commands are not in
> > > > > sync with each other...

> > > > > Could you please let me know how you use Ceph as the backend storage
> > > > > for qemu-kvm? When I google it, most Ceph usage is with OpenStack
> > > > > rather than plain qemu-kvm, and setting up Glance and Cinder is
> > > > > troublesome...

> > > > > Thank you!

> > > > > On Thu, Jan 28, 2016 at 5:23 PM, Simon Ironside
> > > > > <sironside@xxxxxxxxxxxxx> wrote:

> > > > > > On 28/01/16 08:30, Bill WONG wrote:

> > > > > > > Without qcow2, qemu-kvm cannot make snapshots and use other
> > > > > > > features.... does anyone have ideas or experience with this?
> > > > > > > Thank you!

> > > > > > I'm using raw too and create snapshots using "rbd snap create".

> > > > > > Cheers,
> > > > > > Simon


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


