$ rbd diff rbd/myimage-1 | awk '{ SUM += $2 } END { print SUM/1024/1024 " MB" }'

rbd diff lists the image's allocated extents (offset, length, state), so
summing the second column gives the space actually in use.

Regards,
Sébastien Han.

> On 04 Nov 2014, at 16:57, Daniel Schwager <Daniel.Schwager@xxxxxxxx> wrote:
>
> Hi,
>
> is there a way to query the used space of an RBD image created with format 2
> (used for KVM)?
> Also, if I create a linked clone based on this image, how do I get the
> additional, individual used space of that clone?
>
> In ZFS I can query this kind of information by calling "zfs get all .." (1).
> "rbd info" (2) does not show that much information about the image.
>
> best regards
> Danny
>
> ---
>
> (1) Output of "zfs get all" from a Solaris system
>
> root@storage19:~# zfs get all pool5/w2k8.dsk
> NAME            PROPERTY              VALUE                  SOURCE
> pool5/w2k8.dsk  available             75,3G                  -
> pool5/w2k8.dsk  checksum              on                     default
> pool5/w2k8.dsk  compression           off                    default
> pool5/w2k8.dsk  compressratio         1.00x                  -
> pool5/w2k8.dsk  copies                1                      default
> pool5/w2k8.dsk  creation              Di. Mai 10 14:44 2011  -
> pool5/w2k8.dsk  dedup                 off                    default
> pool5/w2k8.dsk  encryption            off                    -
> pool5/w2k8.dsk  keychangedate         -                      default
> pool5/w2k8.dsk  keysource             none                   default
> pool5/w2k8.dsk  keystatus             none                   -
> pool5/w2k8.dsk  logbias               latency                default
> pool5/w2k8.dsk  primarycache          all                    default
> pool5/w2k8.dsk  readonly              off                    default
> pool5/w2k8.dsk  referenced            17,4G                  -
> pool5/w2k8.dsk  refreservation        none                   default
> pool5/w2k8.dsk  rekeydate             -                      default
> pool5/w2k8.dsk  reservation           none                   default
> pool5/w2k8.dsk  secondarycache        all                    default
> pool5/w2k8.dsk  sync                  standard               default
> pool5/w2k8.dsk  type                  volume                 -
> pool5/w2k8.dsk  used                  18,5G                  -
> pool5/w2k8.dsk  usedbychildren        0                      -
> pool5/w2k8.dsk  usedbydataset         17,4G                  -
> pool5/w2k8.dsk  usedbyrefreservation  0                      -
> pool5/w2k8.dsk  usedbysnapshots       1,15G                  -
> pool5/w2k8.dsk  volblocksize          8K                     -
> pool5/w2k8.dsk  volsize               25G                    local
> pool5/w2k8.dsk  zoned                 off                    default
>
>
> (2) Output of "rbd info"
>
> [root@ceph-admin2 ~]# rbd info rbd/myimage-1
> rbd image 'myimage-1':
>         size 50000 MB in 12500 objects
>         order 22 (4096 kB objects)
>         block_name_prefix: rbd_data.11e82ae8944a
>         format: 2
>         features: layering
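
For the second question (the clone's additional, individual space): one
approach, sketched below and untested here, is to sum the sizes of the RADOS
objects carrying the clone's block_name_prefix. A clone only materialises
objects it has copied up from its parent, so that sum approximates the space
the clone consumes on its own. The pool and image names ("rbd", "myclone-1")
are placeholders; note that "rados ls" walks the entire pool, which can be
slow, and "rados stat" reports logical object sizes, so sparse objects may
overstate the result slightly.

# Grab the clone's object-name prefix from "rbd info"
# ("rbd/myclone-1" is a placeholder for your clone).
POOL=rbd
PREFIX=$(rbd info "$POOL"/myclone-1 | awk '/block_name_prefix/ { print $2 }')

# List only the clone's own objects, stat each one, and sum the sizes
# (the size is the last field of "rados stat" output).
rados -p "$POOL" ls | grep "^$PREFIX" \
    | while read -r OBJ; do rados -p "$POOL" stat "$OBJ"; done \
    | awk '{ SUM += $NF } END { print SUM/1024/1024 " MB" }'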