The RAW file will appear to be the exact image size, but the filesystem
knows about the holes in the image, so it is sparsely allocated on disk.
For example:

# dd if=/dev/zero of=sparse-file bs=1 count=1 seek=2GiB
# ll sparse-file
-rw-rw-r--. 1 jdillaman jdillaman 2147483649 Jul 13 09:20 sparse-file
# du -sh sparse-file
4.0K    sparse-file

Now, running qemu-img to copy the image into the backing RBD pool:

# qemu-img convert -f raw -O raw ~/sparse-file rbd:rbd/sparse-file
# rbd disk-usage sparse-file
NAME        PROVISIONED USED
sparse-file       2048M    0

On Wed, Jul 13, 2016 at 3:31 AM, Fran Barrera <franbarrera6@xxxxxxxxx> wrote:
> Yes, but isn't it the same problem? The image will be too large because
> the format is raw.
>
> Thanks.
>
> 2016-07-13 9:24 GMT+02:00 Kees Meijs <kees@xxxxxxxx>:
>>
>> Hi Fran,
>>
>> Fortunately, qemu-img(1) is able to directly utilise RBD (supporting
>> sparse block devices)!
>>
>> Please refer to http://docs.ceph.com/docs/hammer/rbd/qemu-rbd/ for
>> examples.
>>
>> Cheers,
>> Kees
>>
>> On 13-07-16 09:18, Fran Barrera wrote:
>> > Can you explain how you do this procedure? I have the same problem
>> > with the large images and snapshots.
>> >
>> > This is what I do:
>> >
>> > # qemu-img convert -f qcow2 -O raw image.qcow2 image.img
>> > # openstack image create image.img
>> >
>> > But the image.img is too large.

--
Jason
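
Putting the two replies together: since qemu-img can write directly to RBD,
the qcow2 image can be converted straight into the pool with no intermediate
raw file on local disk. A minimal sketch, assuming a pool named "rbd" and a
target image name of "image" (both hypothetical; substitute your own, and
append cephx options such as :id=<user>:conf=/etc/ceph/ceph.conf to the rbd:
URI if your cluster needs them, per the qemu-rbd docs linked above):

# qemu-img convert -f qcow2 -O raw image.qcow2 rbd:rbd/image
# rbd disk-usage image

The disk-usage output should report the full virtual size under PROVISIONED
but only the allocated extents under USED, just as in the sparse-file
example at the top of the thread.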