Re: Create qcow2 v3 volumes via libvirt

On 05/22/2018 01:53 PM, Paul O'Rorke wrote:
Hi Eric and list,

I had another production VM start pausing itself.  This one had been running for more than 4 years on a 60G LVM volume.  It has had the occasional snapshot during that time, though all have been "removed" using the virt-manager GUI, so I used qemu-img as you suggested.

# qemu-img convert /dev/trk-kvm-02-vg/rt44 -O qcow2 /mnt/scratch/rt44.qcow2

To make sure I understand, /dev/trk-kvm-02-vg/rt44 was using qcow2 on top of a block device, both before and after you used qemu-img convert to compact out the wasted space? Also, do you know if you were using qcow2v2 or v3 prior to running out of space? qcow2v2 was the default in CentOS 6 for historical reasons, but qemu doesn't support efficient space reclamation on those older images the way it does on qcow2v3.
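
For future reference, 'qemu-img info' reports which qcow2 version a file uses; the 'compat' line in the format-specific section reads 0.10 for qcow2v2 and 1.1 for qcow2v3 (output trimmed, file name reused from above only as an example):

# qemu-img info /mnt/scratch/rt44.qcow2
file format: qcow2
virtual size: 60G (64424509440 bytes)
Format specific information:
    compat: 1.1

An older image can also be upgraded in place with 'qemu-img amend -o compat=1.1 <file>'.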

Did you save the full image anywhere, so we can do additional postmortem analysis on it? Not that I'm asking to see the image itself, in case that would give away private information; but even something like running 'qcheck' from https://github.com/jnsnow/qcheck.git might shed some light on how storage was being used by the image that reached capacity.


I dd'd the qcow2 image back onto the LV after testing that it boots OK directly from the image, and it is in production again.
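
For the record, the write-back step was something like this (block size from memory):

# dd if=/mnt/scratch/rt44.qcow2 of=/dev/trk-kvm-02-vg/rt44 bs=1M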

The VM itself reports ample space available:

$ df -h
Filesystem                       Size  Used Avail Use% Mounted on
udev                             3.9G     0  3.9G   0% /dev
tmpfs                            789M  8.8M  780M   2% /run
/dev/mapper/RT--vg-root           51G   21G   28G  42% /
tmpfs                            3.9G     0  3.9G   0% /dev/shm
tmpfs                            5.0M     0  5.0M   0% /run/lock
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/vda1                        472M  155M  293M  35% /boot
192.168.0.16:/volume1/fileLevel  8.1T  2.5T  5.6T  31% /mnt/nfs/fileLevel
tmpfs                            789M     0  789M   0% /run/user/1000

I would prefer not to get caught out again with this machine pausing.  How can I determine how much space is being used up by 'deleted' internal snapshots?  Do you have any suggested reading on this?

The highest write offset (block.<num>.allocation in virConnectGetAllDomainStats()) should tell you how much of the underlying block device is "in use", that is, the highest offset that is being written to. It doesn't account for holes earlier in the image, where space could still be reclaimed by reusing those holes, but it does give a good indication of when you might need to resize a storage volume to account for growing amounts of metadata that aren't being cleared.
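
From the shell, the same counters are exposed by 'virsh domstats'; something like this (domain name and numbers are only illustrative):

# virsh domstats --block rt44
Domain: 'rt44'
  block.count=1
  block.0.name=vda
  block.0.allocation=23622320128
  block.0.capacity=64424509440
  block.0.physical=64424509440

Compare block.0.allocation against the size of the underlying LV; as it approaches the LV size, you are getting close to another out-of-space pause.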

And it may be that qemu still needs some patches to more fully trim and reuse the space previously occupied by a deleted internal snapshot.


If I extend the LVM volume but not the guest file system, will snapshots be "at the end" of the LV and "outside" the guest file system?

qcow2 is a mapped file format. Resizing the LVM volume does NOT change the amount of disk space seen by the guest. Clusters may appear in a different order in the host's raw storage than the order in which they are visited in the guest ('qemu-img map' can show you the mapping), but the guest does not care, because it is always presented a logical view of a linear array of storage, regardless of how clusters are fragmented across the host device.

Right now, qcow2 has no way to constrain the clusters used by guest data to any particular offset. There has been talk of adding a new mode to qemu that operates on a fully-preallocated image, where all guest clusters (except perhaps the first) occur in linear order at the beginning of the file, and all qcow2 metadata except the leading header is placed at the end of the file, at offsets higher than the guest clusters. In such a layout, as long as you don't use internal snapshots, all further qcow2 metadata writes would land beyond the linear region reserved for guest-visible clusters.
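
For example ('qemu-img map' on the copy you made; the offsets below are made up):

# qemu-img map /mnt/scratch/rt44.qcow2
Offset          Length          Mapped to       File
0               0x20000         0x50000         /mnt/scratch/rt44.qcow2
0x20000         0x10000         0x90000         /mnt/scratch/rt44.qcow2

The first column is the guest-visible offset, and 'Mapped to' is where that cluster actually lives in the host file, which is why growing the LV does not by itself change what the guest sees.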


If I were to expand the guest's ext4 file system, I would want to do it unmounted and from a live CD, but I'm having a heck of a time getting my live distro to use the virtio disk drivers.  Any advice there?

virt-resize from libguestfs-tools is your friend!
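
A rough sketch of that workflow, working on a copy of the image rather than resizing anything in place (the partition and LV names below are guesses from your df output; run 'virt-filesystems --long --all -a <image>' first to get the real ones):

# qemu-img create -f qcow2 /mnt/scratch/rt44-bigger.qcow2 80G
# virt-resize --expand /dev/sda2 --LV-expand /dev/RT-vg/root \
    /mnt/scratch/rt44.qcow2 /mnt/scratch/rt44-bigger.qcow2

virt-resize grows the partition, the PV/LV, and the ext4 file system inside it in one pass, with the image offline, so no live CD with virtio drivers is needed.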

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org

_______________________________________________
libvirt-users mailing list
libvirt-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvirt-users



