ideas on virDomainListBlockStats for allocation numbers

Based on Dan's recommendation [1], I'm looking at enhancing
virDomainListBlockStats to report the allocation numbers of a backing
file during a virDomainBlockCommit operation.  Getting the information
from qemu is not difficult, but the question is how it should be
represented to the end user.  See below for my ideas; I'm open to
feedback.

[1] https://www.redhat.com/archives/libvir-list/2014-November/msg00604.html

Some background: highest allocation is mainly applicable when using
qcow2 format on top of a raw block device (that's the only case where
libvirt reports a number different from plain stat information; other
file formats just go off of stat), so even setting this up to test can
be interesting.  Also, while qemu reports wr_highest_offset during
'query-blockstats', a disk has to actually see write activity during
the current qemu process before the offset is accurate.  It took me a
while to figure this out; when I set up a dummy guest with no OS (and
therefore no writes), the offset being reported was 0 even though I
had used qemu-io to poke data into the file prior to starting qemu.  I
finally figured out that metadata writes also count toward the highest
offset visited, so using 'blockdev-snapshot-internal-sync' followed by
'blockdev-snapshot-delete-internal-sync' is sufficient to cause qemu to
write metadata and therefore reveal the highest offset.
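
For reference, the offset can be inspected directly through the
monitor (here $dom is whatever guest you're poking at; output is
trimmed to the relevant field, and the exact JSON shape varies by qemu
version):

# virsh qemu-monitor-command $dom '{"execute":"query-blockstats"}'
{"return":[{"device":"drive-virtio-disk0",
            "stats":{"wr_highest_offset":458752, ...}, ...}, ...]}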

So I set up a playground to test things.  I first created two 1G
partitions (/dev/sda6 and /dev/sda7), and then use this script to
re-create the guest each time I want to reset things:

======
#!/bin/sh
cd /tmp
rm -f wrapper.qcow2
virsh destroy testvm2 2>/dev/null
qemu-img create -f qcow2 /dev/sda6 $((750*1024*1024))
qemu-img create -f qcow2 /dev/sda7 $((1250*1024*1024))
virsh create /dev/stdin <<EOF
<domain type='kvm'>
 <name>testvm2</name>
 <memory unit='MiB'>256</memory>
 <vcpu>1</vcpu>
 <os>
   <type arch='x86_64'>hvm</type>
 </os>
 <devices>
   <disk type='block' device='disk'>
     <driver name='qemu' type='qcow2'/>
     <source dev='/dev/sda6'/>
     <target dev='vda' bus='virtio'/>
   </disk>
   <disk type='block' device='disk'>
     <driver name='qemu' type='qcow2'/>
     <source dev='/dev/sda7'/>
     <target dev='vdb' bus='virtio'/>
   </disk>
   <graphics type='vnc'/>
 </devices>
</domain>
EOF

virsh qemu-monitor-command testvm2 \
  '{"execute":"blockdev-snapshot-internal-sync",' \
  '"arguments":{"device":"drive-virtio-disk0", "name":"snap1"}}'
virsh qemu-monitor-command testvm2 \
  '{"execute":"blockdev-snapshot-delete-internal-sync",' \
  '"arguments":{"device":"drive-virtio-disk0", "name":"snap1"}}'

virsh domblkinfo testvm2 vda
virsh domblkinfo testvm2 vdb
======

which shows this for my starting point:
# virsh domblkinfo testvm2 vda
Capacity:       786432000
Allocation:     458752
Physical:       1073741824
# virsh domblkinfo testvm2 vdb
Capacity:       1310720000
Allocation:     0
Physical:       1073741824

After that, I can create external snapshots:
# virsh snapshot-create-as testvm2 --disk-only --no-metadata \
  --diskspec vda,file=/tmp/wrapper.qcow2 --diskspec vdb,snapshot=no

at which point dumpxml shows this subset:

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/tmp/wrapper.qcow2'/>
      <backingStore type='block' index='1'>
        <format type='qcow2'/>
        <source dev='/dev/sda6'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>


Next, I can play with blockcommit, where a qemu-monitor-command on
'query-blockstats' will show me the growing allocation when the backing
file is being written during the commit.  The other useful output comes
from the new virDomainListBlockStats:
# virsh domstats --block testvm2
Domain: 'testvm2'
  block.count=2
  block.0.name=vda
  block.0.rd.reqs=1
  block.0.rd.bytes=512
  block.0.rd.times=31635
  block.0.wr.reqs=0
  block.0.wr.bytes=0
  block.0.wr.times=0
  block.0.fl.reqs=0
  block.0.fl.times=0
  block.0.allocation=458752
  block.0.capacity=786432000
  block.1.name=vdb
  block.1.rd.reqs=0
  block.1.rd.bytes=0
  block.1.rd.times=0
  block.1.wr.reqs=0
  block.1.wr.bytes=0
  block.1.wr.times=0
  block.1.fl.reqs=0
  block.1.fl.times=0
  block.1.allocation=0
  block.1.capacity=1310720000
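
For completeness, the sequence I use to exercise this is roughly the
following (a sketch; --active needs a qemu new enough to support
active commit, and the monitor command can be repeated to watch
wr_highest_offset grow on the backing device):

# virsh blockcommit testvm2 vda --active
# virsh qemu-monitor-command testvm2 '{"execute":"query-blockstats"}'
# virsh blockjob testvm2 vda --pivot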

The problem is this: once we have a domain with more than one <disk>,
where one or more of those disks has a chain of <backingStore>
elements, how should virDomainListBlockStats represent it all?

One idea I have is to just expose a block.count equal to the total
number of devices I'm about to report on (so the array can be larger
than the number of disks), using the name field to correlate entries
back to the dumpxml layout:

  block.count=3
  block.0.name=vda # information on wrapper.qcow2
  ...
  block.1.name=vda[1] #  information on backingStore index 1 of vda
  block.1.rd.reqs=0   #+ that is, on /dev/sda6
  ...
  block.2.name=vdb # information on /dev/sda7
  ...

It may make things easier if I also add a block.n.path that lists the
file name of the block being described (might get tricky with
NBD/gluster/sheepdog network disks).

Also, I'm thinking of adding block.n.physical to match the older
virDomainGetBlockInfo() information.
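
Putting those together, the flat layout might render like this
(hypothetical output, assuming both proposed fields get added):

  block.count=3
  block.0.name=vda
  block.0.path=/tmp/wrapper.qcow2
  block.0.physical=...
  block.1.name=vda[1]
  block.1.path=/dev/sda6
  block.1.physical=1073741824
  block.2.name=vdb
  block.2.path=/dev/sda7
  block.2.physical=1073741824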

Another possible layout is to mirror the nesting of the XML
<backingStore> chain.  Something like:

  block.count=2
  block.0.name=vda
  block.0.backing=1
  block.0.allocation=...  # information on /tmp/wrapper.qcow2
  ...
  block.0.0.allocation=... # information on /dev/sda6
  ...
  block.1.name=vdb
  block.1.backing=0
  block.1.allocation=... # information on /dev/sda7
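
With a deeper chain, each additional layer would add another index
component to the field name (hypothetical):

  block.0.allocation=...      # active layer
  block.0.0.allocation=...    # first backing file
  block.0.0.0.allocation=...  # second backing file
  ...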

But there, we run into a possible problem: virTypedParameter has a
finite length for field names (VIR_TYPED_PARAM_FIELD_LENGTH is 80
bytes), so we can only cover a finite depth of backing chain before we
run out of space and can't report on the full chain.  Any other ideas
for the best way to lay this out, and for how to make it as easy as
possible for client applications to correlate allocation information
back to the appropriate block device in the chain?
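
As a rough sketch of the client side, correlating allocation back to a
device name under the flat layout is a simple text-processing job
(assuming the 'vda[1]' name spelling proposed above):

# virsh domstats --block testvm2 | awk -F'[=.]' '
    $3 == "name"       { name[$2] = $4 }
    $3 == "allocation" { print name[$2] ": " $4 }'
vda: 458752
vda[1]: ...
vdb: 0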

I also wonder whether adding more information under the existing
--block flag for stats is okay, or whether it is better to add yet
another statistics flag grouping that must be requested to turn on
information about backing files.  Technically, it's still
block-related statistics, but since released libvirt already has a 1:1
block.n mapping to <disk> elements, using a new flag would make it
easier to learn whether libvirt is new enough to support information
on backing chains, all without confusing existing clients that aren't
expecting backing chain stats.  This question needs answering
regardless of which layout we choose above for representing backing
chain stats.

Thoughts welcome.

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
