Re: kvm, drbd, elevator, rotational - quite an interesting co-operation

Javier Guerra wrote:
> On Thu, Jul 2, 2009 at 2:55 PM, Michael Tokarev <mjt@xxxxxxxxxx> wrote:
>> kvm: i/o threads - should there be a way to control the number of
>>  threads?  With the default workload generated by drbd on a secondary
>>  node, having fewer threads makes more sense.

> +1 on this.  It seems reasonable to have one thread per device, or am
> I wrong?

Definitely not one thread per device.  Even simple hard drives
nowadays have quite advanced NCQ/TCQ implementations, so it is better
to keep the drive queue deeper than 1 and let the drive re-order
requests as it sees fit, to optimize head movements etc.
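
(Just to illustrate: the queue depth a drive advertises, and the number
of requests the block layer will queue for it, are both visible in
sysfs; sda below is only a placeholder device name.)

  # NCQ/TCQ depth the driver is using for the device (SCSI/SATA)
  cat /sys/block/sda/device/queue_depth
  # number of requests the block layer will queue for the device
  cat /sys/block/sda/queue/nr_requests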

For larger devices (which are really arrays of disks, possibly with
large battery-backed write caches) more threads make even more sense.

Also, with this queuing/reordering in mind, think about how it should
look from the host vs. guest perspective: ideally kvm should be
able to provide a queue of depth >1 to the guest, given that the
guest is "multi-threaded" (multi-process, really) on its own.

To be fair, I can't construct an example where a deeper queue would
be bad (not counting bad NCQ implementations).
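
(If someone wants to measure it, something like fio makes the
queue-depth effect easy to see; the device name and run parameters
below are only placeholders, not a benchmark recipe.)

  # random reads at queue depth 1 vs 32 (sdX is a placeholder)
  fio --name=qd1  --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=1  --runtime=30 --time_based
  fio --name=qd32 --filename=/dev/sdX --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=32 --runtime=30 --time_based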

> It also bothers me because when I have a couple of moderately
> disk-heavy VMs, the load average numbers skyrocket.  That's because
> each blocked thread counts as 1 in this figure, even if they're all
> waiting on the same device.

And how is a large LA bad?  I mean, load average by itself is not an
indicator of good or bad performance, don't you think?
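
(Just to show what the number actually counts: tasks in uninterruptible
sleep -- the "D" state -- each add 1 to the load average, and they can
be listed like this; the column choice is only one way to do it.)

  # list tasks currently in uninterruptible sleep (D state)
  ps -eo state,pid,comm | awk '$1 == "D"'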

>> kvm: it has been said that using the noop elevator in a guest makes
>>  sense since the host does its own elevator/reordering.  But this
>>  example shows "nicely" that this isn't always the case.  I wonder how
>>  "general" this example is.  Will try to measure further.

> In my own (quick) tests, changing the elevator on the guest has very
> little effect on performance, but it does affect host CPU
> utilization.  Using drbd in the guest while testing with bonnie++
> increased host CPU usage by around 20% for each VM.

Increased compared with what?  Also, which virtual disk format
did you use?

By just running strace on the kvm process here it's trivial to see the
difference: switching from non-rotational to rotational makes kvm
start writing in large chunks instead of 1024-byte blocks.
That means, at the very least, far fewer context switches, which should
improve performance.  Yes, it increases CPU usage somewhat (maybe
around 20%), but it also increases I/O speed here quite significantly.
Your test shows no increase in speed, which suggests we're doing
something differently.
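
(For anyone who wants to reproduce the observation: the knob involved
is the per-device rotational flag, and the effect is visible with a
write-oriented strace on the host; vda and <kvm_pid> below are
placeholders.)

  # inside the guest: is the virtual disk flagged as rotational? (1 = yes)
  cat /sys/block/vda/queue/rotational
  # flip it and watch how the guest batches its writes
  echo 1 > /sys/block/vda/queue/rotational
  # on the host: watch the size of the writes the kvm process issues
  strace -f -e trace=write,pwrite64 -p <kvm_pid>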

/mjt
