On 08/26/2009 08:51 PM, Andrew Theurer wrote:
The stats show 'largepage = 12'. Something's wrong. There's a commit
(7736d680) that's supposed to fix largepage support for kvm-87; maybe
it's incomplete.
How strange. /proc/meminfo showed that almost all of the pages were
used:
HugePages_Total:   12556
HugePages_Free:      220
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
I just assumed they were used properly. Maybe not.
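(For reference, one quick way to see how many of those pages are actually handed out; this is just a sketch that computes HugePages_Total minus HugePages_Free, and it assumes 2 MB hugepages as shown above:)

  # Rough hugepage accounting from /proc/meminfo (sketch; assumes 2 MB pages).
  awk '/^HugePages_(Total|Free):/ { v[$1] = $2 }
       END { used = v["HugePages_Total:"] - v["HugePages_Free:"];
             printf "hugepages in use: %d (%.1f GB)\n", used, used * 2 / 1024 }' /proc/meminfo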
My mistake. The kvm_stat numbers you provided were rates (per second),
so it just means it's still faulting in pages at a rate of about one per
guest per second.
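(As an aside, the cumulative counters that kvm_stat derives its rates from can be read straight out of debugfs to tell a steady trickle of faults from a one-off burst. A minimal sketch, assuming debugfs is mounted at /sys/kernel/debug and that the large-page counter is exposed there under a name like 'largepages', which may differ between kernel versions:)

  # Snapshot the cumulative counter twice, ten seconds apart (sketch).
  cd /sys/kernel/debug/kvm
  a=$(cat largepages); sleep 10; b=$(cat largepages)
  echo "large pages faulted in over 10s: $((b - a))"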
I/O on the host was not what I would call very high: outbound network
averaged 163 Mbit/s and inbound 8 Mbit/s, while disk read ops were
243/sec and write ops were 561/sec.
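(In case it helps, a sketch of how per-device rates like these can be sampled on the host, assuming the sysstat package is installed; the exact column names vary by sysstat version:)

  # Extended per-disk stats every 10 seconds, six samples (sketch).
  # r/s and w/s are read/write ops per second; the remaining columns show
  # bandwidth and utilization.
  iostat -x 10 6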
What was the disk bandwidth used? Presumably, direct access to the
volume with cache=off?
2.4 MB/sec write, 0.6 MB/sec read, cache=none
The VMs' boot disks are IDE, but the apps use their second disk, which
is virtio.
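(For context, a minimal sketch of the relevant drive options, assuming qemu-kvm's -drive syntax of that era; the image paths and memory size here are made up for illustration, not taken from the actual configuration:)

  # Boot disk on emulated IDE, data disk on virtio with the host page cache
  # bypassed (sketch).
  qemu-system-x86_64 -m 2048 \
      -drive file=/images/boot.img,if=ide \
      -drive file=/images/data.img,if=virtio,cache=none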
Chickenfeed.
Do the network stats include interguest traffic? I presume *all* of the
traffic was interguest.
Sar network data:
            IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s
Average:       lo      0.00      0.00      0.00      0.00
Average:     usb0      0.39      0.19      0.02      0.01
Average:     eth0   2968.83   5093.02    340.13   6966.64
Average:     eth1   2992.92   5124.08    342.75   7008.53
Average:     eth2   1455.53   2500.63    167.45   3421.64
Average:     eth3   1500.59   2574.36    171.98   3524.82
Average:      br0      2.41      0.95      0.32      0.13
Average:      br1      1.52      0.00      0.20      0.00
Average:      br2      1.52      0.00      0.20      0.00
Average:      br3      1.52      0.00      0.20      0.00
Average:      br4      0.00      0.00      0.00      0.00
Average:     tap3    669.38    708.07    290.89    140.81
Average:   tap109    678.53    723.58    294.07    143.31
Average:   tap215    673.20    711.47    291.99    141.78
Average:   tap321    675.26    719.33    293.01    142.37
Average:    tap27    679.23    729.90    293.86    143.60
Average:   tap133    680.17    734.08    294.33    143.85
Average:     tap2   1002.24   2214.19   3458.54    457.95
Average:   tap108   1021.85   2246.53   3491.02    463.48
Average:   tap214   1002.81   2195.22   3411.80    457.28
Average:   tap320   1017.43   2241.49   3508.20    462.54
Average:    tap26   1028.52   2237.98   3483.84    462.53
Average:   tap132   1034.05   2240.89   3493.37    463.32
tap0-99 go to eth0, 100-199 to eth1, 200-299 to eth2, 300-399 to eth3.
There is some inter-guest traffic between VM pairs (like taps 2 & 3,
108 & 109, etc.), but it is not that significant.
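(For reference, per-interface rates like the table above can be collected with sysstat's sar; a sketch, assuming a ten-second sampling interval:)

  # Per-interface packet and byte rates, ten-second samples for one minute (sketch).
  # sar prints the per-interval rows plus the Average: summary shown above.
  sar -n DEV 10 6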
Oh, so there are external load generators involved.
Can you run this on kvm.git master, with
CONFIG_TRACEPOINTS=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_RING_BUFFER=y
CONFIG_FTRACE_NMI_ENTER=y
CONFIG_EVENT_TRACING=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
CONFIG_DYNAMIC_FTRACE=y
(some may be overkill)
and, while the test is running, do:
cd /sys/kernel/debug/tracing
echo kvm > set_event
(wait two seconds)
cat trace > /tmp/trace
and send me /tmp/trace.bz2? It should be quite big.
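(Pulling the steps above together, a minimal sketch of a helper script; it assumes the kernel config is readable from /proc/config.gz or /boot/config-$(uname -r), that debugfs is mounted at /sys/kernel/debug, and it only spot-checks a few of the options listed above:)

  #!/bin/sh
  # Sketch: spot-check tracing config, then grab ~2 seconds of kvm trace events.
  for opt in CONFIG_TRACEPOINTS CONFIG_EVENT_TRACING CONFIG_TRACING CONFIG_FTRACE; do
      zgrep -q "^$opt=y" /proc/config.gz 2>/dev/null ||
          grep -q "^$opt=y" "/boot/config-$(uname -r)" 2>/dev/null ||
          echo "warning: $opt does not appear to be set"
  done

  cd /sys/kernel/debug/tracing || exit 1
  echo kvm > set_event          # enable all kvm:* trace events
  sleep 2                       # let the running test generate events
  cat trace > /tmp/trace        # snapshot the ring buffer
  echo > set_event              # disable the events again
  bzip2 /tmp/trace              # produces /tmp/trace.bz2 for sending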
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.