RE: KVM: kvm_set_slave_cpu: Invalid argument when trying direct interrupt delivery

Hi Tomoki,

I offlined cpu2 and cpu3 on my machine and continued trying your patch. I ran the VM without a pass-through device, since I only want to measure the interrupt latency improvement. (Am I right?)

my qemu parameter:
./x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1024 -cpu qemu64,+x2apic -no-kvm-pit -serial pty -nographic -drive file=/mnt/sdb/vms/testfc/testfc.qcow2,if=virtio,index=0,format=qcow2 -spice port=12000,addr=186.100.8.171,disable-ticketing,plaintext-channel=main,plaintext-channel=playback,plaintext-channel=record,image-compression=auto_glz -no-kvm-pit

cyclictest:
   cyclictest -m -p 99 -n -l 100000 -h 3000 -q  


but I got very bad results:
  avg latency: 20000+ us
  max latency: 50000+ us

and got

Message from syslogd@kvmsteven at Apr  7 05:43:30 ...
 kernel:[ 2201.151817] BUG: soft lockup - CPU#18 stuck for 22s! [qemu-system-x86:2365]

my setup:
host kernel: 3.6.0-rc4+ and your patches
guest kernel: 3.6.11.1-rt32
qemu: qemu-kvm-1.0 with your patch

BTW, I am sure that my rt-kernel works well; it achieved a 12us max latency when running as the host OS.

Could you please provide more details about your benchmark so that I can reproduce your results?

Thanks,
Steven

________________________________________
From: Tomoki Sekiyama [tomoki.sekiyama.qu@xxxxxxxxxxx]
Sent: Wednesday, April 03, 2013 10:02
To: Yangminqiang
Cc: tomoki.sekiyama@xxxxxxx; kvm@xxxxxxxxxxxxxxx
Subject: Re: KVM: kvm_set_slave_cpu: Invalid argument when trying direct interrupt delivery

Hi,

Thank you for testing the patch.

Yangminqiang <yangminqiang@xxxxxxxxxx> wrote:
> Hi Tomoki
>
> I tried your smart patch "cpu isolation and direct interrupt delivery",
>       http://article.gmane.org/gmane.linux.kernel/1353803
>
> got  output when I run qemu
>       kvm_set_slave_cpu: Invalid argument
>
> So I wonder
> * Did I  misuse your patches?
> * How is the offlined CPU assigned? Or will the guest OS automatically
> detect and use it?

Currently it is hard-coded in the patch for qemu-kvm, just for testing:

diff -Narup a/qemu-kvm-1.0/qemu-kvm-x86.c b/qemu-kvm-1.0/qemu-kvm-x86.c
--- a/qemu-kvm-1.0/qemu-kvm-x86.c       2011-12-04 19:38:06.000000000 +0900
+++ b/qemu-kvm-1.0/qemu-kvm-x86.c       2012-09-06 20:19:44.828163734 +0900
@@ -139,12 +139,28 @@ static int kvm_enable_tpr_access_reporti
     return kvm_vcpu_ioctl(env, KVM_TPR_ACCESS_REPORTING, &tac);
 }

+static int kvm_set_slave_cpu(CPUState *env)
+{
+    int r, slave = env->cpu_index == 0 ? 2 : env->cpu_index == 1 ? 3 : -1;

`slave' is the ID of the offlined CPU to assign, and `env->cpu_index' is
the virtual CPU ID. You need to modify this mapping and recompile qemu-kvm
(or just offline CPUs 2 and 3 for a 2-vCPU guest ;) ).

Thanks,
Tomoki Sekiyama
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



