> From: Anthony Liguori <anthony <at> codemonkey.ws>
>
> By standard thread scheduling, I presume you mean scheduling that
> doesn't take into account IO? That is, this paper is arguing that in a
> virtualization environment, you want to provide temporary
> disproportionate scheduling to favor IO bound workloads over CPU bound
> workloads.
>
> I don't think you need the credit scheduler to implement this idea in
> KVM.

I tested CFS in detail, following Documentation/sched-design-CFS.txt.
The hardware is a Dell Core 2 with 2G of memory. The kernel is 2.6.29,
downloaded earlier via git clone (the kvm.git tree), built with the
default configuration, which has CONFIG_CGROUP_SCHED enabled. I did the
following:

# mkdir /dev/cpuctl
# mount -t cgroup -o cpu none /dev/cpuctl
# cd /dev/cpuctl
# mkdir fast
# mkdir slow
# mkdir mid
# echo 2048 > fast/cpu.shares
# echo 1024 > slow/cpu.shares
# echo 1536 > mid/cpu.shares
# ./qemu-x86-64 ... -hda ~/CentOS.img -smp 2

In the created VM, run two instances of the following foo program:

int main(void)
{
        int i = 10;

        while (1)
                i *= 56;
        return 0;
}

On the host machine, put all the qemu threads (found via
ps -eLf | grep qemu) into the fast group:

# echo <thread1> > fast/tasks
# echo <thread2> > fast/tasks
# ...

On the host machine, put the two foo processes into the slow group:

# echo <foo1> > slow/tasks
# echo <foo2> > slow/tasks

Start another SMP VM, and in the newly created VM, run foo:

# ./qemu-x86-64 ... -hda ~/CentOS2.img -smp 2 ...

Put all the threads of the new qemu process into the mid group.

Use `top' to watch the cpu usage. I found it does not work. Is this a
bug in CFS?

I also found that CFS does not reserve cpu usage. That is to say, when
nothing else is active, the active group can take all the cpu time. Is
that desirable if you want to sell compute power?

Regards,
alex.
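
P.S. Rather than echoing each thread id by hand, the whole set of qemu
threads can be moved in one go. A rough sketch, assuming the cgroup
hierarchy is still mounted at /dev/cpuctl as above, and that the thread
id (LWP) is the fourth column of `ps -eLf' output:

# cd /dev/cpuctl
# for tid in $(ps -eLf | awk '/qemu/ && !/awk/ {print $4}'); do echo $tid > fast/tasks; done

The !/awk/ guard just keeps the awk process itself out of the match.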
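
And before looking at `top', the shares and group membership can be read
back from the same files to confirm the setup took effect (same
/dev/cpuctl layout assumed):

# cat fast/cpu.shares slow/cpu.shares mid/cpu.shares
# cat fast/tasks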