Re: [PATCH v2 6/8] kvm tools: Add rwlock wrapper

On Mon, 2011-05-30 at 14:36 +0200, Ingo Molnar wrote:
> * Avi Kivity <avi@xxxxxxxxxx> wrote:
> 
> > On 05/30/2011 02:49 PM, Ingo Molnar wrote:
> > >* Avi Kivity<avi@xxxxxxxxxx>  wrote:
> > >
> > >>  On 05/30/2011 02:26 PM, Takuya Yoshikawa wrote:
> > >>  >>   >
> > >>  >>   qemu also allows having more VCPUs than cores.
> > >>  >
> > >>  >I have to check again, then :) Thank you!
> > >>  >I will try both with many VCPUs.
> > >>
> > >>  Note, with cpu overcommit the results are going to be bad.
> > >
> > >And that is good: if pushed hard enough it will trigger exciting (or
> > >obscure) bugs in the guest kernel much faster than if there's no
> > >overcommit, so it's rather useful for testing. (We had that
> > >surprising experience with -rt.)
> > >
> > >Also, such a simulation would be very obviously useful if you get
> > >bug reports about 1024 or 4096 CPUs, like i do sometimes! :-) [*]
> > 
> > I'll be surprised if 1024 cpus actually boot on a reasonable 
> > machine.  Without PLE support, any busy wait (like 
> > smp_call_function_single()) turns into a delay the length of a scheduler
> > time slice (or CFS's unfairness measure, I forget what it's called)
> > - 3 or 4 orders of magnitude larger.  Even with PLE, it's 
> > significantly slower, plus 1-2 orders of magnitude loss from the 
> > overcommit itself.
> 
> You are probably right about 1024 CPUs.
> 
> Right now i can produce something similar to it: 42 vcpus on a single 
> CPU:
> 
> 	$ taskset 1 kvm run --cpus 42
> 
> And that hangs early on during bootup, around:
> 
> [    0.236000] Disabled fast string operations
> [    0.242000]  #4
> [    0.270000] Disabled fast string operations
> [    0.275000]  #5
> [    0.317000] Disabled fast string operations
> [    0.322000]  #6
> [    0.352000] Disabled fast string operations
> [    0.358000]  #7
> [    0.414000] Disabled fast string operations
> 
> The threads seem to be livelocked:
> 
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                    
> 22227 mingo     20   0  471g  95m  904 R 12.9  0.8   0:06.39 kvm                                        
> 22230 mingo     20   0  471g  95m  904 R 12.9  0.8   0:06.36 kvm                                        
> 22226 mingo     20   0  471g  95m  904 R 11.9  0.8   0:07.04 kvm                                        
> 22228 mingo     20   0  471g  95m  904 R 11.9  0.8   0:06.38 kvm                                        
> 22229 mingo     20   0  471g  95m  904 R 11.9  0.8   0:06.37 kvm                                        
> 22231 mingo     20   0  471g  95m  904 R 11.9  0.8   0:06.37 kvm                                        
> 22232 mingo     20   0  471g  95m  904 R 11.9  0.8   0:06.36 kvm                                        
> 22233 mingo     20   0  471g  95m  904 R 11.9  0.8   0:06.33 kvm                                        
>     7 root      -2  19     0    0    0 S  2.0  0.0   1:12.53 rcuc0             
> 
> with no apparent progress being made.

I've noticed the same issue when using a 2.6.39 kernel. Try doing the
same with the 2.6.37 kernel we have in the tree.
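
FWIW, the busy wait Avi describes above is easy to model in plain user
space. The following is only a minimal sketch for illustration, nothing
from the kvm tools tree: two threads pinned to one CPU, where one spins
on a flag that only the other thread can set, which is roughly what a
vcpu does inside smp_call_function_single() when the target vcpu is not
scheduled and there is no PLE to yield on:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <time.h>

static volatile int flag;

/* Pin the calling thread to CPU 0 so both threads compete for one core. */
static void pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	sched_setaffinity(0, sizeof(set), &set);
}

/* Stands in for the target vcpu that would run the IPI handler. */
static void *target(void *arg)
{
	pin_to_cpu0();
	flag = 1;
	return NULL;
}

int main(void)
{
	struct timespec a, b;
	pthread_t t;

	pin_to_cpu0();
	pthread_create(&t, NULL, target, NULL);

	clock_gettime(CLOCK_MONOTONIC, &a);
	while (!flag)
		;	/* busy wait: nothing tells the scheduler to switch */
	clock_gettime(CLOCK_MONOTONIC, &b);

	printf("waited %ld us\n",
	       (b.tv_sec - a.tv_sec) * 1000000L +
	       (b.tv_nsec - a.tv_nsec) / 1000);

	pthread_join(t, NULL);
	return 0;
}

Built with gcc -pthread, the spinner typically burns a full time slice
before the flag is ever set; dropping a sched_yield() into the loop (a
crude user-space stand-in for what PLE does) shortens the wait
dramatically, which is more or less the difference Avi is pointing at.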

-- 

Sasha.


