Re: [PATCH 5/5] ioeventfd: Introduce KVM_IOEVENTFD_FLAG_SOCKET

On Thu, 2011-07-14 at 14:54 +0300, Avi Kivity wrote:
> On 07/14/2011 01:30 PM, Pekka Enberg wrote:
> > We want to use 8250 emulation instead of virtio-serial because it's
> > more compatible with kernel debugging mechanisms. Also, it makes
> > debugging virtio code much easier when we don't need to use virtio to
> > deliver console output while debugging it. We want to make it fast so
> > that we don't need to switch over to another console type after early
> > boot.
> >
> > What's unreasonable about that?
> 
> Does virtio debugging really need super-fast serial?  Does it need 
> serial at all?
> 

Does it need super-fast serial? No, although it's nice. Does it need
serial at all? Definitely.

It's not just virtio that can fail when running on virtio-console; the
threadpool, the eventfd mechanism, and even the PCI management module
can fail too. You can't really debug a subsystem if you can't depend on
your debugging mechanism to work properly.

So far, serial has been the simplest, most effective, and most reliable
method we've had for working on guests; I don't see how we could work
without it at the moment.


> > Reasonably fast 1024 VCPUs would be great for testing kernel
> > configurations. KVM is not there yet so we suggested that we raise the
> > hard limit from current 64 VCPUs so that it's easier for people such
> > as ourselves to improve things. I don't understand why you think
> > that's unreasonable either!
> 
> You will never get reasonably fast 1024 vcpus on your laptop.  As soon 
> as your vcpus start doing useful work, they will thrash.  The guest 
> kernel expects reasonable latency on cross-cpu operations, and kvm won't 
> be able to provide it with such overcommit.  The PLE stuff attempts to 
> mitigate some of the problem, but it's not going to work for such huge 
> overcommit.
> 

I agree that performance even with 256 vcpus would be terrible, and no
'real' users would run that way until the infrastructure can provide
reasonable performance.

The two uses I see for it are:

1. Stressing the usermode code. qemu's inability to properly do 64
vcpus today is not just due to the KVM kernel code; it's also due to
qemu itself. We're trying to avoid repeating that with /tools/kvm.

2. Preventing future scalability problems. Currently we can't do 1024
vcpus because it breaks coalesced MMIO, which IMO is not a valid reason
for failing to scale up to 1024 vcpus (and by scaling I mean running
without errors, regardless of performance).

-- 

Sasha.

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

