Re: [PATCH 5/5] ioeventfd: Introduce KVM_IOEVENTFD_FLAG_SOCKET

On Thu, 2011-07-14 at 15:46 +0300, Avi Kivity wrote:
> On 07/14/2011 03:32 PM, Sasha Levin wrote:
> > On Thu, 2011-07-14 at 14:54 +0300, Avi Kivity wrote:
> > >  On 07/14/2011 01:30 PM, Pekka Enberg wrote:
> > >  >  We want to use 8250 emulation instead of virtio-serial because it's
> > >  >  more compatible with kernel debugging mechanisms. Also, it makes
> > >  >  debugging virtio code much easier when we don't need to use virtio to
> > >  >  deliver console output while debugging it. We want to make it fast so
> > >  >  that we don't need to switch over to another console type after early
> > >  >  boot.
> > >  >
> > >  >  What's unreasonable about that?
> > >
> > >  Does virtio debugging really need super-fast serial?  Does it need
> > >  serial at all?
> > >
> >
> > Does it need super-fast serial? No, although it's nice. Does it need
> > serial at all? Definitely.
> 
> Why?  virtio is mature.  It's not some early boot thing which fails and 
> kills the guest.  Even if you get an oops, usually the guest is still alive.

virtio is mature, /tools/kvm isn't :)

> 
> > It's not just virtio that can fail when running on virtio-console; it's also
> > the threadpool, the eventfd mechanism, and even the PCI management
> > module. You can't really debug it if you can't depend on your debugging
> > mechanism to work properly.
> 
> Wait, those are guest things, not host things.

Yes, as you said in the previous mail, both KVM and virtio are very
stable. /tools/kvm is what was being debugged most of the time.


> > So far, serial is the simplest, most effective, and never-failing method
> > we've had for working on guests; I don't see how we can work without it at
> > the moment.
> 
> I really can't remember the last time I used the serial console for the 
> guest.  In the early early days, sure, but now?
> 

I don't know; if it works fine, why not use it when you need a simple
serial connection?

It's also useful for kernel hackers who break early boot things :)
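
For context, since this thread is about speeding up the 8250 emulation
with ioeventfds: below is a rough, illustrative sketch (not code from the
patch) of how a PIO ioeventfd can already be registered on the UART's THR
port with the existing KVM_IOEVENTFD ioctl. vm_fd is assumed to be an
already-created VM file descriptor, and error handling is omitted. Note
that a plain ioeventfd only signals that a write happened; it does not
carry the written byte, which is the gap the proposed
KVM_IOEVENTFD_FLAG_SOCKET variant is meant to address.

    #include <stdint.h>
    #include <string.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Register an eventfd that fires on guest writes to the 8250 THR
     * (port 0x3f8, one byte wide).  vm_fd is an assumed, already-created
     * KVM VM file descriptor; real code would check return values. */
    static int serial_thr_ioeventfd(int vm_fd)
    {
            int efd = eventfd(0, 0);
            struct kvm_ioeventfd ioevent;

            memset(&ioevent, 0, sizeof(ioevent));
            ioevent.addr  = 0x3f8;                  /* COM1 THR */
            ioevent.len   = 1;                      /* byte-wide register */
            ioevent.fd    = efd;
            ioevent.flags = KVM_IOEVENTFD_FLAG_PIO; /* port I/O, not MMIO */

            ioctl(vm_fd, KVM_IOEVENTFD, &ioevent);

            /* A worker thread can now read(efd, ...) to learn that a write
             * happened without a heavyweight exit to userspace - but the
             * data byte itself is lost, hence the FLAG_SOCKET proposal. */
            return efd;
    }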

> > I agree here that the performance even with 256 vcpus would be terrible
> > and no 'real' users would be doing that until the infrastructure could
> > provide reasonable performance.
> >
> > The two uses I see for it are:
> >
> > 1. Stressing out the usermode code. The reason qemu can't properly do
> > 64 vcpus now is not just the KVM kernel code; it's also qemu itself.
> > We're trying to avoid doing the same with /tools/kvm.
> 
> It won't help without a 1024 cpu host.  As soon as you put a real 
> workload on the guest, it will thrash and any scaling issue in qemu or 
> tools/kvm will be drowned in the noise.
> 
> > 2. Preventing future scalability problems. Currently we can't do 1024
> > vcpus because it breaks coalesced MMIO - which is IMO not a valid reason
> > for not scaling up to 1024 vcpus (and by scaling I mean running without
> > errors, without regard to performance).
> 
> That's not what scaling means (not to say that it wouldn't be nice to 
> fix coalesced mmio).
> 
> btw, why are you so eager to run 1024 vcpu guests? usually, if you have 
> a need for such large systems, you're really performance sensitive.  
> It's not a good case for virtualization.
> 
> 

I may have gone too far with 1024; I've only tested with 254 vcpus so
far - I'll change that in my patch.

It's also not just a KVM issue. Take, for example, the RCU issue we were
able to detect with /tools/kvm simply by starting a guest with more than
30 vcpus and noticing that RCU was broken in a recent kernel.

Testing the kernel in guests with a large number of vcpus or a large
amount of virtual memory might prove beneficial not only for KVM itself.
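
As a small illustration of the limits involved (again, not part of the
patch): userspace can ask the kernel what it claims to support before
creating vcpus, via KVM_CHECK_EXTENSION on the /dev/kvm fd -
KVM_CAP_NR_VCPUS for the recommended maximum and, on kernels that define
it, KVM_CAP_MAX_VCPUS for the hard limit. A rough sketch:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Print the recommended and (if the kernel reports it) maximum vcpu
     * counts.  Illustrative only; a VMM would use these to cap --cpus. */
    int main(void)
    {
            int kvm_fd = open("/dev/kvm", O_RDWR);
            int nr, max;

            if (kvm_fd < 0)
                    return 1;

            nr  = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_NR_VCPUS);
            max = ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_MAX_VCPUS);

            printf("recommended vcpus: %d\n", nr);
            printf("maximum vcpus:     %d\n", max > 0 ? max : nr);
            return 0;
    }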

-- 

Sasha.
