Re: [PATCH 5/5] ioeventfd: Introduce KVM_IOEVENTFD_FLAG_SOCKET

On 07/14/2011 01:30 PM, Pekka Enberg wrote:
Hi Avi,

On Thu, Jul 14, 2011 at 12:48 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
>>  Why does that matter? Why should we keep the emulation slow if it's
>>  possible to fix it?
>
>  Fixing things that don't need fixing has a cost.  In work, in risk, and in
>  maintainability.  If you can share this cost among other users (which is
>  certainly possible with socket mmio), it may still be worth it.  But just
>  making something faster is not sufficient, it has to be faster for a
>  significant number of users.

I don't think it needs to be faster for a *significant number* of users,
but yes, I completely agree that we need to make sure KVM gains more
than it costs.

Significant, for me, means it's measured as a percentage, not in digits on various limbs. 2% is a significant number of users; 5 users is not.

On Thu, Jul 14, 2011 at 12:48 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
>>  It's a fair question to ask if the benefits
>>  outweigh the added complexity but asking us to keep serial emulation
>>  slow because *you* think it's unrealistic is, well, unrealistic on
>>  your part!
>
>  Exactly where am I unrealistic?  Do you think there are many users who
>  suffer from slow serial emulation?

Again, I don't agree with you that there need to be many users for this
type of feature. That said, as a maintainer you seem to think there do,
and I'm obviously OK with that.

So if you've been saying 'this is too complex for too little gain' all
this time, I've misunderstood what you've been trying to say. The way
I've read your comments is "optimizing the serial console is stupid
because it's a useless feature", which is obviously not true because we
find it useful!

More or less. Note that "this" here is not socket mmio; for socket mmio, though, I do want to see a real use case.

>>  *You* brought up 1024 vcpus using serial console! Obviously optimizing
>>  something like that is stupid but we never claimed that we wanted to
>>  do something like that!
>
>  Either of them, independently, is unrealistic.  The example of them together
>  was just Levantine exaggeration; it wasn't meant to be taken literally.

I obviously don't agree that they're unrealistic independently.

We want to use 8250 emulation instead of virtio-serial because it's
more compatible with kernel debugging mechanisms. Also, it makes
debugging virtio code much easier when we don't need to use virtio to
deliver console output while debugging it. We want to make it fast so
that we don't need to switch over to another console type after early
boot.
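
For context, a minimal sketch of the mechanism in question, assuming a
vm_fd obtained via KVM_CREATE_VM: plain ioeventfd can already turn a
guest OUT to the 8250 transmit port into a lightweight eventfd signal,
but it drops the byte the guest wrote, and that gap is what the
proposed KVM_IOEVENTFD_FLAG_SOCKET in this series is meant to fill by
carrying the data over a socket.

	#include <string.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Sketch: wire COM1's transmit port to an eventfd so a guest
	 * OUT is handled in-kernel and merely signals userspace,
	 * instead of forcing a synchronous exit to the emulator.
	 * Note a plain ioeventfd only signals; the written byte is
	 * not delivered, hence the socket-backed variant proposed
	 * in this series. */
	static int wire_serial_ioeventfd(int vm_fd)
	{
		int efd = eventfd(0, 0);
		struct kvm_ioeventfd ioev;

		if (efd < 0)
			return -1;

		memset(&ioev, 0, sizeof(ioev));
		ioev.addr  = 0x3f8;                  /* COM1 data (THR) port */
		ioev.len   = 1;                      /* byte-wide OUT        */
		ioev.fd    = efd;
		ioev.flags = KVM_IOEVENTFD_FLAG_PIO; /* port I/O, not MMIO   */

		if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
			return -1;
		return efd;                          /* poll/read this fd    */
	}

With this in place the vcpu thread never returns to userspace on the
OUT; a separate thread can poll the returned fd and drain the console.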

What's unreasonable about that?

Does virtio debugging really need super-fast serial? Does it need serial at all?

Reasonably fast 1024 VCPUs would be great for testing kernel
configurations. KVM is not there yet, so we suggested raising the
hard limit from the current 64 VCPUs so that it's easier for people
such as ourselves to improve things. I don't understand why you think
that's unreasonable either!

You will never get reasonably fast 1024 vcpus on your laptop. As soon as your vcpus start doing useful work, they will thrash. The guest kernel expects reasonable latency on cross-cpu operations, and kvm won't be able to provide it with such overcommit. The PLE (pause-loop exiting) support attempts to mitigate some of the problem, but it's not going to work for such huge overcommit.

Every contended spin_lock() or IPI will turn into a huge spin in the worst case or a context switch in the best case. Performance will tank unless you're running some shared-nothing process-per-vcpu workload in the guest.
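
To illustrate with a simplified sketch (illustrative only, not the
kernel's actual lock code), a ticket-style acquire loop shows why a
preempted lock holder makes every waiting vcpu burn its timeslice:

	/* Simplified ticket lock, for illustration only. If the vcpu
	 * holding the lock is preempted by the host, every waiter
	 * spins here for its whole timeslice. PLE detects the
	 * repeated PAUSE and forces an exit so the host can
	 * deschedule the waiter, trading the spin for a context
	 * switch; better, but still far from bare metal. */
	struct ticket_lock {
		volatile unsigned int next;   /* next ticket to hand out */
		volatile unsigned int owner;  /* ticket being served     */
	};

	static void ticket_lock_acquire(struct ticket_lock *lock)
	{
		unsigned int ticket = __sync_fetch_and_add(&lock->next, 1);

		while (lock->owner != ticket)
			__builtin_ia32_pause();  /* the PAUSE that PLE watches */
	}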

The only way to get reasonable 1024 vcpu performance is to run it on a 1024 cpu host. People who have such machines are usually interested in realistic workloads, and that means much smaller guests. If you do want to run 1024-on-1024, there is a lot of work in getting NUMA to function correctly; what we have now is not sufficient for large machines with large NUMA factors.

On Thu, Jul 14, 2011 at 12:48 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
>>  As for 1024 vcpus, we already had the discussion where we explained
>>  why we thought it was a good idea not to have such a low hard vcpu
>>  limit for vcpus.
>
>  I can't say I was convinced.  It's pretty simple to patch the kernel if you
>  want to engage in such experiments.  We did find something that works out
>  (the soft/hard limits), but it's still overkill.

I thought you were convinced that KVM_CAP_MAX_VCPUS was reasonable. I
guess I misunderstood your position then.

It's "okay", but no more. If I were Linus I'd say it's scalability masturbation.

On Thu, Jul 14, 2011 at 12:48 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
>  There's a large attitude mismatch between tools/kvm developers and kvm
>  developers (or at least me): tools/kvm is growing rapidly, adding features
>  and improving stability at a fast pace.  kvm on the other hand is mature and
>  a lot more concerned with preserving and improving stability than with
>  adding new features.  The fact is, kvm is already very feature rich and very
>  performant, so we're at a very different place in the
>  performance/features/stability scales.

Yes, we're at different places but we definitely appreciate the
stability and performance of KVM and have no interest in disrupting
that. I don't expect you to merge our patches when you think they're
risky or not worth the added complexity. So there's no attitude
mismatch there.

I simply don't agree with some of your requirements (significant
number of users) or some of the technical decisions (VCPU hard limit
at 64).

It's great to have a disagreement without descending into ugly flamewars, I appreciate that.

--
error compiling committee.c: too many arguments to function
