Re: [RFC 00/10] KVM: Add TMEM host/guest support

On 06/12/2012 07:40 PM, Dan Magenheimer wrote:
> > From: Avi Kivity [mailto:avi@xxxxxxxxxx]
> > Subject: Re: [RFC 00/10] KVM: Add TMEM host/guest support
>
> I started off with a point-by-point comment on most of your
> responses about the tradeoffs of how tmem works, but decided
> it best to simply say we disagree and kvm-tmem will need to
> prove who is right.

That is why I am asking for benchmarks.

> > >> Sorry, no, first demonstrate no performance regressions, then we can
> > >> talk about performance improvements.
> > >
> > > Well that's an awfully hard bar to clear, even with any of the many
> > > changes being merged every release into the core Linux mm subsystem.
> > > Any change to memory management will have some positive impacts on some
> > > workloads and some negative impacts on others.
> > 
> > Right, that's too harsh.  But these benchmarks show a doubling (or even
> > more) of cpu overhead, and that is whether the cache is effective or
> > not.  That is simply way too much to consider.
>
> One point here... remember you have contrived a worst case
> scenario.  The one case Sasha provided outside of that contrived
> worst case, as you commented, looks very nice.  So the costs/benefits
> remain to be seen over a wider set of workloads.

While the workload is contrived, decreasing benefits with increasing
cache size is nothing new.  And here tmem is increasing the cost of all
caching, without guaranteeing any return.

> Also, even that contrived case should look quite a bit better
> with WasActive properly implemented.

I'll be happy to see benchmarks of improved code.

> > Look at the block, vfs, and mm layers.  Huge pains have been taken to
> > batch everything and avoid per-page work -- 20 years of not having
> > enough cycles.  And here you throw all this out of the window with
> > per-page crossing of the guest/host boundary.
>
> Well, to be fair, those 20 years of effort were because
> (1) disk seeks are a million times slower than an in-RAM page
> copy and (2) SMP systems were rare and expensive.  The
> world changes...

I don't see how SMP matters here.  You have more cores, you put more
work on them; you don't expect the OS or hypervisor to consume them for
you.  In any case you're consuming this cpu on the same core as the
guest, so you're reducing throughput (if caching is ineffective).
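
To put the per-page cost in concrete terms, here is a rough userspace
analogy (not kvm-tmem code): it stands in a cheap syscall for the
hypercall/vmexit that tmem takes on every page, next to the bare in-RAM
page copy.  The page count and the choice of getppid() as the
"crossing" are purely illustrative, and a real vmexit costs
considerably more than getppid():

/* rough analogy, not kvm-tmem code: one cheap syscall per 4K page as a
 * stand-in for the guest/host crossing, vs. the bare page copy */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define PAGE_SIZE 4096
#define NPAGES    (256 * 1024)          /* 1GB worth of page copies */

static double now(void)
{
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
        char *src = malloc(PAGE_SIZE), *dst = malloc(PAGE_SIZE);
        double t0, copy_only, copy_plus_crossing;
        long i;

        memset(src, 0xaa, PAGE_SIZE);

        t0 = now();
        for (i = 0; i < NPAGES; i++)
                memcpy(dst, src, PAGE_SIZE);    /* in-RAM copy only */
        copy_only = now() - t0;

        t0 = now();
        for (i = 0; i < NPAGES; i++) {
                memcpy(dst, src, PAGE_SIZE);    /* same copy ...           */
                (void)getppid();                /* ... plus one "crossing" */
        }
        copy_plus_crossing = now() - t0;

        printf("copy only:            %.2fs\n", copy_only);
        printf("copy + per-page trap: %.2fs\n", copy_plus_crossing);
        return 0;
}

The absolute numbers depend on the hardware; the point is that the
crossing is paid per page rather than amortized over a batch.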

Disks are still slow, even fast flash arrays, but tmem is not the only
solution to that problem.  You say ballooning has not proven itself in
this area, but that doesn't mean it has been proven not to work; and it
doesn't suffer from the inefficiency of crossing the guest/host boundary
on every page.
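
The shape of the interface is the other difference.  Illustrative
structures only (not the virtio-balloon or kvm-tmem ABI; the field
names are made up): a balloon-style request can name many page frames
per guest/host crossing, while a tmem-style put names exactly one.

/* illustrative only: the point is the shape, not the field names */
#include <stdint.h>

#define BATCH_PFNS 256          /* e.g. virtio-balloon batches PFNs in
                                   arrays per virtqueue kick */

struct balloon_batch_req {      /* one crossing, many pages */
        uint32_t npfns;
        uint32_t pfns[BATCH_PFNS];
};

struct tmem_put_req {           /* one crossing, exactly one page */
        uint64_t pool_id;       /* which tmem pool */
        uint64_t object_id;     /* e.g. the inode */
        uint32_t index;         /* page offset within the object */
        uint64_t guest_pfn;     /* page the host is asked to copy now */
};

With the first shape the fixed cost of the crossing is amortized over
the batch; with the second it is paid again for every page.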

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
