> From: Avi Kivity [mailto:avi@xxxxxxxxxx]
> Subject: Re: [RFC 00/10] KVM: Add TMEM host/guest support

I started off writing a point-by-point reply to most of your responses
about the tradeoffs of how tmem works, but decided it was best to simply
say that we disagree and that kvm-tmem will need to prove who is right.

> >> Sorry, no, first demonstrate no performance regressions, then we can
> >> talk about performance improvements.
> >
> > Well, that's an awfully hard bar to clear, even for the many changes
> > merged every release into the core Linux mm subsystem.  Any change to
> > memory management will have some positive impact on some workloads
> > and some negative impact on others.
>
> Right, that's too harsh.  But these benchmarks show a doubling (or even
> more) of cpu overhead, whether or not the cache is effective.  That is
> simply too much overhead to consider merging.

One point here... remember that you have contrived a worst-case
scenario.  The one case Sasha provided outside of that contrived worst
case looks, as you commented, very nice.  So the costs and benefits
remain to be seen over a wider set of workloads.  Also, even that
contrived case should look quite a bit better once WasActive is
properly implemented.

> Look at the block, vfs, and mm layers.  Great pains have been taken to
> batch everything and avoid per-page work -- 20 years of not having
> enough cycles.  And here you throw all of that out of the window with
> per-page crossings of the guest/host boundary.

Well, to be fair, those 20 years of effort were driven by the facts
that (1) disk seeks are a million times slower than an in-RAM page copy
and (2) SMP systems were rare and expensive.  The world changes...
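
To put rough numbers on the batching argument, here is a purely
illustrative model.  All of the cycle counts and the batch size are
assumptions chosen for the sake of argument, not measurements of
kvm-tmem:

#include <stdio.h>

int main(void)
{
	/* Assumed costs; illustrative only, not measured. */
	const long pages       = 262144;  /* 1 GB of 4 KB pages      */
	const long exit_cycles = 2000;    /* guest/host round trip   */
	const long copy_cycles = 1000;    /* copying one 4 KB page   */
	const long batch       = 64;      /* hypothetical batch size */

	/* One boundary crossing per page, as in per-page puts/gets. */
	long per_page = pages * (exit_cycles + copy_cycles);

	/* One boundary crossing amortized over each batch of pages. */
	long batched = pages * copy_cycles +
		       (pages / batch) * exit_cycles;

	printf("per-page crossings: %ld cycles\n", per_page);
	printf("batched crossings:  %ld cycles\n", batched);
	return 0;
}

Under these assumed numbers the per-page scheme costs roughly 2.9x the
batched one, which is the shape of the objection above; the real ratio
of course depends on the actual exit and copy costs.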