> From: Sasha Levin [mailto:levinsasha928@xxxxxxxxx]
> Subject: Re: [RFC 00/10] KVM: Add TMEM host/guest support
>
> I re-ran benchmarks in a single user environment to get more stable
> results, increasing the test files to 50GB each.

Nice results Sasha!

The non-increase in real time and the significant increase in sys time
demonstrate that tmem should have little or no impact as long as there
are sufficient unused CPU cycles... since tmem is most active on I/O
bound workloads, when there tends to be lots of idle cpu time, tmem is
usually "free".

But if KVM perfectly load balances across the sum of all guests so that
there is little or no cpu idle time (rare but possible), there will be
a measurable impact.  For a true worst case analysis, try running with
cpus=1.  (One can argue that anyone who runs KVM on a single cpu system
deserves what they get ;-)  But the "WasActive" patch[1] (if adapted
slightly for the KVM-TMEM patch) should eliminate the negative impact
on sys time of streaming workloads even on cpus=1.

> From: Avi Kivity [mailto:avi@xxxxxxxxxx]
> <this comment was on Sasha's first round of benchmarking>
> These results give about 47 usec per page system time (seems quite
> high), whereas the difference is 0.7 usec per page (seems quite low,
> for 1 or 2 syscalls per page). Can you post a snapshot of kvm_stat
> while this is running?

Note that the userspace difference is likely all noise.  No tmem/zcache
activities should be done in userspace.  All the activities result from
either a page fault or kswapd.  Since each streamed page (assuming no
WasActive patch) should result in one hypercall and one lzo1x page
compression, I suspect that 47 usec is a good estimate of the sum of
those on Sasha's machine.

[1] https://lkml.org/lkml/2012/1/25/300
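
P.S. For anyone who wants to ballpark the compression half of that
per-page cost on their own hardware, here is a rough userspace sketch.
It is not the zcache code path itself (in-kernel numbers will differ,
and the data pattern below is an arbitrary assumption), it just times
lzo1x compression of a 4KB page using liblzo2; build with
"gcc -O2 lzo_page_bench.c -llzo2 -o lzo_page_bench".

/* Rough sketch: estimate usec per 4KB lzo1x page compression. */
#include <stdio.h>
#include <time.h>
#include <lzo/lzo1x.h>

#define PAGE_SIZE	4096
#define ITERATIONS	100000

int main(void)
{
	static unsigned char src[PAGE_SIZE];
	/* worst-case lzo1x output size for PAGE_SIZE input */
	static unsigned char dst[PAGE_SIZE + PAGE_SIZE / 16 + 64 + 3];
	static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
	struct timespec t0, t1;
	lzo_uint dst_len = sizeof(dst);
	long i;
	double ns;

	if (lzo_init() != LZO_E_OK) {
		fprintf(stderr, "lzo_init failed\n");
		return 1;
	}

	/* fill the page with mildly compressible data (assumption) */
	for (i = 0; i < PAGE_SIZE; i++)
		src[i] = (unsigned char)(i & 0x3f);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERATIONS; i++) {
		dst_len = sizeof(dst);
		lzo1x_1_compress(src, PAGE_SIZE, dst, &dst_len, wrkmem);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
	printf("~%.2f usec per 4KB page compression (%lu bytes out)\n",
	       ns / ITERATIONS / 1000.0, (unsigned long)dst_len);
	return 0;
}

Whatever this reports, the remainder of the measured per-page sys time
would then be attributable to the hypercall plus fault/kswapd overhead.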