Re: [RFC 00/10] KVM: Add TMEM host/guest support

On 06/06/2012 04:07 PM, Sasha Levin wrote:
> This patch series adds support for passing TMEM commands between KVM guests
> and the host. This opens the possibility to use TMEM cross-guests and
> possibly across hosts with RAMster.
> 
> Since frontswap was merged in the 3.4 cycle, the kernel now has all facilities
> required to work with TMEM. There is no longer a dependency on out of tree
> code.
> 
> We can split this patch series into two:
> 
>  - The guest side, which is basically two shims that proxy mm/cleancache.c
>  and mm/frontswap.c requests from the guest back to the host. This is done
>  using a new KVM_HC_TMEM hypercall.
> 
>  - The host side, which is a rather small shim which connects KVM to zcache.
> 
> 
> It's worth noting that this patch series doesn't have any significant logic in
> it, and is mostly a collection of shims to pass TMEM commands across hypercalls.
> 
> I ran benchmarks using both the "streaming test" proposed by Avi, and some
> general fio tests. Since the fio tests showed similar results to the
> streaming test, and no anomalies, here is the summary of the streaming tests:
> 
> First, trying to stream a 26GB random file without KVM TMEM:
> real    7m36.046s
> user    0m17.113s
> sys     5m23.809s
> 
> And with KVM TMEM:
> real    7m36.018s
> user    0m17.124s
> sys     5m28.391s

These results give about 47 usec per page of system time (seems quite
high), whereas the difference is 0.7 usec per page (seems quite low, for
1 or 2 hypercalls per page).  Can you post a snapshot of kvm_stat while
this is running?


> 
>  - No significant difference.
> 
> Now, trying to stream a 16GB file that compresses nicely, first without KVM TMEM:
> real    5m10.299s
> user    0m11.311s
> sys     3m40.139s
> 
> And a second run without dropping cache:
> real    4m33.951s
> user    0m10.869s
> sys     3m13.789s
> 
> Now, with KVM TMEM:
> real    4m55.528s
> user    0m11.119s
> sys     3m33.243s

How is the first run faster?  Is it not doing extra work, pushing pages
to the host?

> 
> And a second run:
> real    2m53.713s
> user    0m7.971s
> sys     2m29.807s

A nice result, yes.

> 
> So KVM TMEM shows a nice performance increase once it can store pages on the host.

How was caching set up?  cache=none (in qemu terms) is most
representative, but cache=writeback also allows the host to cache guest
pages; cache=writeback with cleancache enabled in the host should give
the same effect as KVM TMEM, but with an extra copy through the host
pagecache instead of the extra hypercalls.  It would be good to see
results for all three settings.
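Concretely, the three configurations being asked for look roughly like this on the qemu command line (image path and the rest of the invocation are placeholders, not from the original series):

```shell
# 1. cache=none: O_DIRECT, host pagecache bypassed -- the baseline
qemu-system-x86_64 -drive file=guest.img,cache=none ...

# 2. cache=writeback: host pagecache caches guest disk data (extra copy)
qemu-system-x86_64 -drive file=guest.img,cache=writeback ...

# 3. cache=none again, but with KVM TMEM / cleancache enabled in guest
#    and host, so host-side caching happens via hypercalls instead
qemu-system-x86_64 -drive file=guest.img,cache=none ...
```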

-- 
error compiling committee.c: too many arguments to function