On 2/9/19 7:38 PM, Michael S. Tsirkin wrote:
> On Fri, Feb 08, 2019 at 02:05:09PM -0800, Alexander Duyck wrote:
>> On Fri, Feb 8, 2019 at 1:38 PM Michael S. Tsirkin <mst@xxxxxxxxxx> wrote:
>>> On Fri, Feb 08, 2019 at 03:41:55PM -0500, Nitesh Narayan Lal wrote:
>>>>>> I am also planning to try Michael's suggestion of using MAX_ORDER - 1.
>>>>>> However, I am still thinking about a workload which I can use to test its
>>>>>> effectiveness.
>>>>> You might want to look at doing something like min(MAX_ORDER - 1,
>>>>> HUGETLB_PAGE_ORDER). I know for x86 a 2MB page is the upper limit for
>>>>> THP, which is the page size most likely to be used with the guest.
>>>> Sure, thanks for the suggestion.
>>> Given that current hinting in the balloon is MAX_ORDER, I'd say
>>> share the code. If you feel a need to adjust down the road,
>>> adjust both of them, with actual testing showing gains.
>> Actually I'm left kind of wondering why we are even going through
>> virtio-balloon for this?
> Just look at what it does.
>
> It improves memory overcommit if guests are cooperative, and it does
> this by giving the hypervisor the addresses of pages which it can discard.
>
> It's just *exactly* like the balloon, with all the same limitations.
>
>> It seems like this would make much more sense
>> as core functionality of KVM itself for the specific architectures
>> rather than some side thing.
> Well, same as the balloon: whether it's useful to you at all
> very much depends on your workloads.
>
> This kind of cooperative functionality is good for co-located
> single-tenant VMs. That's pretty niche. The core things in KVM
> generally don't trust guests.
>
>> In addition, this could end up being
>> redundant when you start getting into either the s390 or PowerPC
>> architectures, as they already have means of providing unused page
>> hints.
> Interesting. Is there host support in KVM?
>
>> I have a set of patches I proposed that add similar functionality via
>> a KVM hypercall for x86 instead of doing it as part of a virtio
>> device [1]. I suspect the overhead of doing things this way is
>> much less than having to make multiple madvise system calls from QEMU
>> back into the kernel.
> Well, whether it's a virtio device is orthogonal to whether it's a
> madvise call, right? You can build vhost-pagehint, and that can
> handle requests in a VQ within the balloon and do it
> within the host kernel directly.
>
> Virtio rings let you pass multiple pages, so it's really hard to
> say which will win outright - maybe it's more important
> to coalesce exits.
>
> Nitesh, how about trying the same tests and reporting performance?

Noted, I can give it a try before my next posting.

>> One other concern that has been pointed out with my patchset, and that
>> would likely need to be addressed here as well, is what we do about
>> other hypervisors that decide to implement page hinting. We probably
>> should look at making this KVM/QEMU-specific code run through the
>> paravirtual infrastructure instead of tying into the x86 arch code
>> directly.
>>
>> [1] https://lkml.org/lkml/2019/2/4/903
>
> So virtio is a paravirtual interface; that's an argument for
> using it, then.
>
> In any case, please copy the Cc'd crowd on future versions of your patches.

--
Regards
Nitesh
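
For reference, a minimal sketch of the order cap suggested above: hint in chunks no larger than the smaller of MAX_ORDER - 1 and HUGETLB_PAGE_ORDER, so a single hint never exceeds the 2MB THP size the host is likely to back guest memory with on x86. The helper name is made up for illustration and assumes CONFIG_HUGETLB_PAGE:

#include <linux/kernel.h>   /* min_t() */
#include <linux/mmzone.h>   /* MAX_ORDER */
#include <linux/hugetlb.h>  /* HUGETLB_PAGE_ORDER */

/* Cap hints at the host's likely THP size (2MB on x86). */
static inline unsigned int page_hinting_order(void)
{
	return min_t(unsigned int, MAX_ORDER - 1, HUGETLB_PAGE_ORDER);
}

And a minimal userspace sketch of the host-side cost being compared against: each hinted range currently means a madvise() call from QEMU back into the kernel (function and variable names here are hypothetical; real QEMU code is more involved):

#include <stdio.h>
#include <sys/mman.h>

static int discard_hinted_range(void *host_va, size_t len)
{
	/*
	 * MADV_DONTNEED releases the backing pages; a later guest access
	 * faults in fresh zero-filled pages. One such syscall per hinted
	 * range is the overhead the hypercall approach tries to avoid.
	 */
	if (madvise(host_va, len, MADV_DONTNEED) < 0) {
		perror("madvise");
		return -1;
	}
	return 0;
}

Whether batching many such ranges through a virtqueue (and possibly vhost) beats a per-range hypercall is exactly the trade-off Michael asks to be measured.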