Hi Wei,

On 09/14/2017 10:07 PM, Wei Wang wrote:
> On 09/14/2017 12:17 AM, Nitesh Narayan Lal wrote:
>> Changelog in v2:
>> - Addressed comments provided in v1
>> - Implementation to pass the global hyperlist (carrying pages which
>>   are to be freed) to the host by using the existing virtio-balloon
>>   infrastructure (deflate_vq).
>>
>> I am using the synchronous virtqueue_kick API, for which the changes
>> in virtio_ring.c and virtio.h are picked from Wei Wang's patch set
>> for virtio-balloon enhancement [2]. (Wei, how would you like me to
>> credit you in the final patch?) I am using this API because it
>> doesn't require any memory allocation in order to pass the list of
>> pfns to the host, which is necessary because the seqlock used in
>> arch_free_page prevents memory allocations.
>>
>> Query:
>> - So far I don't have any implementation in QEMU. I have added a few
>>   prints in QEMU's balloon code to ensure that the guest page hinting
>>   kick is landing in the right location. As per my understanding, on
>>   using deflate_vq, QEMU's "virtio_balloon_handle_output" in
>>   hw/virtio/virtio-balloon.c should be invoked after the kick. But in
>>   my case, for some reason, that is not getting invoked. Any
>>   suggestions on where I am going wrong?
>>
>> [1] http://www.spinics.net/lists/kvm/msg153666.html
>> [2] http://www.spinics.net/lists/kvm/msg152734.html
>
> Hi Nitesh,
>
> I had a quick look at this approach, and have some high-level
> questions:
>
> 1) What's the usage of the feature, in addition to accelerating live
> migration?

I believe this solution will be useful in every use case where we do
not want to intervene manually to reclaim/allocate memory via
virtio-balloon and would like it to be done in an automated manner.

> 2) Are the free page hints continuously added to and removed from the
> per-CPU arrays during the whole lifecycle of the guest, whenever
> alloc/free is invoked?
Yes, they are added continuously, but whether they need to be removed
from the list is decided only when the list is full.

> 3) The per-CPU arrays are sync-ed to a hypervisor page array under a
> lock. If all the CPUs happen to do the sync at the same time, the
> later ones may possibly spin too long?

Yes, they may. This is only the first version of the interface,
though, and we are looking at a batched and/or lockless interface. For
instance, instead of doing one hypercall per page, I could pass the
hypervisor a page full of pfn/length tuples.

> Best,
> Wei

--
Regards
Nitesh