Re: [RFC PATCH v6 01/92] kvm: introduce KVMI (VM introspection subsystem)

On Tue, Aug 13, 2019 at 02:09:51PM +0200, Paolo Bonzini wrote:
> On 13/08/19 13:57, Adalbert Lazăr wrote:
> >> The refcounting approach seems a bit backwards, and AFAICT is driven by
> >> implementing unhook via a message, which also seems backwards.  I assume
> >> hook and unhook are relatively rare events and not performance critical,
> >> so make those the restricted/slow flows, e.g. force userspace to quiesce
> >> the VM by making unhook() mutually exclusive with every vcpu ioctl() and
> >> maybe anything that takes kvm->lock. 
> >>
> >> Then kvmi_ioctl_unhook() can use kthread_stop() and kvmi_recv() just needs
> >> to check kthread_should_stop().
> >>
> >> That way kvmi doesn't need to be refcounted since it's guaranteed to be
> >> alive if the pointer is non-null.  Eliminating the refcounting will clean
> >> up a lot of the code by eliminating calls to kvmi_{get,put}(), e.g.
> >> wrappers like kvmi_breakpoint_event() just check vcpu->kvmi, or maybe
> >> even get dropped altogether.
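
Roughly the shape I had in mind, as a sketch only -- kvmi_sock_recv(),
kvmi_free() and the struct layout below are made-up stand-ins rather than
names from the series, and I'm assuming a kvm->kvmi pointer here (the
per-vCPU pointer works the same way):

#include <linux/kthread.h>
#include <linux/net.h>

struct kvm_introspection {
        struct task_struct *recv_worker;
        struct socket *sock;
};

static int kvmi_recv(void *arg)
{
        struct kvm *kvm = arg;

        while (!kthread_should_stop()) {
                /* returns on error/close; the unhook path shuts the
                 * socket down first so this doesn't block forever */
                if (kvmi_sock_recv(kvm->kvmi) < 0)
                        break;
        }
        return 0;
}

static int kvmi_ioctl_unhook(struct kvm *kvm)
{
        if (!kvm->kvmi)
                return -EINVAL;

        /* vcpu ioctls are excluded while this runs, so nothing else
         * can be dereferencing kvm->kvmi */
        kernel_sock_shutdown(kvm->kvmi->sock, SHUT_RDWR);
        kthread_stop(kvm->kvmi->recv_worker);
        kvmi_free(kvm->kvmi);
        kvm->kvmi = NULL;
        return 0;
}

The lifetime is then tied to the synchronous ioctl, so the event wrappers
never need a reference, only the NULL check.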
> > 
> > The unhook event has been added to cover the following case: while the
> > introspection tool runs in another VM, both VMs (the virtual appliance
> > and the introspected VM) could be paused by the user. We needed a way
> > to signal this to the introspection tool and give it time to unhook
> > (the introspected VM has to run and execute the introspection commands
> > during this phase). The receiving thread quits when the socket is closed
> > (by QEMU or by the introspection tool).

Why does closing the socket require destroying the kvmi object?  E.g. can
it be marked as defunct or whatever and only fully removed on a synchronous
unhook from userspace?  Re-hooking could either require said unhook, or
maybe reuse the existing kvmi object with a new socket.
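
Something along these lines, purely as a sketch -- kvmi_sock_gone(),
kvmi_ioctl_rehook() and the defunct flag are invented for the example,
not something in the series:

/* socket closed by QEMU or by the tool: keep the kvmi object around */
static void kvmi_sock_gone(struct kvm *kvm)
{
        WRITE_ONCE(kvm->kvmi->defunct, true);
}

/* re-hook: reuse the existing kvmi object, only the socket (and the
 * recv worker, not shown) is new */
static int kvmi_ioctl_rehook(struct kvm *kvm, int fd)
{
        int err;
        struct socket *sock = sockfd_lookup(fd, &err);

        if (!sock)
                return err;

        mutex_lock(&kvm->lock);
        if (!kvm->kvmi || !READ_ONCE(kvm->kvmi->defunct)) {
                mutex_unlock(&kvm->lock);
                sockfd_put(sock);
                return -EEXIST;
        }
        kvm->kvmi->sock = sock;
        WRITE_ONCE(kvm->kvmi->defunct, false);
        mutex_unlock(&kvm->lock);
        return 0;
}

Event-sending paths would check the defunct flag and drop events instead
of tearing anything down, so the full teardown only ever happens in the
synchronous unhook ioctl.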

> > It's a bit unclear how, but we'll try to get rid of the refcount object,
> > which will remove a lot of code, indeed.
> 
> You can keep it for now.  It may become clearer how to fix it after the
> event loop is cleaned up.

By event loop, do you mean the per-vCPU jobs list?


