Re: Changing guest I/O path in KVM

On Thu, Oct 10, 2013 at 02:27:05PM +0000, Hai Nguyen wrote:
> Stefan Hajnoczi <stefanha <at> gmail.com> writes:
> > > Thanks, Stefan!
> > > 
> > > In the network file system (NFS) approach you mentioned, I understand
> > > that the I/O requests will go directly from VM1 to VM2 via the network
> > > before reaching QEMU for I/O handling. Please correct me if I am wrong.
> > 
> > The network I/O will go through kvm.ko and either vhost_net.ko or QEMU
> > userspace, but you can encrypt network traffic.  Again, I don't really
> > see the point since the hypervisor has access to guest RAM and CPU state
> > - it can always spy on the guest.
> > 
> > > > Anyway, QEMU doesn't have a built-in way to bounce the I/O through
> > > > another guest without seeing the data first.
> > > 
> > > I want to have I/O requests from VM1 go to VM2 first. In the current
> > > design of the kvm kernel module, kvm forwards I/O requests to QEMU by
> > > setting some fields in the 'vcpu' structure, and each QEMU thread keeps
> > > checking the content of its corresponding vcpu. Is this the part I can
> > > change to implement my I/O path?
> > 
> > Yes.  You'll need to share the guest RAM so the other process can
> > read/write guest I/O buffers.
> > 
> > Stefan
> 
> 
> Thank you very much!
> 
> We are working with a threat model where we do trust the KVM hypervisor. 
> However, we would like to route the I/O from our VM1 into another VM2, where 
> it can service the I/O appropriately (e.g., do intrusion detection, etc.). 
> For performance reasons, we'd like this I/O path to go directly from VM1 to 
> VM2, rather than from VM1 through QEMU to VM2.
> 
> Is this doable?

Technically doable, yes.  Whether it's something that can be merged
upstream depends on the quality of patches, how invasive it is, whether
there is a performance cost to the majority of users who don't use it,
etc.

For networking, the Linux kernel should already have features you can
use, such as tunnels, port mirroring, or Open vSwitch.  I don't think
you need to modify KVM for this.

For disk I/O, the options are, in order of invasiveness:
1. Point the guest at network storage that is provided by the
   inspection guest.  Con: the guest requires configuration
2. Use the NBD client in QEMU to send I/O to the inspection guest.
   Pro: the guest does not need special configuration
   Con: may be slow
3. Write a QEMU block driver that offers the inspection functionality
   you need (a rough sketch follows this list).
   Con: requires modifying QEMU
4. Add I/O redirection to KVM/QEMU.
   Pro: potentially best performance
   Con: most invasive, highest risk, most effort required
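
To make option 3 concrete, here is a rough, untested sketch of a
pass-through "inspection" block driver, modeled loosely on blkdebug and
blkverify.  The callback names and signatures are from the block layer
as it looks today and they change between releases, so check
block/block_int.h in your tree; the "inspect" names are placeholders
and the actual forwarding to the inspection guest is left as a stub:

/*
 * Rough sketch of a pass-through "inspection" block driver.  Every guest
 * read/write goes through the coroutine callbacks below before it is
 * passed on to the underlying image (bs->file), which is where you could
 * inspect the data or redirect it to another guest.
 */
#include "qemu/module.h"
#include "qapi/qmp/qdict.h"
#include "block/block_int.h"

typedef struct BDRVInspectState {
    /* Per-device state, e.g. a handle used to notify the inspection VM. */
    int dummy;
} BDRVInspectState;

static int inspect_open(BlockDriverState *bs, QDict *options, int flags)
{
    /* bs->file (the image below us) is opened by the generic block
     * layer before this callback runs; nothing to do in this sketch. */
    return 0;
}

static void inspect_close(BlockDriverState *bs)
{
}

static int64_t inspect_getlength(BlockDriverState *bs)
{
    return bdrv_getlength(bs->file);
}

static coroutine_fn int inspect_co_readv(BlockDriverState *bs,
        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
    /* Inspect or redirect the read request here, then pass it down. */
    return bdrv_co_readv(bs->file, sector_num, nb_sectors, qiov);
}

static coroutine_fn int inspect_co_writev(BlockDriverState *bs,
        int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
    /* The write payload is visible in qiov before it reaches the image. */
    return bdrv_co_writev(bs->file, sector_num, nb_sectors, qiov);
}

static BlockDriver bdrv_inspect = {
    .format_name    = "inspect",
    .instance_size  = sizeof(BDRVInspectState),
    .bdrv_open      = inspect_open,
    .bdrv_close     = inspect_close,
    .bdrv_getlength = inspect_getlength,
    .bdrv_co_readv  = inspect_co_readv,
    .bdrv_co_writev = inspect_co_writev,
};

static void bdrv_inspect_init(void)
{
    bdrv_register(&bdrv_inspect);
}

block_init(bdrv_inspect_init);

You would build it by dropping the file into block/ and adding it to
block/Makefile.objs, then attach a disk with something like
format=inspect.  All guest reads and writes then pass through the two
coroutine callbacks, which is where your inspection or redirection
logic would go.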

Finally, read-only point-in-time snapshots are being added to QEMU
("image fleecing").  If a dirty bitmap feature is also contributed, you
could perform efficient offline disk inspection (i.e. in the background
instead of in the I/O path).

I hope this helps you decide how to approach this.  Offline inspection
seems like the cleanest and most supportable approach, but the dirty
bitmap API has not been written and the image fleecing code is not
fully merged yet - perhaps this is an area you'd like to contribute to?

Stefan



