Re: [Xen-devel] [PATCH RFC 00/39] x86/KVM: Xen HVM guest support

On 2019-04-08 5:35 p.m., Stefano Stabellini wrote:
On Mon, 8 Apr 2019, Joao Martins wrote:
On 4/8/19 11:42 AM, Juergen Gross wrote:
On 08/04/2019 12:36, Joao Martins wrote:
On 4/8/19 7:44 AM, Juergen Gross wrote:
On 12/03/2019 18:14, Joao Martins wrote:
On 2/22/19 4:59 PM, Paolo Bonzini wrote:
On 21/02/19 12:45, Joao Martins wrote:
On 2/20/19 9:09 PM, Paolo Bonzini wrote:
On 20/02/19 21:15, Joao Martins wrote:
  2. PV Driver support (patches 17 - 39)

  We start by redirecting hypercalls from the backend to routines
  which emulate the behaviour that PV backends expect, i.e. grant
  table and interdomain events. Next, we add support for late
  initialization of xenbus, followed by implementing
  frontend/backend communication mechanisms (i.e. grant tables and
  interdomain event channels). Finally, we introduce xen-shim.ko,
  which sets up a limited Xen environment. This uses the added
  functionality of Xen-specific shared memory (grant tables) and
  notifications (event channels).

I am a bit worried by the last patches; they seem really brittle and
prone to breakage.  I don't know Xen well enough to understand whether the
lack of support for GNTMAP_host_map is fixable, but if not, you have to
define a completely different hypercall.

I guess Ankur already answered this, so just to stack this on top of his comment.

xen_shim_domain() is only meant to handle the case where the backend
has (or can have) full access to guest memory [i.e. netback and blkback would
work with similar assumptions as vhost?]. For the normal case, where a backend
*in a guest* maps and unmaps other guest memory, this is not applicable and
these changes don't affect that case.

IOW, the PV backend here sits in the hypervisor, and the hypercalls aren't
actual hypercalls but rather invocations of shim_hypercall(). The call chain
would go more or less like:

<netback|blkback|scsiback>
  gnttab_map_refs(map_ops, pages)
    HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,...)
      shim_hypercall()
        shim_hcall_gntmap()

Our reasoning was that, given we are already in KVM, why map a page when the
user (i.e. the kernel PV backend) is the host kernel itself? The lack of
GNTMAP_host_map is how the shim determines that its user doesn't want to map
the page. Also, as Ankur pointed out, there's the separate issue that PV
backends always need a struct page to reference the device in-flight data.
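
Just to illustrate the point (this is not the code in the series;
shim_gref_to_page() and shim_alloc_handle() are hypothetical helpers), the
shim could key off the missing GNTMAP_host_map flag roughly like this:

/*
 * Illustrative sketch only: resolve a gref straight to a struct page
 * when the caller did not ask for GNTMAP_host_map.
 */
#include <linux/errno.h>
#include <linux/mm.h>
#include <xen/interface/grant_table.h>

static int shim_hcall_gntmap(struct gnttab_map_grant_ref *op)
{
	struct page *page;

	if (op->flags & GNTMAP_host_map) {
		/* Mapping into another address space isn't supported here. */
		op->status = GNTST_general_error;
		return -EOPNOTSUPP;
	}

	/*
	 * The backend runs in the host kernel and already has access to
	 * guest memory, so just look up gref -> struct page.
	 */
	page = shim_gref_to_page(op->dom, op->ref);	/* hypothetical */
	if (!page) {
		op->status = GNTST_bad_gntref;
		return -EINVAL;
	}

	op->dev_bus_addr = page_to_phys(page);
	op->handle = shim_alloc_handle(page);		/* hypothetical */
	op->status = GNTST_okay;
	return 0;
}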

Ultimately it's up to the Xen people.  It does make their API uglier,
especially the in/out change for the parameter.  If you can at least
avoid that, it would alleviate my concerns quite a bit.

In my view, we have two options overall:

1) Make explicit the changes we have to make to the PV drivers in order
to support xen_shim_domain(). This could mean e.g. a) adding a callback
argument to gnttab_map_refs() that is invoked for every page that gets looked up
successfully, and inside this callback the PV driver may update its tracking
page. Here we no longer have the in/out parameter in gnttab_map_refs, and all
shim_domain-specific bits would be a little more abstracted from Xen PV
backends. See the netback example below the scissors mark, and the rough
sketch after the two options. Or b) have some sort of translate_gref() and
put_gref() API that Xen PV drivers use, which would make it even more explicit
that there are no grant ops involved. The latter is more invasive.

2) The second option is to support guest grant mapping/unmapping [*] to allow
hosting PV backends inside the guest. This would remove the Xen changes in this
series completely. But it would require another guest to act
as netback/blkback/xenstored, and would perform worse than 1) (though, in theory,
it would be equivalent to what Xen does with grants/events). The only change to
Linux Xen code is adding xenstored domain support, but that is useful on its own,
outside the scope of this work.
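
To make 1.a) a bit more concrete, here is an illustrative-only sketch of what
a callback-based gnttab_map_refs() variant might look like; the
gnttab_map_refs_cb() name and the callback prototype are made up for this
sketch and are not the posted netback change:

/*
 * Sketch of option 1.a): a per-page callback lets the PV backend update
 * its own tracking page, avoiding the in/out parameter change.
 */
#include <asm/xen/hypercall.h>
#include <xen/interface/grant_table.h>

typedef void (*gnttab_page_cb_t)(void *data, unsigned int idx,
				 struct page *page);

int gnttab_map_refs_cb(struct gnttab_map_grant_ref *map_ops,
		       struct page **pages, unsigned int count,
		       gnttab_page_cb_t cb, void *data)
{
	unsigned int i;
	int ret;

	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
					map_ops, count);
	if (ret)
		return ret;

	for (i = 0; i < count; i++) {
		if (map_ops[i].status != GNTST_okay)
			continue;
		/* e.g. netback swaps in the page it should use from now on */
		if (cb)
			cb(data, i, pages[i]);
	}
	return 0;
}

The callback keeps the shim_domain-specific page handling out of the common
signature, which is the point of 1.a).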

I think there's value in both; 1) is probably more familiar for KVM users
(as it is similar to what vhost does?), while 2) equates to implementing
Xen disaggregation capabilities in KVM.

Thoughts? Xen maintainers, what's your take on this?

What I'd like best would be a new handle (e.g. xenhost_t *) used as an
abstraction layer for this kind of stuff. It should be passed to the
backends and those would pass it on to low-level Xen drivers (xenbus,
event channels, grant table, ...).

So, if I understand correctly, backends would use the xenhost layer to access
grants or frames referenced by grants, and that would map onto some of this.
IOW, you would have two implementations of xenhost: one for nested remote/local
events+grants and another for this "shim domain"?

As I'd need that for nested Xen I guess that would make it 3 variants.
Probably the xen-shim variant would need more hooks, but that should be
no problem.

I probably messed up in the short description, but "nested remote/local
events+grants" was referring to nested Xen (FWIW, remote meant L0 and local L1).
So maybe only 2 variants are needed?

I was planning to do that (the xenhost_t * stuff) soon in order to add
support for nested Xen using PV devices (you need two Xenstores for that
as the nested dom0 is acting as Xen backend server, while using PV
frontends for accessing the "real" world outside).

The xenhost_t should be used for:

- accessing Xenstore
- issuing and receiving events
- doing hypercalls
- grant table operations
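
Purely as a sketch with placeholder names (not an actual proposal), such a
handle could boil down to an ops table with one implementation per
environment (plain Xen, nested Xen, xen-shim):

/* Illustrative only; names and layout are placeholders. */
#include <xen/interface/event_channel.h>
#include <xen/interface/grant_table.h>

struct page;
typedef struct xenhost xenhost_t;

struct xenhost_ops {
	/* Xenstore access */
	int (*xs_read)(xenhost_t *xh, const char *path,
		       void **val, unsigned int *len);
	/* event channels */
	int (*evtchn_send)(xenhost_t *xh, evtchn_port_t port);
	/* hypercalls */
	long (*hypercall)(xenhost_t *xh, unsigned int op, void *arg);
	/* grant table operations */
	int (*grant_map)(xenhost_t *xh, struct gnttab_map_grant_ref *ops,
			 struct page **pages, unsigned int count);
};

struct xenhost {
	const struct xenhost_ops *ops;
	void *priv;		/* per-implementation state */
};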


In the text above, I sort of suggested a slice of this in 1.b) with a
translate_gref() and put_gref() API, i.e. to get the page from a gref. This was
because of the flags|host_addr hurdle we depicted above with regard to using
grant maps/unmaps. Do you think some of the xenhost layer would be amenable to
supporting this case?
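
For illustration, the 1.b) idea on top of such a layer might be as simple as
(hypothetical signatures, not an existing API):

struct page *xenhost_translate_gref(xenhost_t *xh, domid_t otherend,
				    grant_ref_t gref);
void xenhost_put_gref(xenhost_t *xh, domid_t otherend, grant_ref_t gref);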

I think so, yes.


So exactly the kind of stuff you want to do, too.

Cool idea!

In the end you might make my life easier for nested Xen. :-)

Hehe :)

Do you want to have a try with that idea or should I do that? I might be
able to start working on that in about a month.

Ankur (CC'ed) will take a shot at it, and should start a new thread on this
xenhost abstraction layer.

If you are up for it, it would be great to write some documentation too.
We are starting to have decent docs for some PV protocols, describing a
specific PV interface, but we are lacking docs about the basic building
blocks to bring up PV drivers in general. They would be extremely useful.

Agreed. These would be useful.

Given that you have just done the work, you are in a great position to
write those docs. Even bad English would be fine; I am sure somebody else
could volunteer to clean up the language. Anything would help :-)

Can't make any promises on this yet, but I will see if I can convert the
notes I made into something useful for the community.


Ankur


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel




