On 10/04/2019 08:55, Ankur Arora wrote:
> On 2019-04-08 10:04 p.m., Juergen Gross wrote:
>> On 08/04/2019 19:31, Joao Martins wrote:
>>> On 4/8/19 11:42 AM, Juergen Gross wrote:
>>>> On 08/04/2019 12:36, Joao Martins wrote:
>>>>> On 4/8/19 7:44 AM, Juergen Gross wrote:
>>>>>> On 12/03/2019 18:14, Joao Martins wrote:
>>>>>>> On 2/22/19 4:59 PM, Paolo Bonzini wrote:
>>>>>>>> On 21/02/19 12:45, Joao Martins wrote:
>>>>>>>>> On 2/20/19 9:09 PM, Paolo Bonzini wrote:
>>>>>>>>>> On 20/02/19 21:15, Joao Martins wrote:
>>>>>>>>>>> 2. PV Driver support (patches 17 - 39)
>>>>>>>>>>>
>>>>>>>>>>> We start by redirecting hypercalls from the backend to routines
>>>>>>>>>>> which emulate the behaviour that PV backends expect, i.e. grant
>>>>>>>>>>> table and interdomain events. Next, we add support for late
>>>>>>>>>>> initialization of xenbus, followed by implementing frontend/backend
>>>>>>>>>>> communication mechanisms (i.e. grant tables and interdomain event
>>>>>>>>>>> channels). Finally, we introduce xen-shim.ko, which will set up a
>>>>>>>>>>> limited Xen environment. This uses the added functionality of
>>>>>>>>>>> Xen-specific shared memory (grant tables) and notifications
>>>>>>>>>>> (event channels).
>>>>>>>>>>
>>>>>>>>>> I am a bit worried by the last patches, they seem really brittle and
>>>>>>>>>> prone to breakage. I don't know Xen well enough to understand if the
>>>>>>>>>> lack of support for GNTMAP_host_map is fixable, but if not, you have
>>>>>>>>>> to define a completely different hypercall.
>>>>>>>>>>
>>>>>>>>> I guess Ankur already answered this, so just to stack this on top of
>>>>>>>>> his comment.
>>>>>>>>>
>>>>>>>>> The xen_shim_domain() is only meant to handle the case where the
>>>>>>>>> backend has (or can have) full access to guest memory [i.e. netback
>>>>>>>>> and blkback would work with similar assumptions as vhost?]. For the
>>>>>>>>> normal case, where a backend *in a guest* maps and unmaps other
>>>>>>>>> guest memory, this is not applicable and these changes don't affect
>>>>>>>>> that case.
>>>>>>>>>
>>>>>>>>> IOW, the PV backend here sits on the hypervisor, and the hypercalls
>>>>>>>>> aren't actual hypercalls but rather invoke shim_hypercall(). The
>>>>>>>>> call chain would go more or less like:
>>>>>>>>>
>>>>>>>>> <netback|blkback|scsiback>
>>>>>>>>>  gnttab_map_refs(map_ops, pages)
>>>>>>>>>   HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,...)
>>>>>>>>>    shim_hypercall()
>>>>>>>>>     shim_hcall_gntmap()
>>>>>>>>>
>>>>>>>>> Our reasoning was that, given we are already in KVM, why map a page
>>>>>>>>> if the user (i.e. the kernel PV backend) is the kernel itself? The
>>>>>>>>> lack of GNTMAP_host_map is how the shim determines that its user
>>>>>>>>> doesn't want to map the page. Also, there's another issue where PV
>>>>>>>>> backends always need a struct page to reference the in-flight device
>>>>>>>>> data, as Ankur pointed out.
>>>>>>>>
>>>>>>>> Ultimately it's up to the Xen people. It does make their API uglier,
>>>>>>>> especially the in/out change for the parameter. If you can at least
>>>>>>>> avoid that, it would alleviate my concerns quite a bit.
>>>>>>>
>>>>>>> In my view, we have two options overall:
>>>>>>>
>>>>>>> 1) Make explicit the changes we have to make to the PV drivers in
>>>>>>> order to support xen_shim_domain(). This could mean e.g.
>>>>>>> a) add a callback argument to gnttab_map_refs() that is invoked for
>>>>>>> every page that gets looked up successfully, and inside this callback
>>>>>>> the PV driver may update its tracking page. Here we no longer have
>>>>>>> this in/out parameter in gnttab_map_refs, and all shim_domain-specific
>>>>>>> bits would be a little more abstracted from Xen PV backends. See the
>>>>>>> netback example below the scissors mark. Or b) have sort of a
>>>>>>> translate_gref() and put_gref() API that Xen PV drivers use, which
>>>>>>> makes it even more explicit that there are no grant ops involved. The
>>>>>>> latter is more invasive.
>>>>>>>
>>>>>>> 2) The second option is to support guest grant mapping/unmapping [*]
>>>>>>> to allow hosting PV backends inside the guest. This would remove the
>>>>>>> Xen changes in this series completely. But it would require another
>>>>>>> guest being used as netback/blkback/xenstored, and it would have lower
>>>>>>> performance than 1) (though, in theory, it would be equivalent to what
>>>>>>> Xen does with grants/events). The only change in Linux Xen code is
>>>>>>> adding xenstored domain support, but that is useful on its own outside
>>>>>>> the scope of this work.
>>>>>>>
>>>>>>> I think there's value in both; 1) is probably more familiar for KVM
>>>>>>> users (as it is similar to what vhost does?) while 2) equates to
>>>>>>> implementing Xen disaggregation capabilities in KVM.
>>>>>>>
>>>>>>> Thoughts? Xen maintainers, what's your take on this?
>>>>>>
>>>>>> What I'd like best would be a new handle (e.g. xenhost_t *) used as an
>>>>>> abstraction layer for this kind of stuff. It should be passed to the
>>>>>> backends and those would pass it on to low-level Xen drivers (xenbus,
>>>>>> event channels, grant table, ...).
>>>>>>
>>>>> So, if I understand correctly, backends would use the xenhost layer to
>>>>> access grants or frames referenced by grants, and that would hook into
>>>>> some of this. IOW, you would have two implementors of xenhost: one for
>>>>> nested remote/local events+grants and another for this "shim domain"?
>>>>
>>>> As I'd need that for nested Xen I guess that would make it 3 variants.
>>>> Probably the xen-shim variant would need more hooks, but that should be
>>>> no problem.
>>>>
>>> I probably messed up in the short description, but "nested remote/local
>>> events+grants" was referring to nested Xen (FWIW remote meant L0 and
>>> local L1). So maybe only 2 variants are needed?
>>
>> I need one xenhost variant for the "normal" case as today: talking to
>> the single hypervisor (or, in the nested case, to the L1 hypervisor).
>>
>> Then I need a variant for the nested case talking to the L0 hypervisor.
>>
>> And you need a variant talking to xen-shim.
>>
>> The first two variants can be active in the same system in case of
>> nested Xen: the backends of L2 dom0 are talking to the L1 hypervisor,
>> while its frontends are talking with the L0 hypervisor.
>
> Thanks, this is clarifying.
>
> So, essentially, backend drivers with a xenhost_t handle communicate
> with Xen low-level drivers etc. using the same handle; however, if they
> communicate with frontend drivers for accessing the "real" world, they
> exclusively use standard mechanisms (Linux or hypercalls)?

This should be opaque to the backends. The xenhost_t handle should have a
pointer to a function vector for relevant grant-, event- and
Xenstore-related functions.
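
As a rough illustration only (every name here is hypothetical, nothing is a
settled interface), such a handle with its function vector might look
something along these lines:

#include <linux/types.h>
#include <xen/interface/grant_table.h>
#include <xen/interface/event_channel.h>

struct page;
typedef struct xenhost xenhost_t;

/* Hypothetical function vector: one implementation per variant
 * ("normal", nested-L0, xen-shim). */
struct xenhost_ops {
	/* grant handling */
	int (*grant_map)(xenhost_t *xh, struct gnttab_map_grant_ref *ops,
			 struct page **pages, unsigned int count);
	int (*grant_unmap)(xenhost_t *xh, struct gnttab_unmap_grant_ref *ops,
			   struct page **pages, unsigned int count);

	/* interdomain event channels */
	int (*evtchn_send)(xenhost_t *xh, evtchn_port_t port);

	/* Xenstore access */
	int (*xs_read)(xenhost_t *xh, const char *path, void **value);
};

struct xenhost {
	const struct xenhost_ops *ops;	/* selects the variant behind the handle */
	void *priv;			/* per-variant private data */
};
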
Calls to such functions should be done via an inline function with the
xenhost_t handle being one parameter; that function will then call the
correct implementation.

> In this scenario L2 dom0 xen-netback and L2 dom0 xen-netfront should
> just be able to use Linux interfaces. But if L2 dom0 xenbus-backend
> needs to talk to L2 dom0 xenbus-frontend, then do you see them layered,
> or are they still exclusively talking via the standard mechanisms?

The distinction is made via the function vector in xenhost_t. So the only
change needed in the backends is the introduction of xenhost_t.

Whether we want to introduce xenhost_t in frontends, too, is TBD.


Juergen
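
A minimal sketch of the inline dispatch described above, again with made-up
names and building on the hypothetical xenhost_ops vector from the earlier
sketch; backends would call these wrappers instead of the grant/event
interfaces directly:

/* Backends call these thin wrappers with their xenhost_t handle; the
 * handle's function vector routes to the "normal", nested-L0 or xen-shim
 * implementation without the backend knowing which one it is. */
static inline int xenhost_grant_map(xenhost_t *xh,
				    struct gnttab_map_grant_ref *ops,
				    struct page **pages, unsigned int count)
{
	return xh->ops->grant_map(xh, ops, pages, count);
}

static inline int xenhost_evtchn_send(xenhost_t *xh, evtchn_port_t port)
{
	return xh->ops->evtchn_send(xh, port);
}

Under this assumption, a backend such as xen-netback would receive its
xenhost_t * at setup time and replace its direct grant/event-channel calls
with such wrappers, while frontends could keep using today's interfaces
unless xenhost_t is introduced there as well.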