Re: [PATCH v6 13/21] gunyah: vm_mgr: Introduce basic VM Manager

On Wed, Nov 2, 2022, at 19:44, Elliot Berman wrote:
> On 11/2/2022 12:31 AM, Arnd Bergmann wrote:
>>> +static long gh_dev_ioctl_create_vm(unsigned long arg)
>>> +{
>>> +	struct gunyah_vm *ghvm;
>>> +	struct file *file;
>>> +	int fd, err;
>>> +
>>> +	/* arg reserved for future use. */
>>> +	if (arg)
>>> +		return -EINVAL;
>> 
>> Do you have something specific in mind here? If 'create'
>> is the only command you support, and it has no arguments,
>> it would be easier to do it implicitly during open() and
>> have each fd opened from /dev/gunyah represent a new VM.
>> 
>
> I'd like the argument here to support different types of virtual 
> machines. I want to leave open what "different types" can be in case 
> something new comes up in the future, but immediately "different type" 
> would correspond to a few different authentication mechanisms for 
> virtual machines that Gunyah supports.
>
> In this series, I'm only supporting unauthenticated virtual machines 
> because they are the simplest to get up and running from a Linux 
> userspace. When I introduce the other authentication mechanisms, I'll 
> expand much more on how they work, but I'll give a quick overview here. 
> Other authentication mechanisms that are currently supported by Gunyah 
> are "protected VM" and, on Qualcomm platforms, "PIL/carveout VMs". 
> Protected VMs are *loosely* similar to the protected firmware design for 
> KVM and intended to support Android virtualization use cases. 
> PIL/carveout VMs are special virtual machines that can run on Qualcomm 
> firmware which authenticate in a way similar to remoteproc firmware (MDT 
> loader).

Ok, thanks for the background. Having different types of virtual
machines does mean that you may need some complexity, but I would
still lean towards using the simpler context management of opening
the /dev/gunyah device node to get a new context, and then using
ioctls on each fd to manage that context, instead of going through
the extra indirection of having a secondary 'open context' command
that always requires opening two file descriptors.
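
Roughly something like the following, as a minimal sketch of that model
(not the actual series; GH_VM_SET_TYPE and struct gh_vm_ctx are names made
up here purely for illustration): open() allocates the per-fd VM context,
and later ioctls, including one that selects the VM type, operate on that
fd directly.

/*
 * Sketch only: one VM context per open() of /dev/gunyah, assuming a
 * misc device.  GH_VM_SET_TYPE and struct gh_vm_ctx are hypothetical.
 */
#include <linux/fs.h>
#include <linux/ioctl.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/uaccess.h>

#define GH_VM_SET_TYPE	_IOW('G', 0, __u32)	/* hypothetical ioctl */

struct gh_vm_ctx {			/* hypothetical per-fd VM context */
	u32 type;			/* e.g. unauthenticated, protected */
};

static int gh_dev_open(struct inode *inode, struct file *filp)
{
	struct gh_vm_ctx *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return -ENOMEM;

	filp->private_data = ctx;	/* each open() is one VM context */
	return 0;
}

static long gh_dev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	struct gh_vm_ctx *ctx = filp->private_data;

	switch (cmd) {
	case GH_VM_SET_TYPE: {		/* pick the authentication scheme */
		u32 type;

		if (copy_from_user(&type, (void __user *)arg, sizeof(type)))
			return -EFAULT;
		ctx->type = type;
		return 0;
	}
	default:
		return -ENOTTY;
	}
}

static int gh_dev_release(struct inode *inode, struct file *filp)
{
	kfree(filp->private_data);
	return 0;
}

static const struct file_operations gh_dev_fops = {
	.owner		= THIS_MODULE,
	.open		= gh_dev_open,
	.unlocked_ioctl	= gh_dev_ioctl,
	.release	= gh_dev_release,
};

static struct miscdevice gh_dev = {
	.minor	= MISC_DYNAMIC_MINOR,
	.name	= "gunyah",
	.fops	= &gh_dev_fops,
};
module_misc_device(gh_dev);

MODULE_LICENSE("GPL");

Userspace then just does fd = open("/dev/gunyah", O_RDWR) and issues all
VM ioctls on that fd, with no separate "create VM" step and no second
file descriptor.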

>> I'm correct, you can just turn the entire bus/device/driver
>> structure within your code into simple function calls, where
>> the main code calls vm_mgr_probe() as an exported function
>> instead of creating a device.
>
> Ack. I can do this, although I am nervous about this snowballing into a 
> situation where I have a mega-module.
>
>  > Please stop beating everything in a single module.
>
> https://lore.kernel.org/all/250945d2-3940-9830-63e5-beec5f44010b@xxxxxxxxxx/

I see your concern, but I wasn't suggesting having everything
in one module either. There are three common ways of splitting
things into separate modules:

- I suggested having the vm_mgr module as a library block that
  exports a few symbols which get used by the core module. The
  module doesn't do anything on its own, but loading the core
  module forces loading the vm_mgr (see the sketch after this list).

- Alternatively one can do the opposite, and have symbols
  exported by the core module, with the vm_mgr module using
  it. This would make sense if you commonly have the core
  module loaded on virtual machines that do not need to manage
  other VMs.

- The method you have is to have a lower "bus" level that
  abstracts device providers from consumers, with both sides
  hooking into the bus. This makes sense for physical buses
  like PCI or USB where both the host driver and the function
  driver are unaware of implementation details of the other,
  but in your case it does not seem like a good fit.
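
To make the first of these concrete, here is a rough sketch under the
assumption that gh_vm_mgr_init() is the exported entry point (a
hypothetical name, not something from the series):

/* vm_mgr.c -- library module; it does nothing on its own */
#include <linux/export.h>
#include <linux/module.h>

int gh_vm_mgr_init(void)
{
	/* set up VM-manager state; no device or bus registration needed */
	return 0;
}
EXPORT_SYMBOL_GPL(gh_vm_mgr_init);

MODULE_LICENSE("GPL");

/* core.c -- core module; calling the exported symbol from its init
 * means loading the core module pulls vm_mgr.ko in through the normal
 * symbol dependency */
#include <linux/module.h>

extern int gh_vm_mgr_init(void);	/* normally declared in a shared header */

static int __init gh_core_init(void)
{
	return gh_vm_mgr_init();
}
module_init(gh_core_init);

MODULE_LICENSE("GPL");

Neither side needs a struct device or a bus; depmod records the symbol
dependency, and the second option is just this with the roles swapped.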

        Arnd


