Re: [PATCH] drivers: gpio: add virtio-gpio guest driver

On 17.06.21 05:59, Viresh Kumar wrote:

> Okay, we figured out now that you _haven't_ subscribed to the virtio
> lists and so your stuff never landed in anyone's inbox. But you did
> send something and didn't completely go away.

Actually, I am subscribed to the list. We've already had debates there,
including about your postings (but also other things). And the ASCII
version of the spec actually landed on the list last year; we had
discussions about it there.

I've just had the problem that my patches didn't go through, which is
very strange, since I actually am on the list and my other mails have
always gone through. I now suspect it's triggered by some subtle
difference between my regular mail client and git send-email.
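
Just a guess at the cause, in case others hit the same thing: if the
list software filters on the SMTP envelope sender, explicitly making
git send-email use the subscribed address should rule that out (the
address below is of course just a placeholder):

  # use the subscribed address both in the From: header and on
  # the SMTP envelope
  git config sendemail.from "subscribed-address@example.org"
  git config sendemail.envelopeSender "subscribed-address@example.org"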

> Since you started this all and still want to do it, I will take my
> patches back and let you finish with what you started. I will help
> review them.

Thank you very much.

Please don't get me wrong, I really don't want any kind of power play, I
just want a technically good solution. If there have been any
misunderstandings on that point, I'm officially saying sorry here.

Let's be friends.

You mentioned there are things missing in my spec. Please come forward
and tell us what exactly you're missing and what your use cases are.

Note that I've intentionally left out certain "more sophisticated"
functionality found on *some* GPIO controllers, e.g. per-line IRQ
masking or pinmux settings, for several reasons (a sketch of the
minimal core I do want follows the list):

* those features are only implemented by some hardware
* they are often implemented in, or at least need to be coordinated
  with, other pieces of hw (e.g. in SoCs, pinmux is usually done in a
  separate device)
* it shall be possible to support even the simplest devices and keep
  the more sophisticated things entirely optional; the minimum
  requirements for silicon implementations should be as low as
  possible (IOW: a minimal number of logic gates)
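
To make the last point concrete, here's a rough sketch of the kind of
minimal core I have in mind; the names and exact layout are
illustrative for discussion, not the literal spec text:

  /* Illustrative sketch of a minimal virtio-gpio wire format.
   * All names and the exact layout are for discussion only. */
  #include <stdint.h>

  enum vgpio_msg_type {
      VGPIO_MSG_SET_DIRECTION = 1,  /* value: 0 = in, 1 = out */
      VGPIO_MSG_GET_VALUE     = 2,
      VGPIO_MSG_SET_VALUE     = 3,
      /* per-line irq masking, pinmux etc. deliberately absent */
  };

  struct vgpio_request {
      uint16_t type;   /* one of enum vgpio_msg_type */
      uint16_t line;   /* 0-based line number */
      uint32_t value;  /* payload, meaning depends on type */
  };

  struct vgpio_response {
      uint16_t status; /* 0 = ok, nonzero = error */
      uint16_t pad;
      uint32_t value;  /* e.g. the line state for GET_VALUE */
  };

A device then only has to decode a fixed 8-byte request, which keeps
the required gate count for silicon implementations about as low as it
can get.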

>> You sound like a politician trying to push a hidden agenda, made
>> by some secret interest group in a back room, against the people -
>> like "resistance is futile".
>
> :)

Perhaps I was overreacting a bit there. But this really is the kind of
talk we've been hearing from politicians and corporate leaders for many
years, whenever they want to push through something we the people don't
want. Politicians use it as a social engineering tool for demotivating
any resistance. Over here in Germany it has even become a meme, and
folks from the CCC made a radio show about (and named after) it - the
German word is "alternativlos", in English: "without any alternative".
No idea about other countries, maybe it's a cultural issue, but over
here that kind of talk has become a red flag.

Of course, I never intended to accuse you of being one of those people.
Sorry if there's been a misunderstanding.


Let's get back to your implementation: you mentioned you're routing raw
virtio traffic into userland, to some other process (outside VMMs like
qemu). How exactly are you doing that?

That could be interesting for completely different scenarios. For
example, I'm currently exploring how to get VirGL running between
separate processes under the same kernel instance (for now we only
have the driver side inside the VM and the device outside it), meaning
driver and device run as separate processes.

The primary use case is containers that ship truly generic GPU drivers,
knowing nothing about the actual hardware on the host. Currently,
container workloads wanting to use a GPU need special drivers for
exactly the HW the host happens to have. This makes generic, portable
container images a tough problem.

I haven't dug deeply into the matter, but some "virtio-tap" transport
could be a relatively easy (probably not the most efficient) way to
solve this problem. In that scenario it would look like this (a purely
hypothetical sketch follows the list):

* a "virgl server" (could be some X or Wayland application, or a
  completely standalone compositor) opens the device-end of a
  "virtio-tap" transport and attaches its virtio-gpu device emulation
  to it
* "virtio-tap" now creates a driver-end; the kernel probes a
  virtio-gpu instance on it (also leading to a new DRI device)
* the container runtime picks up the new DRI device and maps it into
  the container(s)
  [ open question: whether one DRI device for many containers
    is enough ]
* a container application sees that virtio-gpu DRI device and speaks
  to it (mesa -> virgl backend)
* the "virgl server" receives buffers and commands via virtio and
  sends them to the host's GL or Gallium API.
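
To illustrate the first step: everything below is invented for
illustration (the device node, the ioctl number and the config struct
don't exist anywhere yet), but the device-end setup inside the "virgl
server" could look roughly like this:

  /* Purely hypothetical sketch: /dev/virtio-tap, the ioctl and
   * the struct layout are all invented for illustration. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  struct vtap_device_cfg {     /* hypothetical */
      uint32_t device_id;      /* virtio device ID, e.g. 16 (GPU) */
      uint32_t num_queues;     /* virtqueues to expose */
  };

  #define VTAP_CREATE_DEVICE _IOW('V', 0x00, struct vtap_device_cfg)

  int main(void)
  {
      struct vtap_device_cfg cfg = { .device_id = 16, .num_queues = 2 };
      int fd = open("/dev/virtio-tap", O_RDWR);   /* device-end */

      if (fd < 0 || ioctl(fd, VTAP_CREATE_DEVICE, &cfg) < 0) {
          perror("virtio-tap");
          return 1;
      }
      /* from here on: shovel virtqueue traffic between the fd and
       * the virtio-gpu device emulation */
      close(fd);
      return 0;
  }

The interesting design question is what happens after open(): whether
the virtqueue traffic is best exposed via read()/write(), mmap() or
something vhost-like. That's exactly where your experience with routing
virtio traffic into userland would help.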

Once we're there, we might consider whether it makes sense to put
virtio routing into kvm itself, instead of letting qemu catch page
faults and virtual irqs. We've yet to see whether that's a good idea,
but I can imagine some performance improvements here.



--mtx

--
---
Note: unencrypted e-mails can easily be intercepted and manipulated!
For confidential communication, please send me your GPG/PGP key.
---
Enrico Weigelt, metux IT consult
Free software and Linux embedded engineering
info@xxxxxxxxx -- +49-151-27565287


