On 07.08.22 at 21:10, Rob Clark wrote:
On Sun, Aug 7, 2022 at 11:05 AM Christian König
<ckoenig.leichtzumerken@xxxxxxxxx> wrote:
On 07.08.22 at 19:56, Rob Clark wrote:
On Sun, Aug 7, 2022 at 10:38 AM Christian König
<ckoenig.leichtzumerken@xxxxxxxxx> wrote:
[SNIP]
And exactly that was declared completely illegal the last time it came
up on the mailing list.
Daniel added a whole bunch of patches to the DMA-buf layer to make it
impossible for KVM to do this.
This issue isn't really with KVM; it is not making any CPU mappings
itself. KVM is just making the pages available to the guest.
Well I can only repeat myself: This is strictly illegal.
Please try this approach with CONFIG_DMABUF_DEBUG set. I'm pretty sure
you will immediately run into a crash.
See this here as well
https://elixir.bootlin.com/linux/v5.19/source/drivers/dma-buf/dma-buf.c#L653
Daniel intentionally added code to mangle the page pointers to make it
impossible for KVM to do this.
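Roughly, what that debug code does is the following (paraphrased from
the v5.19 source linked above, not a verbatim copy):

static void mangle_sg_table(struct sg_table *sg_table)
{
#ifdef CONFIG_DMABUF_DEBUG
        int i;
        struct scatterlist *sg;

        /* Scramble the page bits so an importer that abuses the
         * underlying struct page blows up immediately, but keep the
         * low SG_ chain/end bits intact so the table itself stays
         * walkable.  The mangling is undone again before the table
         * is handed back to the exporter.
         */
        for_each_sgtable_sg(sg_table, sg, i)
                sg->page_link ^= ~0xffUL;
#endif
}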
I don't believe KVM is using the sg table, so this isn't going to stop
anything ;-)
Then I have no idea how KVM actually works. Can you please briefly
describe that?
If the virtio/virtgpu UAPI was built around the idea that this is
possible, then it is most likely fundamentally broken.
How else can you envision mmap'ing to guest userspace working?
Well long story short: You can't.
See, userspace mappings are not persistent, but rather faulted in on
demand. The exporter is responsible for setting them up so that it can
add reverse tracking, and it can therefore invalidate those mappings
when the backing store changes.
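To illustrate what "faulted in on demand" means (just a sketch with
made-up names — my_buffer, my_buffer_pfn and friends are not from any
particular driver): the exporter installs a fault handler for the CPU
mapping and zaps the PTEs again whenever the backing store moves, so
the next access re-faults:

static vm_fault_t my_exporter_fault(struct vm_fault *vmf)
{
        struct my_buffer *bo = vmf->vma->vm_private_data;

        /* Resolve the *current* backing page for this offset. */
        return vmf_insert_pfn(vmf->vma, vmf->address,
                              my_buffer_pfn(bo, vmf->pgoff));
}

static const struct vm_operations_struct my_exporter_vm_ops = {
        .fault = my_exporter_fault,
};

/* When the backing store moves, zap all existing userspace PTEs;
 * the next CPU access faults in the new location.
 */
static void my_exporter_move_notify(struct my_buffer *bo)
{
        unmap_mapping_range(bo->mapping, 0, bo->size, 1);
}

A guest mapping that was set up behind the exporter's back is
completely outside of this mechanism.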
The guest kernel is the one that controls the guest userspace pagetables,
not the host kernel. I guess your complaint is about VMs in general,
but unfortunately I don't think you'll convince the rest of the
industry to abandon VMs ;-)
I'm not arguing against the usefulness of VMs; it's just that what you
describe here is, technically speaking, utter nonsense as far as I can
tell.
I have to confess that I'm completely missing how this KVM mapping
works, but if the struct page pointers from the sg_table are not used,
I see two possibilities for what was implemented here:
1. KVM is somehow walking the page tables to figure out what to map into
the guest VM.
This would be *HIGHLY* illegal, and not just with DMA-buf but with a
whole bunch of other drivers/subsystems as well.
In other words, it would be trivial for the guest to take over the host
that way, because it doesn't take into account that the underlying
backing store of DMA-buf and other mmap()ed areas can change at any
time.
2. The guest VM triggers the fault handler for the mappings to fill in
their page tables on demand.
That would actually work with DMA-buf, but then the guest needs to
somehow use the caching attributes from the host side and not its own,
because otherwise you can't accommodate the exporter changing those
caching attributes.
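For illustration only, continuing the sketch from above with the same
made-up names: the caching decision lives in the exporter's mmap path
on the host, and it can change when the buffer moves, so a guest
mapping that picked its own attributes can silently alias it:

static int my_exporter_mmap(struct dma_buf *dmabuf,
                            struct vm_area_struct *vma)
{
        struct my_buffer *bo = dmabuf->priv;

        /* Exporter-controlled caching attributes; e.g. write-combined
         * while the buffer sits in VRAM, cached while it sits in
         * system RAM.  A guest that chose its own attributes never
         * sees this change.
         */
        vma->vm_page_prot = bo->in_vram ?
                pgprot_writecombine(vm_get_page_prot(vma->vm_flags)) :
                vm_get_page_prot(vma->vm_flags);

        vma->vm_ops = &my_exporter_vm_ops;
        vma->vm_private_data = bo;
        return 0;
}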
But more seriously, let's take a step back here. What scenarios are
you seeing this being problematic for? Then we can see how to come up
with solutions. The current situation of the host userspace VMM just
guessing isn't great.
Well "isn't great" is a complete understatement. When KVM/virtio/virtgpu
is doing what I guess they are doing here then that is a really major
security hole.
And sticking our heads in the sand and
pretending VMs don't exist isn't great. So what can we do? I can
instead add an msm ioctl to return this info and solve the problem even
more narrowly for a single platform. But then the problem still
remains on other platforms.
Well once more: This is *not* MSM-specific; you just absolutely *can't
do that* for any driver!
I'm just really wondering what the heck is going on here, because all
of this was discussed at length on the mailing list before and very
bluntly rejected.
Either I'm missing something (that's certainly possible) or we have a
strong case of somebody implementing something without thinking about
all the consequences.
Regards,
Christian.
Slightly implicit in this is that mapping dma-bufs to the guest won't
work for anything that requires DMA_BUF_IOCTL_SYNC for coherency. We
could add a possible return value for DMA_BUF_INFO_VM_PROT indicating
that the buffer does not support mapping to the guest, or CPU access
without DMA_BUF_IOCTL_SYNC. Then at least the VMM can fail gracefully
instead of failing subtly.
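For completeness, the host userspace side of that bracketing with the
existing UAPI looks roughly like this (struct dma_buf_sync and
DMA_BUF_IOCTL_SYNC are existing UAPI; DMA_BUF_INFO_VM_PROT is of course
only the proposal from this series):

#include <linux/dma-buf.h>
#include <sys/ioctl.h>

static int cpu_access(int dmabuf_fd, void *map, size_t len)
{
        struct dma_buf_sync sync = {
                .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
        };

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
                return -1;

        /* ... CPU reads/writes through 'map' (up to 'len' bytes) ... */

        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}

The guest can't do this bracketing against the host's dma-buf, which is
why an explicit "don't map this to the guest" value would at least turn
a subtle failure into an obvious one.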
BR,
-R