On 14.01.22 at 18:40, Felix Kuehling wrote:
On 2022-01-14 at 12:26 p.m., Christian König wrote:
On 14.01.22 at 17:44, Daniel Vetter wrote:
Top post because I tried to catch up on the entire discussion here.
So fundamentally I'm not opposed to just closing this fork() hole once
and for all. The thing that worries me from an upstream/platform pov is
really only if we don't do it consistently across all drivers.
So maybe as an idea:
- Do the original patch, but not just for ttm but for all gem rendernode
drivers at least (or maybe even all gem drivers, no idea), with the
below discussion cleaned up as justification.
I know of at least one use case that this will break.
A couple of years back we had a discussion on the Mesa mailing list
because (IIRC) Marek introduced a background thread to push command
submissions to the kernel.
That broke because some compositor used to initialize OpenGL and then
do a fork(). This indeed worked previously (no GPUVM at that time),
but with the addition of the background thread it obviously broke.
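
To illustrate the failure mode (a purely hypothetical sketch, not the
actual compositor code): POSIX fork() duplicates only the calling
thread, so a background submission thread started during GL
initialization simply doesn't exist in the child.

#include <pthread.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static void *submission_thread(void *arg)
{
    (void)arg;
    for (;;)
        pause(); /* stand-in for pushing command submissions */
    return NULL;
}

int main(void)
{
    pthread_t worker;

    /* "Initialize OpenGL": the driver spawns its background thread. */
    pthread_create(&worker, NULL, submission_thread, NULL);

    if (fork() == 0) {
        /* Child: the worker was not duplicated by fork(); anything
         * that routes work through it now hangs or fails. */
        printf("child: background submission thread is gone\n");
        _exit(0);
    }

    wait(NULL); /* parent: reap the child, keeps working normally */
    return 0;
}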
The conclusion back then was that the compositor is broken and needs
fixing, but it still essentially means that there could be people out
there with really old userspace where this setting would just break
the desktop.
I'm not really against that change either, but at least in theory we
could make fork() work perfectly fine even with VMs and background
threads.
You may regret this if you ever try to build a shared virtual address
space between GPU and CPU. Then you have two processes (parent and
child) sharing the same render context and GPU VM address space. But the
CPU address spaces are different. You can't maintain consistent shared
virtual address spaces for both processes when the GPU address space is
shared between them.
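
To make that concrete, a small illustrative sketch (no GPU involved):
after fork() the two CPU address spaces evolve independently, so the
same logical allocation can land at different virtual addresses in
parent and child, while a shared GPU VM could only mirror one of them.

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

static void *map_pages(size_t n)
{
    return mmap(NULL, n * 4096, PROT_READ | PROT_WRITE,
                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        map_pages(1); /* unrelated activity shifts the child's layout */
        printf("child:  allocation at %p\n", map_pages(4));
        _exit(0);
    }

    printf("parent: allocation at %p\n", map_pages(4));
    waitpid(pid, NULL, 0);
    return 0;
}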
That's actually not much of a problem.
All you need to do is use pthread_atfork() and take the appropriate
action in the parent/child to clean up your context:
https://man7.org/linux/man-pages/man3/pthread_atfork.3.html
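
A rough sketch of what that could look like (the my_ctx_* helpers
below are hypothetical placeholders for the userspace driver's real
teardown/re-init logic, not an existing API):

#include <pthread.h>

static void my_ctx_quiesce(void)
{
    /* Before fork(): stop the background submission thread and flush
     * pending work, so no lock or queue is mid-operation. */
}

static void my_ctx_resume(void)
{
    /* Parent after fork(): restart the submission thread as-is. */
}

static void my_ctx_reinit(void)
{
    /* Child after fork(): drop the inherited device fd/context and
     * create a fresh one, so parent and child stop sharing GPU state. */
}

void my_driver_init(void)
{
    /* The handlers run around every subsequent fork() in this process:
     * the first before it, the other two after it in each side. */
    pthread_atfork(my_ctx_quiesce, my_ctx_resume, my_ctx_reinit);
}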
The rest is just making sure that all shared and all private data are
kept separate all the time. Sharing virtual memory has been done this
way for decades; it's just that nobody ever did it with a stateful
device like a GPU.
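
For reference, the decades-old mechanism in question, as a minimal
GPU-free sketch: MAP_SHARED memory stays shared across fork(), while
MAP_PRIVATE memory is copy-on-write and thus private to each process.

#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    int *private = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    *shared = 0;
    *private = 0;

    if (fork() == 0) {
        *shared = 1;  /* visible to the parent */
        *private = 1; /* copy-on-write: stays local to the child */
        _exit(0);
    }

    wait(NULL);
    printf("shared=%d private=%d\n", *shared, *private); /* 1 and 0 */
    return 0;
}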
Regards,
Christian.
Regards,
Felix