Re: [PATCH 0/6] RFC Support hot device unplug in amdgpu

On Wed, May 13, 2020 at 10:32:56AM -0400, Andrey Grodzovsky wrote:
> 
> On 5/11/20 5:54 AM, Daniel Vetter wrote:
> > On Sat, May 09, 2020 at 02:51:44PM -0400, Andrey Grodzovsky wrote:
> > > This RFC is more of a proof of concept than a fully working solution, as there are a few unresolved issues we are hoping to get advice on from people on the mailing list.
> > > Until now, removing a card either physically (e.g. an eGPU with a thunderbolt connection) or by emulation through sysfs (/sys/bus/pci/devices/device_id/remove)
> > > would cause random crashes in user apps. These crashes were mostly due to an app that had mapped a device-backed BO into its address space still
> > > trying to access the BO while the backing device was gone.
> > > To address this first problem, Christian suggested fixing the handling of mapped memory in the clients when the device goes away by forcibly unmapping all buffers
> > > the user processes have, i.e. by clearing their respective VMAs mapping the device BOs. Then, when the VMAs try to fill in the page tables again, we check in the fault handler
> > > whether the device was removed and, if so, return an error. This generates a SIGBUS to the application, which can then cleanly terminate.
> > > This was indeed done, but it in turn created a problem of kernel OOPSes: while the app was terminating because of the SIGBUS,
> > > it would trigger a use-after-free in the driver by accessing device structures that had already been released by the pci remove sequence.
> > > We handled this by introducing a 'flush' sequence during device removal, where we wait for the drm file reference count to drop to 0, meaning all user clients directly using this device have terminated.
> > > With this I was able to cleanly emulate device unplug with X and glxgears running, and later emulate plugging the device back in and restarting X and glxgears.
> > > 
> > > But this covers only part of the picture; as I see it, the full set of use cases, and the questions they raise, is as follows.
> > > 
> > > 1) Application accesses a BO by opening a drm file
> > > 	1.1) BO is mapped into the application's address space (BO is CPU visible) - this one we have a solution for: invalidate the BO's CPU mapping, causing SIGBUS
> > > 	     and termination, and wait for the drm file refcount to drop to 0 before releasing the device
> > > 	1.2) BO is not mapped into the application's address space (BO is CPU invisible) - no solution yet, because how do we force the application to terminate in this case?
> > > 
> > > 2) Application accesses a BO by importing a DMA-BUF
> > > 	2.1)  BO is mapped into the application's address space (BO is CPU visible) - the solution is the same as 1.1, but instead of waiting for the drm file release we wait for the
> > > 	      imported dma-buf's file release
> > > 	2.2)  BO is not mapped into the application's address space (BO is CPU invisible) - our solution is to invalidate the GPUVM page tables and destroy the backing storage for
> > >               all exported BOs, which will in turn cause VM faults in the importing device; then, when the importing driver tries to re-attach the imported BO to
> > > 	      update mappings, we return -ENODEV in the import hook, which hopefully will cause the user app to terminate.
> > > 
> > > 3) Application opens a drm file or imports a dma-buf and holds a reference, but never accesses any BO, or does access BOs but never again after the device was unplugged - how would we
> > >     force this application to terminate before proceeding with the device removal code? Otherwise the wait in pci remove just hangs forever.
> > > 
> > > The attached patches address 1.1, 2.1 and 2.2. For now only 1.1 is fully tested and I am still testing the others, but I would be happy for any advice on all the
> > > described use cases, and maybe an alternative and better (more generic) approach, like obtaining the PIDs of the relevant processes through some reverse
> > > mapping from the device file and exported dma-buf files and sending them SIGKILL - would this make more sense, or is there another method?
> > > 
> > > Patches 1-3 address 1.1
> > > Patch 4 addresses 2.1
> > > Patches 5-6 address 2.2
> > > 
> > > Reference: https://gitlab.freedesktop.org/drm/amd/-/issues/1081
> > So we've been working on this problem for a few years already (but it's
> > still not solved), I think you could have saved yourselves some typing.
> > 
> > Bunch of things:
> > - we can't wait for userspace in the hotunplug handlers, that might never
> >    happen. The correct way is to untangle the lifetime of your hw driver
> >    for a specific struct pci_device from the drm_device lifetime.
> >    Infrastructure is all there now, see drm_dev_get/put, drm_dev_unplug and
> >    drm_dev_enter/exit.
> 
> this
> 
> To be sure I understood you - do you mean that we should disable/shut down
> any HW-related stuff (disabling interrupts, programming any shutdown-related
> device registers, unmapping IO regions) during the pci remove sequence
> (in our case amdgpu_pci_remove), while keeping all the drm/amdgpu structures
> around in memory until the drm_dev_put refcount drops to 0 and
> &drm_driver.release is called, thus avoiding any use-after-free oopses when
> the last user reference is dropped ?

Yes.
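
Very roughly the split looks something like the sketch below (names are made
up, this is just the shape of it, not the actual amdgpu code):

static void my_pci_remove(struct pci_dev *pdev)
{
	struct drm_device *ddev = pci_get_drvdata(pdev);

	/* From here on drm_dev_enter() fails and ioctls return -ENODEV. */
	drm_dev_unplug(ddev);

	/* Disable interrupts, unmap BARs, etc., but free no SW state here. */

	/* The last reference (open files, dma-bufs, ...) triggers release. */
	drm_dev_put(ddev);
}

static void my_driver_release(struct drm_device *ddev)
{
	/* Only here tear down schedulers, free driver structures, ... */
}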

> Is there any point in doing any HW programming to shut down the device if the
> device is already removed anyway? (I assume that if the driver hook for pci
> remove is called and it's a physical removal, the device is already gone, no?)

No, all that does is result in bus timeouts, which take forever. Plus
increased chances that the driver gets confused about the values it reads
(for pci all you get is 0xffffffff, no error value, usb is a lot better
here because it's explicit packets and streams where you can get explicit
errors and bail out).

The trouble is that developers still expect you to shut down hw when they
unload the driver for testing, but I think we've discussed a reasonable
solution for that for pci drivers somewhere in this thread.
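
Just as an illustration, one possible shape of that check in the remove path
(my_hw_fini/adev are made-up names, and pci_device_is_present() only re-reads
the vendor ID, so treat it as a heuristic rather than a guarantee):

	if (pci_device_is_present(pdev)) {
		/* Plain driver unload: developers expect a full HW shutdown. */
		my_hw_fini(adev);
	} else {
		/* Physical unplug: reads return 0xffffffff, skip the HW side. */
		dev_info(&pdev->dev, "device gone, skipping HW shutdown\n");
	}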

> What happens if the drm_device refcount doesn't drop to 0 before the device
> is plugged back into the system ? In that case I have duplicates of all the
> device structures in the system. Do you expect this not to be a problem, or,
> if it is, is it up to me to resolve, I guess ?

You get another one. Same way you get duplicates if there's actually 2
devices plugged in, so if your driver supports multiple gpus already you
should be fine.

We should also not hang onto chardev minor numbers, so I think you should
be getting the same minor number again. But not 100% sure, maybe something
we might need to fix ...

> > A bunch of usb/spi drivers use this 100% correctly
> >    now, so there's examples. Plus kerneldoc explains stuff.
> 
> 
> Would you say tiny drm drivers are a good example ?

Yup. But also, they're tiny, so there's lots more complexity in amdgpu that
they don't even cover. But for the basic flow of using drmm and devm and the
functions I mentioned above, they should be the most bug-free drivers we
have.

> > - for a big driver like amdgpu doing this split up is going to be
> >    horrendously complex. I know, we've done it for i915, at least
> >    partially.
> 
> 
> Can you point me to relevant code/commits for i915 ?

Anything touching the i915 drm_driver.release function. Or any function
called from there.

> > I strongly recommend that you're using devm_ for managing hw
> >    related resources (iomap, irq, ...) as much as possible.
> 
> 
> From what I saw, the devres usage in DRM amounts to using
> devm_drm_dev_init/devm_drm_dev_init_release - is that what you mean? If so,
> I see that devm_drm_dev_init_release just calls drm_dev_put, and
> drm_dev_unplug ends up calling devm_drm_dev_init_release through the devres
> infrastructure. We already call drm_dev_unplug in amdgpu_pci_remove, and we
> also directly call drm_dev_put there, so I am not clear what the added value
> of using devm is here?

There's a lot more to devres, but yes that's the drm_device one. For the
full list of what can all be managed with devres see

https://dri.freedesktop.org/docs/drm/driver-api/driver-model/devres.html

There's lots of example usage in drm, especially tiny drivers and anything
that runs on arm. Only caveat is that any usage of devm_kzalloc is buggy
(at least in drm, as far as I've checked them).
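
For the hw-facing resources the pattern is roughly the following (made-up
names, just to show the split between device-managed hw resources and
structures that must live as long as the drm_device):

static irqreturn_t my_irq_handler(int irq, void *arg)
{
	return IRQ_HANDLED;	/* stub, just for the sketch */
}

static int my_hw_init(struct drm_device *ddev, struct pci_dev *pdev)
{
	void __iomem *mmio;
	int ret;

	/* Unmapped automatically when the pci device is unbound. */
	mmio = devm_ioremap(&pdev->dev, pci_resource_start(pdev, 0),
			    pci_resource_len(pdev, 0));
	if (!mmio)
		return -ENOMEM;

	/* Same for the irq, it's freed on unbind. */
	ret = devm_request_irq(&pdev->dev, pdev->irq, my_irq_handler, 0,
			       "mydrv", ddev);
	if (ret)
		return ret;

	/* But no devm_kzalloc() for anything the drm_device points at,
	 * that memory has to stay around until the last drm_dev_put(). */
	return 0;
}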

> >    For drm_device resources (mostly structures and everything related to
> >    that) we've just merged the drmm_ managed resources framework. There's
> >    some more work to be done there for various kms objects, but you can at
> >    least somewhat avoid tedious handrolling for everything internal
> >    already.
> 
> 
> I can't find drmm in the code, can you point me please ?

drm_managed.c, you need latest drm-next I think. Or linux-next. Docs here:

https://dri.freedesktop.org/docs/drm/gpu/drm-internals.html#managed-resources
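
A minimal drmm sketch (made-up names, assumes the new drm_managed.h from
drm-next): anything allocated with drmm_kzalloc() or registered with
drmm_add_action_or_reset() is released when the drm_device itself is freed,
i.e. after the last drm_dev_put(), not at pci remove time.

struct my_state {
	int foo;
};

static void my_state_fini(struct drm_device *ddev, void *ptr)
{
	/* Runs from the final drm_dev_put(), after all users are gone. */
}

static int my_state_init(struct drm_device *ddev)
{
	struct my_state *st;

	st = drmm_kzalloc(ddev, sizeof(*st), GFP_KERNEL);
	if (!st)
		return -ENOMEM;

	return drmm_add_action_or_reset(ddev, my_state_fini, st);
}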

> 
> > 
> >    Don't ever use devm_kzalloc and friends, I've looked at hundreds of uses
> >    of this in drm, they're all wrong.
> > 
> > - dma-buf is hilarious (and atm unfixed), dma-fence is even worse. In
> >    theory they're already refcounted and all and so should work, in
> >    practice I think we need to refcount the underlying drm_device with
> >    drm_dev_get/put to avoid the worst fall-out.
> > 
> > - One unfortunate thing with drm_dev_unplug is that the driver core is
> >    very opinionated and doesn't tell you whether it's a hotunplug or a
> >    driver unload. In the former case trying to shut down hw just wastes
> >    time (and might hit driver bugs), in the latter case driver engineers
> >    very much expect everything to be shut down.
> > 
> >    Right now you can only have one or the other, so this needs a module
> >    option hack or similar (default to the correct hotunplug behaviour for
> >    users).
> > 
> > - SIGBUS is better than crashing the kernel, but it's not even close for
> >    users. They still lose everything, because everything crashes: in my
> >    experience, in practice, no one ever handles errors. There are a few
> >    more things on top:
> > 
> >    - sighandlers are global, which means only the app can use them. You can't
> >      use them in e.g. mesa. They're also not composable, so if you have one
> >      sighandler for gpu1 and a 2nd one for gpu2 (could be a different vendor)
> >      it's all sadness. Hence "userspace will handle SIGBUS" won't work.
> > 
> >    - worse, neither vk nor gl (to my knowledge) have a concept of events
> >      for when the gpu died. The only stuff you have is things like
> >      arb_robustness which says a) everything continues as if nothing
> >      happened b) there's a function where you can ask whether your gl
> >      context and all the textures/buffers are toast.
> > 
> >      I think that's about the only hotunplug application model we can
> >      realistically expect applications to support. That means _all_ errors
> >      need to be silently eaten by either mesa or the kernel. On i915 the
> >      model (also for terminally wedged gpu hangs) is that all ioctl keep
> >      working, mmaps keep working, and execbuf gives you an -EIO (which mesa
> >      eats silently iirc for arb_robustness).
> > 
> >    Conclusion is that SIGBUS is imo a no-go, and the only option we have is
> >    that a) mmaps fully keep working, doable for shmem or b) we put some
> >    fake memory in there (for vram or whatever), maybe even only a single
> >    page for all fake memory.
> > 
> > - you probably want arb_robustness and similar stuff in userspace as a
> >    first step.
> > 
> > tldr;
> > - refcounting, not waiting for userspace
> > - nothing can fail because userspace won't handle it
> 
> 
> Regarding 'nothing can fail': I see in the tiny drm driver examples (e.g.
> ili9225_pipe_enable) that any function which is about to do HW programming
> checks drm_dev_enter and silently returns if the device is not present - is
> that what you mean, that I should pepper all of the amdgpu code such that any
> function that ends up doing HW programming is guarded with
> drm_dev_enter/exit, silently returning in case the device is gone?

Yup.

Note that e.g. usb, because it's a packet/stream bus, gives you explicit
errors, so those drivers don't need the drm_dev_enter/exit. But for pci we
need them to make sure we're not wasting time on everything first timing out
when the hw is gone.
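
Applied to a hypothetical register-write helper (names made up, the
drm_device is assumed to be embedded in the driver struct), the pattern from
the tiny drivers looks like this:

static void my_mmio_write(struct my_device *mdev, u32 reg, u32 val)
{
	int idx;

	/* Bail out silently once drm_dev_unplug() has been called. */
	if (!drm_dev_enter(&mdev->ddev, &idx))
		return;

	writel(val, mdev->mmio + reg);

	drm_dev_exit(idx);
}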

> Thanks a lot for your detailed response.

np.

Cheers, Daniel

> 
> Andrey
> 
> 
> > 
> > That's at least my take on this mess, and what we've been pushing for over
> > the past few years. For kms-only drm_driver we should have achieved that
> > by now (plus/minus maybe some issues for dma-buf/fences, but kms-only
> > dma-buf/fences are simple enough that maybe we don't go boom yet).
> > 
> > For big gpus with rendering I think the best next step would be to type up a
> > reasonable Grand Plan (into Documentation/gpu/todo.rst) with all the issues
> > and likely solutions. And then bikeshed that, since the above is just my
> > take on all this.
> > 
> > Cheers, Daniel
> > 
> > > Andrey Grodzovsky (6):
> > >    drm/ttm: Add unampping of the entire device address space
> > >    drm/amdgpu: Force unmap all user VMAs on device removal.
> > >    drm/amdgpu: Wait for all user clients
> > >    drm/amdgpu: Wait for all clients importing out dma-bufs.
> > >    drm/ttm: Add destroy flag in TTM BO eviction interface
> > >    drm/amdgpu: Use TTM MMs destroy interface
> > > 
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu.h         |  3 ++
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_debugfs.c |  2 +-
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c  |  7 +++-
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 27 ++++++++++++-
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c     | 22 ++++++++--
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_job.c     |  9 +++++
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c     |  4 ++
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_object.c  | 17 +++++++-
> > >   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |  1 +
> > >   drivers/gpu/drm/nouveau/nouveau_drm.c       |  2 +-
> > >   drivers/gpu/drm/qxl/qxl_object.c            |  4 +-
> > >   drivers/gpu/drm/radeon/radeon_object.c      |  2 +-
> > >   drivers/gpu/drm/ttm/ttm_bo.c                | 63 +++++++++++++++++++++--------
> > >   drivers/gpu/drm/vmwgfx/vmwgfx_drv.c         |  6 +--
> > >   include/drm/ttm/ttm_bo_api.h                |  2 +-
> > >   include/drm/ttm/ttm_bo_driver.h             |  2 +
> > >   16 files changed, 139 insertions(+), 34 deletions(-)
> > > 
> > > -- 
> > > 2.7.4
> > > 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch