On 13.12.18 at 18:26, Daniel Vetter wrote:
>>> Code sharing just because the code looks similar is imo a really
>>> bad idea, when the semantics are entirely different (that was also the
>>> reason behind not reusing all the cpu event stuff for dma_fence, they're
>>> not normal cpu events).
>> Ok, the last sentence is what I don't understand.
>>
>> What exactly is the semantic difference between the dma_fence_wait and
>> the wait_event interface?
>>
>> I mean the wait_event interface was introduced to prevent drivers from
>> openly coding an event interface and getting it wrong all the time.
>>
>> So a good part of the bugs we have seen around waiting for dma-fences
>> are exactly why wait_event was invented in the first place.
>>
>> The only big thing I can see missing in the wait_event interface is
>> waiting for many events at the same time, but that should be a rather
>> easy addition.
> So this bikeshed was years ago, maybe I should type a patch to
> document it, but as far as I remember the big difference is:
>
> - wait_event and friends generally Just Work. It can go wrong of
> course, but the usual pattern is that the waker side does an
> unconditional wake_up_all, and hence all the waiter needs to do is add
> themselves to the waiter list.
>
> - dma_buf otoh is entirely different: We wanted to support all kinds
> of signalling modes, including having interrupts disabled by default
> (not sure whether we actually achieve this still with all the cpu-side
> scheduling the big drivers do). Which means the waker does not
> unconditionally call wake_up_all, at least not in a timely manner, and
> waiters need to call dma_fence_enable_signalling before they can add
> themselves to the waiter list and call schedule().

Well, that is not something I'm questioning, because we really need this
behavior as well. But all of this can be perfectly implemented on top of
wake_up_all.

> The other big difference is how you check for the classic wakeup races
> where the event happens between when you checked for it and when you
> go to sleep. Because hw is involved, the rules are again a bit
> different, and they're different between drivers because hw is
> incoherent/broken in all kinds of ways. So there are also really tricky
> things going on between adding the waiter to the waiter list and
> dma_fence_enable_signalling. For pure cpu events you can ignore this
> and bake the few necessary barriers into the various macros; dma_fence
> needs more.

Ah, yes, I think I know what you mean with that, and I consider it a bad
idea as well. Only very few drivers actually need this behavior, and the
ones that do should be perfectly able to implement it inside the driver
code.

The crux is that leaking this behavior into the dma-fence made it
unnecessarily complicated and resulted in quite a bunch of unnecessary
irq_work and delayed_work usage.

I will take a look at this over the holidays. Shouldn't be too hard to
fix, and it actually has some value beyond being just a nice cleanup.

Regards,
Christian.

>
> Adding Maarten, maybe there was more. I definitely remember huge & very
> long discussions about all this.
> -Daniel
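
Below is a minimal sketch of the kind of layering Christian argues for
above: a dma_fence wait built directly on wait_event/wake_up_all, with
dma_fence_enable_sw_signaling (spelled dma_fence_enable_signalling in the
mail) called before sleeping. This is not how the in-tree dma_fence code
works today (it keeps its own callback list and a hand-rolled default
wait); the per-fence wait_queue_head_t and the my_fence_* names are made
up purely for illustration.

#include <linux/dma-fence.h>
#include <linux/wait.h>

/* Hypothetical fence with its own wait queue; the real struct dma_fence
 * has no such field. The queue must be set up with init_waitqueue_head()
 * when the fence is created. */
struct my_fence {
        struct dma_fence base;
        wait_queue_head_t wait;
};

/* Signaling side: mark the fence signaled, then do an unconditional
 * wake_up_all, just like a normal wait_event user would. */
static void my_fence_do_signal(struct my_fence *f)
{
        dma_fence_signal(&f->base);
        wake_up_all(&f->wait);
}

/* Waiting side: first ask the driver to actually deliver the signal
 * (e.g. enable the completion interrupt), then let wait_event_timeout
 * handle the check-then-sleep race with its built-in barriers. */
static long my_fence_wait(struct my_fence *f, long timeout)
{
        dma_fence_enable_sw_signaling(&f->base);

        return wait_event_timeout(f->wait,
                                  dma_fence_is_signaled(&f->base),
                                  timeout);
}

In this sketch the interrupts-off-by-default mode Daniel describes would
still be possible: the driver only starts calling my_fence_do_signal once
a waiter has asked for delivery via dma_fence_enable_sw_signaling, so no
wake_up_all happens at all until somebody is actually waiting.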