Hello,

On Wed, Jul 06, 2022 at 06:18:28AM +1000, Imran Khan wrote:
> In this case, the point of using llist would be to avoid taking the locks in
> consumer.

Given that the consumer can dispatch the whole list, I doubt that's worth the
complication.

> Hmm. My idea was that eventually we will never run into situation where multiple
> producers will end up adding the same node because as soon as first producer
> adds the node (the other potential adders are spinning on kernfs_notify_lock),
> kn->attr.notif_next.next will get a non-NULL value and checking
> (kn->attr.notify_next.next != NULL) will avoid the node getting re-added.

So, here, I don't see how llist can be used without a surrounding lock, and I
don't see much point in using llist if we need to use a lock anyway. If this
needs to be made scalable, we need a different strategy (e.g. a per-cpu lock /
pending list can be an option).

I'm a bit swamped with other stuff and will likely be less engaged from now
on. I'll try to review patches where possible.

Thanks.

--
tejun