On Wed, Dec 4, 2019 at 6:01 PM Bartosz Golaszewski <brgl@xxxxxxxx> wrote:
>
> From: Bartosz Golaszewski <bgolaszewski@xxxxxxxxxxxx>
>
> The read_lock mutex is supposed to prevent collisions between reading
> and writing to the line event kfifo but it's actually only taken when
> the events are being read from it.
>
> Drop the mutex entirely and reuse the spinlock made available to us in
> the waitqueue struct. Take the lock whenever the fifo is modified or
> inspected. Drop the call to kfifo_to_user() and instead first extract
> the new element from kfifo when the lock is taken and only then pass
> it on to the user after the spinlock is released.
>

My comments below.

> +	spin_lock(&le->wait.lock);
> 	if (!kfifo_is_empty(&le->events))
> 		events = EPOLLIN | EPOLLRDNORM;
> +	spin_unlock(&le->wait.lock);

Sounds like a candidate for a kfifo_is_empty_spinlocked() helper.

> 	struct lineevent_state *le = filep->private_data;
> -	unsigned int copied;
> +	struct gpioevent_data event;
> 	int ret;

> +	if (count < sizeof(event))
> 		return -EINVAL;

This still has an issue with compat syscalls. See the patch I sent
recently. I don't know which way you see as better: a) apply mine and
rebase your series on top, or b) otherwise. I can do b) if you think
it shouldn't be backported.

Btw, either way this is a benefit for the follow-up (I see you drop
kfifo_to_user() and add an event variable on the stack).

> +	return sizeof(event);

Also see the comments in my patch regarding the event handling.

-- 
With Best Regards,
Andy Shevchenko