On Wed, 25 Dec 2024 at 01:44, WangYuli <wangyuli@xxxxxxxxxxxxx> wrote:
>
> +config PIPE_SKIP_SLEEPER
> +       bool "Skip sleeping processes during pipe read/write"

If the optimization is correct, there is no point to having a config
option.

If the optimization is incorrect, there is no point to having the code.

Either way, there's no way we'd ever have a config option for this.

> +pipe_check_wq_has_sleeper(struct wait_queue_head *wq_head)
> +{
> +       return wq_has_sleeper(wq_head);

So generally, the reason this is buggy is that it's racy: either
because of CPU memory ordering issues, or simply because the sleeper
is going to sleep but hasn't *quite* added itself to the wait queue
yet.

Which is why the wakeup path takes the wq head lock, and that lock is
the only thing you are actually optimizing away.

> +       if (was_full && pipe_check_wq_has_sleeper(&pipe->wr_wait))
>                 wake_up_interruptible_sync_poll(&pipe->wr_wait, EPOLLOUT | EPOLLWRNORM);

End result: you need to explain why the race cannot exist. And I
think your patch is simply buggy, because the race can in fact exist:
there is no lock other than the waitqueue lock that enforces memory
ordering and protects against the "somebody is just about to sleep"
race.

That said, *if* all the wait queue work were done under the pipe
mutex, we could use the pipe mutex for exclusion instead.

That's actually close to how it used to work long long ago (not
really, but close: it originally depended on the BKL, and on adding
waiters to the wait-queues early, before all the tests for
full/empty, and avoiding the races that way).

But now waiters use "wait_event_interruptible_exclusive()" explicitly
outside the pipe mutex, so the waiters and wakers aren't actually
serialized.

And no, you are unlikely to see these races in practice (particularly
the memory ordering ones). So you have to *think* about them, not
test them.

           Linus
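
A minimal userspace sketch of the interleaving described above, to
make the window concrete. This is illustrative only, not pipe.c code:
"buffer_full" stands in for the pipe-full condition, "sleepers" for
waitqueue_active(), and the unlocked sleepers check for the patch's
pipe_check_wq_has_sleeper(). Build with "cc -pthread race-sketch.c".

/* race-sketch.c: the lost-wakeup window that opens when the waker
 * peeks at the wait queue without taking its lock. All names here
 * are hypothetical, invented for illustration. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool buffer_full = true;  /* "pipe is full" condition      */
static atomic_int  sleepers    = 0;     /* stand-in for waitqueue_active */

/* Writer side, mirroring the wait_event_*() pattern: test the
 * condition, queue yourself, then re-test before really sleeping. */
static void *writer_thread(void *unused)
{
        if (atomic_load_explicit(&buffer_full, memory_order_relaxed)) {
                atomic_store_explicit(&sleepers, 1, memory_order_relaxed);
                /*
                 * The reader can run entirely between the first test
                 * and this re-test: it clears buffer_full, sees
                 * sleepers == 0, and skips the wakeup. And with only
                 * relaxed ordering, this re-test may still see a stale
                 * "full" even after the store to sleepers above.
                 * Either way, a real waiter would now block with
                 * nobody left to wake it.
                 */
                while (atomic_load_explicit(&buffer_full, memory_order_relaxed))
                        ;       /* stand-in for schedule(); a real waiter blocks */
                atomic_store_explicit(&sleepers, 0, memory_order_relaxed);
        }
        return NULL;
}

/* Reader side, with the patch's "optimization" applied. */
static void *reader_thread(void *unused)
{
        /* Make space... */
        atomic_store_explicit(&buffer_full, false, memory_order_relaxed);
        /* ...then peek at the wait queue WITHOUT taking its lock.
         * No lock means nothing orders this load against the writer's
         * store to sleepers, nor the writer's re-test against our
         * store above. */
        if (atomic_load_explicit(&sleepers, memory_order_relaxed))
                printf("wake_up()\n");  /* the wakeup that may be skipped */
        return NULL;
}

int main(void)
{
        pthread_t w, r;

        pthread_create(&w, NULL, writer_thread, NULL);
        pthread_create(&r, NULL, reader_thread, NULL);
        pthread_join(w, NULL);
        pthread_join(r, NULL);
        return 0;
}

In the real kernel path, the spinlock that
wake_up_interruptible_sync_poll() takes on the wq head is what orders
the condition update against the waitqueue check; the patch skips the
wakeup, and with it that lock, based on exactly the kind of unlocked
peek shown here. And as the mail says, this sketch will almost never
misbehave when you actually run it, which is the point: the race has
to be reasoned about, not tested for.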