Re: [patch 13/15] epoll: check ep_events_available() upon timeout

On Mon, Nov 2, 2020 at 12:08 PM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Sun, Nov 1, 2020 at 5:08 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> > After abc610e01c66 ("fs/epoll: avoid barrier after an epoll_wait(2)
> > timeout"), we break out of the ep_poll loop upon timeout, without checking
> > whether there is any new events available.  Prior to that patch-series we
> > always called ep_events_available() after exiting the loop.
>
> This patch looks overly complicated to me.
>
> It does the exact same thing as the "break" does, except:
>
>  - it does it the non-optimized way without the "avoid spinlock"
>
>  - it sets eavail only if there was anything available
>
> It would seem like the *much* simpler patch is to just do this oneliner instead:
>
>     diff --git a/fs/eventpoll.c b/fs/eventpoll.c
>     index 4df61129566d..29fa770ce1e3 100644
>     --- a/fs/eventpoll.c
>     +++ b/fs/eventpoll.c
>     @@ -1907,6 +1907,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
>
>                 if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS)) {
>                         timed_out = 1;
>     +                   eavail = 1;
>                         break;
>                 }
>
> and *boom* you're done. That will mean that after a timeout we'll try
> one more time to just do that ep_events_available() thing.

Thank you for the suggestion. That was the first version I tried, and
I can confirm it fixes the race, because we call ep_send_events() once
more before returning.  Note, though, that because timed_out is 1 we
won't go back to fetch_events to call ep_events_available():

        if (!res && eavail &&
            !(res = ep_send_events(ep, events, maxevents)) && !timed_out)
                goto fetch_events;

> I can see no downside to just setting eavail unconditionally and
> keeping the code much simpler. Hmm?

You're spot on that the patch is more complicated than your
suggestion.  However, the downside I observed was a performance
regression in the non-racy case: suppose there are a few threads with
similar non-zero timeouts and no ready events.  With ep_send_events()
called unconditionally, they all contend in ep_scan_ready_list().  The
contention was large because each call takes two write locks on
ep->lock and one mutex lock on ep->mtx, with a large critical section.
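
To make that concrete, the locking pattern in ep_scan_ready_list()
looks roughly like this (a simplified sketch from memory of the
~v5.9 code, not the exact function):

     static __poll_t ep_scan_ready_list(struct eventpoll *ep, ...)
     {
             LIST_HEAD(txlist);
             __poll_t res;

             /* serializes against epoll_ctl() and other scanners */
             mutex_lock(&ep->mtx);

             /* 1st write lock: steal the ready list into txlist */
             write_lock_irq(&ep->lock);
             list_splice_init(&ep->rdllist, &txlist);
             WRITE_ONCE(ep->ovflist, NULL);
             write_unlock_irq(&ep->lock);

             /* copy events to userspace while holding only ep->mtx */
             res = (*sproc)(ep, &txlist, priv);

             /* 2nd write lock: put ovflist/txlist leftovers back */
             write_lock_irq(&ep->lock);
             /* ... re-insert ep->ovflist items, splice txlist back ... */
             write_unlock_irq(&ep->lock);

             mutex_unlock(&ep->mtx);
             return res;
     }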

FWIW, I also tried the following idea to eliminate that contention,
but I couldn't prove it correct because of the gap between calling
ep_events_available() and removing this thread from the wait queue.
That is why I resorted to calling __remove_wait_queue() under the
lock.

     diff --git a/fs/eventpoll.c b/fs/eventpoll.c
     index 4df61129566d..29fa770ce1e3 100644
     --- a/fs/eventpoll.c
     +++ b/fs/eventpoll.c
     @@ -1907,6 +1907,7 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,

                 if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS)) {
                         timed_out = 1;
     +                   eavail = ep_events_available(ep);
                         break;
                 }
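
Concretely, the interleaving I could not rule out with that version is
roughly the following (hand-drawn, my reading of the window; the
waiters on ep->wq are exclusive):

     /*
      *   waiter (ep_poll)                      waker (ep_poll_callback)
      *
      *   schedule_hrtimeout_range() expires
      *   eavail = ep_events_available(ep)
      *     -> false, rdllist still empty
      *                                          adds epi to ep->rdllist
      *                                          wake_up(&ep->wq)
      *                                            -> picks this exclusive waiter
      *   __remove_wait_queue(&ep->wq, &wait)
      *   return 0 without reporting the event;
      *   no other waiter is woken for it
      */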

Also, please note that the non-optimized path, without the "avoid
spinlock" optimization, has not been a problem in any of our
benchmarks: when a thread times out, it is almost always still on the
wait queue anyway, except in this particular racy scenario.
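
For context, the idea in the patch is to make the availability
decision atomic with the dequeue: decide eavail while removing the
waiter under ep->lock, so a concurrent wakeup cannot slip in between
the check and the removal.  Roughly (a from-memory sketch of the
shape, not the exact hunk):

     if (!list_empty_careful(&wait.entry)) {
             write_lock_irq(&ep->lock);
             /*
              * If we timed out but wait.entry is already empty, the
              * wakeup path dequeued us after the timeout fired, so
              * there must be events to harvest; otherwise we dequeue
              * ourselves here, under the same lock the waker takes.
              */
             if (timed_out)
                     eavail = list_empty(&wait.entry);
             __remove_wait_queue(&ep->wq, &wait);
             write_unlock_irq(&ep->lock);
     }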

Thanks!
Soheil


>              Linus


