Re: Resurrecting EPOLLROUNDROBIN

On Mon, Mar 25, 2019 at 4:38 AM Marek Majkowski <marek@xxxxxxxxxxxxxx> wrote:
>
> Hi,
>
> Recently we noticed epoll is not helpful for load balancing when
> called on a listen TCP socket. I described this in a blog post:
>
> https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
>
> The short explanation: new connections going to a listen socket are
> not evenly distributed across processes that wait on the EPOLLIN. In
> practice the last process doing epoll_wait() will get the new
> connection. See the trivial program to reproduce:
>
> https://github.com/cloudflare/cloudflare-blog/blob/master/2017-10-accept-balancing/epoll-and-accept.py
>
>    $ ./epoll-and-accept.py &
>    $ for i in `seq 6`; do echo | nc localhost 1024; done
>    worker 0
>    worker 0
>    worker 0
>    worker 0
>    worker 0
>    worker 0
>
> Worker #0 did all the accept() calls. This is because the listen
> socket wait queue is a LIFO (not FIFO!). With current behaviour, the
> process calling epoll_wait() most recently will be woken up first.
> This usually is the busiest process. This leads to uneven load
> distribution across worker processes.

I recall a discussion of this at a conference several years ago, but
the details have long since faded.  Anyway:

I read the blog post, and I looked at your example, and the kernel
behavior actually seems quite sane to me.  From the kernel's
perspective, if you're calling accept in a loop in a bunch of threads
(mediated by epoll or otherwise), and one of those threads is able to
call accept() fast enough, then that thread *should* get all the
sockets.  It's cache hot, and bouncing around is expensive.
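
To make the pattern concrete, here is roughly what I take the example
to be doing -- a condensed sketch from my reading of it, not the
linked script itself (the worker count is arbitrary): several forked
workers all block in epoll_wait() on one shared listening socket, and
whichever one wins the wakeup calls accept().

    # Rough sketch of the reproduction pattern, not the linked script.
    import os
    import select
    import socket

    NUM_WORKERS = 4  # arbitrary

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", 1024))
    sock.listen(16)

    for idx in range(NUM_WORKERS):
        if os.fork() == 0:
            ep = select.epoll()
            ep.register(sock.fileno(), select.EPOLLIN)
            while True:
                ep.poll()                # every worker waits on the same fd
                conn, _ = sock.accept()  # the woken worker does the accept
                print("worker %d" % idx)
                conn.close()

    os.wait()  # the parent just hangs around; the workers never exit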

Now obviously the overall behavior here is suboptimal, but that's
arguably because the user process is being silly, not because the
kernel is doing it wrong.  Shouldn't the user process take the newly
accepted socket and hand it off to an appropriate thread for
servicing?  If I were doing this, I'd take each freshly accepted
socket and forward it to a thread (or process) that is lightly loaded
or, even better, one pinned to the CPU that RFS has assigned to the
flow, assuming that thread isn't overloaded.  If the program uses
threads, this doesn't need to involve the kernel at all; if it uses
processes, SCM_RIGHTS would do the trick.  By contrast, asking the
kernel to arbitrarily round-robin the sockets, and then keeping each
flow on whichever thread happened to get it, means that at best each
thread ends up with an arbitrary selection of flows and the balancing
isn't particularly good.
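
For the multi-process case, here is a rough sketch of the SCM_RIGHTS
hand-off I mean (untested and purely illustrative; the worker count is
arbitrary, it needs Python 3.9+ for socket.send_fds()/recv_fds(), and
the plain round-robin dispatch below is only a placeholder for a
"least loaded" or "right CPU" policy):

    # One acceptor process accepts and forwards each accepted fd to a
    # worker process over a Unix socket pair using SCM_RIGHTS.
    import os
    import socket

    NUM_WORKERS = 4           # arbitrary
    LISTEN_ADDR = ("", 1024)  # same port as the reproduction above

    def worker(ctrl, idx):
        while True:
            # recv_fds() (Python 3.9+) unwraps the SCM_RIGHTS payload.
            _, fds, _, _ = socket.recv_fds(ctrl, 1, 1)
            conn = socket.socket(fileno=fds[0])
            conn.sendall(b"worker %d\n" % idx)
            conn.close()

    channels = []
    for idx in range(NUM_WORKERS):
        parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
        if os.fork() == 0:
            parent.close()
            worker(child, idx)
        child.close()
        channels.append(parent)

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(LISTEN_ADDR)
    listener.listen(64)

    i = 0
    while True:
        conn, _ = listener.accept()
        # SCM_RIGHTS duplicates the fd into the receiving process, so
        # the acceptor can close its copy immediately.
        socket.send_fds(channels[i % NUM_WORKERS], [b"x"], [conn.fileno()])
        conn.close()
        i += 1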

Now, if someone were to actually try doing this in userspace and it
was too slow, I could see adding some kernel mechanisms to accelerate
the process.  Perhaps a way to ask accept() to return only new
connections that are RFSified to the calling CPU would be useful.  But
this shouldn't be an *epoll* mechanism, since under load there is no
actual guarantee that the CPU that returns first from epoll_wait() is
the same CPU that ends up calling accept().  (Multiple new connections
could come in and wake multiple CPUs before any of them manages to
call accept().)

So I think that EPOLLROUNDROBIN is not a great solution to the
problem, and I think that the problem isn't obviously a *kernel*
problem in the first place.


