Re: Any performance gains from using per-thread (thread-local) urings?

For what it’s worth, I am (also) using multiple “reactor” (i.e., event-driven) cores, each associated with one OS thread, and each reactor core manages its own io_uring context/queues.

Even if scheduling all SQEs through a single io_uring SQ were very cheap (say, by collecting all such SQEs in every OS thread and then somehow “moving” them to the one OS thread that manages the SQ so that it can enqueue them all), you’d still need to drain the CQ from that thread and presumably process those CQEs in a single OS thread. That will almost certainly be more work than having each reactor/OS thread dequeue the CQEs for the SQEs it submitted itself.
You could dedicate a single OS thread just to I/O and let all other threads do something else, but you’d presumably need to serialize access to shared state between them and that one I/O thread, which may become a scalability bottleneck.
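To make the per-reactor layout concrete, here is a rough, untested sketch using liburing (compile with -luring -lpthread). QUEUE_DEPTH, ITERATIONS, and the NOP SQE are placeholders standing in for real queue sizing and real I/O; the point is simply that each ring’s SQ and CQ are touched by exactly one thread:

/* Illustrative only: one reactor per core, each owning a private ring. */
#include <liburing.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_DEPTH 256        /* arbitrary example depth */
#define ITERATIONS  4          /* keep the sketch finite */

static void *reactor_thread(void *arg)
{
    struct io_uring ring;
    (void)arg;

    /* Each reactor thread initializes its own ring; nothing is shared. */
    if (io_uring_queue_init(QUEUE_DEPTH, &ring, 0) < 0) {
        perror("io_uring_queue_init");
        return NULL;
    }

    for (int i = 0; i < ITERATIONS; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        struct io_uring_cqe *cqe;

        if (!sqe)
            break;

        /* Real code would prep reads/writes here; a NOP keeps it simple. */
        io_uring_prep_nop(sqe);
        io_uring_sqe_set_data(sqe, NULL);
        io_uring_submit(&ring);

        /* Reap the completion for the SQE this same thread submitted. */
        if (io_uring_wait_cqe(&ring, &cqe) == 0)
            io_uring_cqe_seen(&ring, cqe);
    }

    io_uring_queue_exit(&ring);
    return NULL;
}

int main(void)
{
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);
    pthread_t tids[ncores];

    for (long i = 0; i < ncores; i++)
        pthread_create(&tids[i], NULL, reactor_thread, NULL);
    for (long i = 0; i < ncores; i++)
        pthread_join(tids[i], NULL);
    return 0;
}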

(If you are curious, you can read about it here: https://medium.com/@markpapadakis/building-high-performance-services-in-2020-e2dea272f6f6 )

If you do experiment with the various possible designs, though, I’d love it if you shared your findings.

—
@markpapadakis


> On 13 May 2020, at 2:01 PM, Dmitry Sychov <dmitry.sychov@xxxxxxxxx> wrote:
> 
> Hi Hielke,
> 
>> If you want max performance, what you generally will see in non-blocking servers is one event loop per core/thread.
>> This means one ring per core/thread. Of course there is no simple answer to this.
>> See how thread-based servers work vs non-blocking servers. E.g. Apache vs Nginx or Tomcat vs Netty.
> 
> I think a lot depends on the internal uring implementation: to what
> degree the kernel is able to handle multiple urings independently,
> without many congestion points (like updates of the same memory
> locations from multiple threads), and thus take advantage of one
> ring per CPU core.
> 
> For example, if the tasks from multiple rings are later combined into
> a single input queue in the kernel (effectively forming a congestion
> point), I see no reason to use an exclusive ring per core in user
> space.
> 
> [BTW, on Windows, IOCP is always one input+output queue for all (active) threads.]
> 
> Also, we could pop multiple completion events off a single CQ at
> once to spread the handling across core-bound threads.
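[Interjecting here: liburing already supports batched reaping via io_uring_peek_batch_cqe(). A rough sketch follows; dispatch_to_worker() is a hypothetical hand-off to a core-bound worker, and it receives copies of the CQE fields because the CQE slots are recycled once we advance the CQ:]

#include <liburing.h>

#define BATCH 64 /* arbitrary example batch size */

/* Hypothetical hand-off to a core-bound worker thread. */
void dispatch_to_worker(__u64 user_data, __s32 res);

void drain_cq(struct io_uring *ring)
{
    struct io_uring_cqe *cqes[BATCH];
    unsigned n, i;

    /* Non-blocking peek: returns however many CQEs are ready, up to BATCH. */
    n = io_uring_peek_batch_cqe(ring, cqes, BATCH);

    for (i = 0; i < n; i++)
        dispatch_to_worker(cqes[i]->user_data, cqes[i]->res);

    /* Consume all n entries with a single ring-buffer advance. */
    io_uring_cq_advance(ring, n);
}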
> 
> I thought about one uring per core at first, but now I'm not sure;
> maybe the kernel devs have something to add to the discussion?
> 
> P.S. uring is the main reason I'm switching from Windows to Linux dev
> for a client-server app, so I want to extract the max performance
> possible out of this new exciting uring stuff. :)
> 
> Thanks, Dmitry




