Re: Any performance gains from using per thread(thread local) urings?

> On 13 May 2020, at 4:15 PM, Dmitry Sychov <dmitry.sychov@xxxxxxxxx> wrote:
> 
> Hey Mark,
> 
> Or we could share one SQ and one CQ between multiple threads (bound by
> the max number of CPU cores) for direct read/write access, using a very
> light mutex to sync.
> 
> This also solves the thread starvation issue - thread A submits the job
> into the shared SQ while thread B both collects and _processes_ the
> result from the shared CQ, instead of waiting on its own unique CQ for
> the next completion event.
> 


Well, if an SQE is submitted by A and its matching CQE is consumed by B, and A needs access to that CQE because it is tightly coupled to state A owns exclusively (for example), or for other reasons, then you’d still need to move that CQE from B back to A, or share it somehow, which seems expensive-ish.

It depends on what kind of roles your threads have, though; I am personally very much against sharing state between threads unless there is a really good reason for it.
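
For concreteness, here is roughly what that shared-ring scheme would look like (a minimal sketch, assuming liburing; submit_read(), struct req, and push_to_owner() are hypothetical names standing in for whatever per-thread routing you already have):

    #include <liburing.h>
    #include <pthread.h>

    /* One ring shared by all threads; assume
     * io_uring_queue_init(depth, &shared_ring, 0) ran at startup. */
    static struct io_uring shared_ring;
    static pthread_mutex_t ring_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Hypothetical per-request context; the user_data tag is what lets
     * whichever thread drains the CQ route a completion back to its
     * owner. */
    struct req {
        int owner_tid;              /* thread owning the coupled state */
    };

    /* Hypothetical: hand a completion back to its owning thread. */
    void push_to_owner(int tid, struct req *r, int res);

    static int submit_read(int fd, void *buf, unsigned len, struct req *r)
    {
        pthread_mutex_lock(&ring_lock);
        struct io_uring_sqe *sqe = io_uring_get_sqe(&shared_ring);
        if (!sqe) {
            pthread_mutex_unlock(&ring_lock);
            return -1;              /* SQ full, caller retries */
        }
        io_uring_prep_read(sqe, fd, buf, len, 0);
        io_uring_sqe_set_data(sqe, r);  /* tag the SQE with its owner */
        int ret = io_uring_submit(&shared_ring);
        pthread_mutex_unlock(&ring_lock);
        return ret;
    }

    /* Any thread may drain, so completions that belong to other
     * threads must be handed back to their owners. */
    static void drain_completions(void)
    {
        struct io_uring_cqe *cqes[32];
        struct req *owners[32];
        int results[32];

        pthread_mutex_lock(&ring_lock);
        unsigned n = io_uring_peek_batch_cqe(&shared_ring, cqes, 32);
        for (unsigned i = 0; i < n; i++) {  /* copy out before advancing */
            owners[i]  = io_uring_cqe_get_data(cqes[i]);
            results[i] = cqes[i]->res;
        }
        io_uring_cq_advance(&shared_ring, n);
        pthread_mutex_unlock(&ring_lock);

        for (unsigned i = 0; i < n; i++)
            push_to_owner(owners[i]->owner_tid, owners[i], results[i]);
    }

Note that the one mutex now covers both submission and reaping, and every completion that lands on the “wrong” thread still costs a hand-off back to its owner; that hand-off is exactly the overhead I’d want to avoid.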

> On Wed, May 13, 2020 at 2:56 PM Mark Papadakis
> <markuspapadakis@xxxxxxxxxx> wrote:
>> 
>> For what it’s worth, I am (also) using multiple “reactor” (i.e., event-driven) cores, each associated with one OS thread, and each reactor core manages its own io_uring context/queues.
>> 
>> Even if scheduling all SQEs through a single io_uring SQ (by, e.g., collecting all such SQEs in every OS thread and then somehow “moving” them to the one OS thread that manages the SQ so that it can enqueue them all) is very cheap, you’d still need to drain the CQ from that thread and presumably process those CQEs in a single OS thread, which will definitely be more work than having each reactor/OS thread dequeue CQEs for the SQEs it submitted itself.
>> You could have a single OS thread just for I/O, and all other threads could do something else, but you’d presumably need to serialize access/share state between them and the one I/O thread, which may be a scalability bottleneck.
>> 
>> ( if you are curious, you can read about it here https://medium.com/@markpapadakis/building-high-performance-services-in-2020-e2dea272f6f6 )
>> 
>> If you experiment with the various possible designs though, I’d love it if you were to share your findings.
>> 
>> —
>> @markpapapdakis
>> 
>> 
>>> On 13 May 2020, at 2:01 PM, Dmitry Sychov <dmitry.sychov@xxxxxxxxx> wrote:
>>> 
>>> Hi Hielke,
>>> 
>>>> If you want max performance, what you generally will see in non-blocking servers is one event loop per core/thread.
>>>> This means one ring per core/thread. Of course there is no simple answer to this.
>>>> See how thread-based servers work vs non-blocking servers. E.g. Apache vs Nginx or Tomcat vs Netty.
>>> 
>>> I think a lot depends on the internal uring implementation: to what
>>> degree the kernel is able to handle multiple urings independently,
>>> without many congestion points (like updates of the same memory
>>> locations from multiple threads), and thus take advantage of one ring
>>> per CPU core.
>>> 
>>> For example, if the tasks from multiple rings are later combined into
>>> a single input kernel queue (effectively forming a congestion point),
>>> I see no reason to use an exclusive ring per core in user space.
>>> 
>>> [BTW, in Windows, IOCP always uses a single input+output queue for
>>> all (active) threads.]
>>> 
>>> Also, we could pop multiple completion events from a single CQ at
>>> once to spread the handling across core-bound threads.
>>> 
>>> I thought about one uring per core at first, but now I'm not sure -
>>> maybe the kernel devs have something to add to the discussion?
>>> 
>>> P.S. uring is the main reason I'm switching from Windows to Linux
>>> dev for a client-server app, so I want to extract the max performance
>>> possible out of this new exciting uring stuff. :)
>>> 
>>> Thanks, Dmitry
>> 
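
For contrast, the per-reactor arrangement I described in the quoted message is, in its simplest form, something like this (again a minimal sketch, assuming liburing; the queue depth and the handle_cqe() callback are illustrative):

    #include <liburing.h>

    /* Hypothetical: application callback for one completed request. */
    void handle_cqe(struct io_uring_cqe *cqe);

    /* Each reactor thread owns its ring outright: no locking, and
     * every CQE it reaps belongs to an SQE it submitted itself. */
    static void reactor_loop(void)
    {
        struct io_uring ring;           /* one ring per OS thread */
        io_uring_queue_init(256, &ring, 0);

        for (;;) {
            io_uring_submit(&ring);     /* flush any pending SQEs */

            struct io_uring_cqe *cqe;
            if (io_uring_wait_cqe(&ring, &cqe) < 0)
                break;

            /* Batch-drain whatever is available in one pass (this
             * also covers Dmitry's point about popping several CQEs
             * from the CQ at once). */
            struct io_uring_cqe *batch[64];
            unsigned n = io_uring_peek_batch_cqe(&ring, batch, 64);
            for (unsigned i = 0; i < n; i++)
                handle_cqe(batch[i]);
            io_uring_cq_advance(&ring, n);
        }

        io_uring_queue_exit(&ring);
    }

No cross-thread hand-offs and no locks; whether the kernel side keeps the per-ring paths independent enough is, as Dmitry says, a question for the kernel devs.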