On Mon, 2016-11-07 at 07:49 -0700, Rich Megginson wrote:
> On 11/06/2016 04:07 PM, William Brown wrote:
> > On Fri, 2016-11-04 at 12:07 +0100, Ludwig Krispenz wrote:
> >> On 11/04/2016 06:51 AM, William Brown wrote:
> >>> http://www.port389.org/docs/389ds/design/autotuning.html
> >>>
> >>> I would like to hear discussion on this topic.
> >> thread number:
> >> independent of number of cpus I would have a default minimum number of
> >> threads,
> > What do you think would be a good minimum? With too many threads per CPU,
> > we can cause an overhead in context switching that is not efficient.
>
> Even if the threads are unused, or mostly idle?

It's when they stop being idle that the contention becomes an issue. We
aren't going to ship something like 64 threads on an 8 thread machine;
that's just asking for trouble.

Like I said, I think this is why it's important to do some testing, to work
out the right point between having enough threads to keep the CPU busy, but
not so many that we degrade performance or increase latency.

> >> your test result for reduced thread number is with clients quickly
> >> handling responses and short operations.
> >> But if some threads are serving lazy clients or do database access and
> >> have to wait, you can quickly run out of threads handling new ops
> > Mmm, this is true. Nunc-Stans helps a bit here, but not completely.
>
> In this case, where there are a lot of mostly idle clients that want to
> maintain an open connection, nunc-stans helps a great deal, both because
> epoll is much better than a giant poll() array, and because libevent
> maintains a sorted idle connection list for you.

Yep! Well, I need to do a bit more also for connection table replacement,
but yes, this will be much better.

--
Sincerely,

William Brown
Software Engineer
Red Hat, Brisbane
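[Editor's note: a minimal sketch of the kind of heuristic being debated above, i.e. deriving a default worker thread count from the detected CPU count while keeping a fixed floor and ceiling. The function name, the multiplier, and both constants are illustrative assumptions, not values from the autotuning design page or the 389-ds code.]

```c
#include <stdio.h>
#include <unistd.h>   /* sysconf() */

/* Assumed floor: a minimum number of workers independent of CPU count,
 * so slow clients or DB waits don't exhaust the pool on small machines. */
#define MIN_WORKER_THREADS 16
/* Assumed ceiling: avoid shipping far more threads than CPUs, which only
 * adds context-switch overhead once they stop being idle. */
#define MAX_WORKER_THREADS 64

static long
autotune_worker_threads(void)
{
    long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncpu < 1) {
        ncpu = 1;   /* sysconf can fail; assume a single CPU */
    }

    /* Start from a small multiple of the online CPU count ... */
    long threads = ncpu * 2;

    /* ... then clamp between the assumed floor and ceiling. */
    if (threads < MIN_WORKER_THREADS) {
        threads = MIN_WORKER_THREADS;
    }
    if (threads > MAX_WORKER_THREADS) {
        threads = MAX_WORKER_THREADS;
    }
    return threads;
}

int
main(void)
{
    printf("default worker threads: %ld\n", autotune_worker_threads());
    return 0;
}
```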