Linux: more cores = less concurrency.

Hi Guys,

I'm just doing some tests on a new server, running one of our heavy select functions (the select part of a plpgsql function that allocates seats) concurrently.  We do use connection pooling and split some selects out to Slony slaves, but the tests here are primarily to see what an individual server is capable of.

The new server uses 4 x 8-core Xeon X7550 CPUs at 2GHz; our current servers are 2 x 4-core Xeon E5320 CPUs at 2GHz.

What I'm seeing is that when the number of clients exceeds the number of cores, the new server actually performs better with fewer cores in use.

Has anyone else seen this behaviour?  I'm guessing it's either a hardware limitation or something to do with Linux process management / scheduling.  Any idea what to look into?

My benchmark utility is just a little .net/npgsql app that runs increasing numbers of clients concurrently; each client runs a specified number of iterations of whatever SQL I specify.
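
In essence each run does something like the sketch below (a rough illustration only, not the actual program linked further down - the connection string, the SQL, and the client counts are placeholders):

    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;
    using Npgsql;

    class ConcurrencyBench
    {
        // Placeholders - point these at your own server and query.
        const string ConnString = "Host=testserver;Database=testdb;Username=test;Password=test";
        const string Sql = "SELECT 1";   // stand-in for the heavy select function

        static void Main()
        {
            int iterations = 100;
            foreach (int clients in new[] { 4, 8, 16, 32, 64 })
            {
                var sw = Stopwatch.StartNew();
                var tasks = new Task[clients];
                for (int i = 0; i < clients; i++)
                {
                    tasks[i] = Task.Run(() =>
                    {
                        // One connection per client, held for the whole run,
                        // so we measure query throughput rather than connect cost.
                        using (var conn = new NpgsqlConnection(ConnString))
                        {
                            conn.Open();
                            using (var cmd = new NpgsqlCommand(Sql, conn))
                            {
                                for (int j = 0; j < iterations; j++)
                                {
                                    using (var reader = cmd.ExecuteReader())
                                    {
                                        while (reader.Read()) { }   // drain the result set
                                    }
                                }
                            }
                        }
                    });
                }
                Task.WaitAll(tasks);
                sw.Stop();
                double tps = clients * iterations / sw.Elapsed.TotalSeconds;
                Console.WriteLine("{0} clients: {1:F1}s, ~{2:F0} queries/sec",
                                  clients, sw.Elapsed.TotalSeconds, tps);
            }
        }
    }

Throughput for each client count is just total queries divided by wall-clock time, which is what's plotted in the results below.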

I've posted some results and the test program here:

http://www.8kb.co.uk/server_benchmarks/

