On Tue, May 23, 2017 at 2:14 PM, Jarek <jarek@xxxxxxxxxxxxx> wrote:
> On Tue, 2017-05-23 at 11:39 -0700, Steve Crawford wrote:
>> The answer, as always, is "it depends."
>>
>> Can you give us an overview of your setup? The appropriate setup for
>> small numbers of long-running analytical queries (typically faster
>> CPUs) will be different than a setup for handling numerous
>> simultaneous connections (typically more cores).
>
> I have a pool of clients (~30) inserting about 50 records per second
> into the database (in total, from all clients) and a small number
> (<10) of clients querying the database for those records once per 10s.

OK, how many of those clients are typically hitting the db at the same
time? If you see 3 or 4 clients working at a time with the rest idle,
that's a completely different load than if you've got 30 running near
full throttle.

I'd say build a simple synthetic workload that approximates your
current workload and see what it does on a smaller machine first
(there's a rough pgbench sketch at the bottom of this mail). If all 30
clients are keeping the db busy at once, then definitely more cores.
But also faster memory. A 32-core machine running ancient 800MHz
memory is gonna get stomped by 16 cores at a higher clock with memory
at 2000MHz or better. Always pay attention to memory speed, especially
bandwidth in GB/s. DB CPUs are basically data pumps, moving data as
fast as possible from place to place. Bigger internal piping matters
just as much as core speed and core count.

Also, if all 30 clients are working hard, see how it runs behind a db
pooler. You should be able to find the approximate knee of performance
by synthetic testing; best throughput is usually somewhere around 1x
to 2x the number of cores, depending on IO and memory.

> Other queries are rare and irregular.
> The biggest table has ~100 million records (older records are purged
> nightly). Database size is ~13GB.
> In the near future I'm expecting ~150 clients and 250 inserts per
> second and

OK, so yeah, definitely look at connection pooling (there's a sample
pgbouncer config at the bottom too). You don't want to be handling 150
backends on any server if you don't have to. Performance-wise, a
14-core machine falls off a cliff by 28 or so active connections.

> more clients querying the database.

OK, if you're gonna let users throw random sql at it, then you need
connection pooling even more. Assuming writes have priority, you'd
want to limit the readers to some number of cores to keep them out of
your hair.

> The server is also running apache with a simple web application
> written in python.
> For the same price, I can get 8C/3.2GHz or 14C/2.6GHz. Which one will
> be better?

CPU names / models please. If they're Intel, look them up on ark
(ark.intel.com) and check the memory bandwidth.

>> But CPU is often not the limiting factor. With a better
>> understanding of your needs, people here can offer suggestions for
>> memory, storage, pooling, network, etc.
>>
>> Cheers,
>> Steve
>>
>> On Tue, May 23, 2017 at 11:29 AM, Jarek <jarek@xxxxxxxxxxxxx> wrote:
>>> Hello!
>>>
>>> I have a heavily loaded PostgreSQL server which I want to upgrade
>>> so it will handle more traffic. Can I estimate what is better: more
>>> cores or higher frequency? I expect that pg_stat should give some
>>> tips, but I don't know where to start...
>>>
>>> best regards
>>> Jarek
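For reference, here's roughly what I mean by a synthetic test, using
pgbench's custom script mode. The table and column names below are
invented stand-ins; aim this at a scratch copy of your schema, never
at production:

    -- hypothetical scratch table; swap in your real schema
    CREATE TABLE readings (
        client_id   int,
        val         float8,
        recorded_at timestamptz
    );

    -- insert.sql: one client's worth of insert traffic
    -- (pgbench 9.6+ syntax; older pgbench spells it \setrandom cid 1 30)
    \set cid random(1, 30)
    INSERT INTO readings (client_id, val, recorded_at)
        VALUES (:cid, random(), now());

    # 30 clients, throttled to ~50 inserts/sec in total, for 5 minutes:
    pgbench -n -c 30 -j 4 -R 50 -T 300 -f insert.sql yourdb

Then drop the -R throttle and step -c upward past the core count:
throughput should climb, flatten somewhere around 1x to 2x cores, and
then sag. That flat spot is the knee I mentioned.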
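And by a pooler I mostly mean pgbouncer in transaction mode. A minimal
pgbouncer.ini sketch, with the pool size picked for a ~14-core box and
the dbname, paths and auth all placeholders you'd fill in:

    [databases]
    yourdb = host=127.0.0.1 port=5432 dbname=yourdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; let all ~150 app clients connect...
    max_client_conn = 200
    ; ...but funnel them into ~1-2x cores worth of real backends
    pool_mode = transaction
    default_pool_size = 20

Your app connects to 6432 instead of 5432 and the 150 client
connections get multiplexed over ~20 server backends. One caveat:
transaction pooling breaks session state (prepared statements,
advisory locks, SET), so check your python layer first.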
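And since the original mail asked where to start with pg_stat: sample
pg_stat_activity while your normal load runs. Something like:

    -- busy vs. idle backends right now
    SELECT state, count(*)
      FROM pg_stat_activity
     GROUP BY state;

    -- rough insert/read mix since the last stats reset
    SELECT tup_inserted, tup_returned, xact_commit
      FROM pg_stat_database
     WHERE datname = current_database();

If 'active' rarely climbs past a handful, you're nowhere near needing
14 cores; if it sits pegged near your client count, that's the
more-cores-plus-pooler scenario.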
--
To understand recursion, one must first understand recursion.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance