On Wed, Jul 8, 2015 at 12:48 PM, Craig James <cjames@xxxxxxxxxxxxxx> wrote:
> On Tue, Jul 7, 2015 at 10:31 PM, Joshua D. Drake <jd@xxxxxxxxxxxxxxxxx>
> wrote:
>>
>> On 07/07/2015 08:05 PM, Craig James wrote:
>>>
>>> No ideas, but I ran into the same thing. I have a set of C/C++ functions
>>> that put some chemistry calculations into Postgres as extensions (things
>>> like, "calculate the molecular weight of this molecule"). As SQL
>>> functions, the whole thing bogged down, and we never got the scalability
>>> we needed. On our 8-CPU setup, we couldn't get more than 2 CPUs busy at
>>> the same time, even with dozens of clients.
>>>
>>> When I moved these same functions into an Apache fast-CGI HTTP service
>>> (exact same code, same network overhead), I could easily scale up and
>>> use the full 100% of all eight CPUs.
>>>
>>> I have no idea why, and never investigated further. The convenience of
>>> having the functions in SQL wasn't that important.
>>
>> I admit that I haven't read this whole thread, but:
>>
>> Using Apache Fast-CGI, you are going to fork a process for each instance
>> of the function being executed, and that in turn will use all CPUs up to
>> the max available resource.
>>
>> With PostgreSQL, that isn't going to happen unless you are running (at
>> least) 8 functions across 8 connections.
>
> Well, right, which is why I mentioned "even with dozens of clients."
> Shouldn't that scale to at least all of the CPUs in use if the function is
> CPU-intensive (which it is)?

Only in the absence of inter-process locking and cache line bouncing.

merlin

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance