Time for my pet meme to wiggle out of its hole (next to Phil's, and a day later). For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level. It has to. And it has to because the Big Boys already do so, to some extent, and they've realized that the BCNF schema on such machines is supremely efficient. PG/MySQL/OSEngineOfChoice will get left behind simply because the efficiency offered will be worth the price.

I know this is far from trivial, and my C skills are such that I can offer no help. These machines have been the obvious "current" machine in waiting for at least 5 years, and the applications which benefit from parallelism (servers of all kinds, in particular) will sort the winners from the losers based on how well they exploit it. Much as it pains me to say it, the Microsoft approach to software (write to the next-generation processor and force users to upgrade) will be the winning strategy for database engines. There's just way too much to gain.

-- Robert

---- Original message ----
>Date: Thu, 03 Feb 2011 09:44:03 -0600
>From: pgsql-performance-owner@xxxxxxxxxxxxxx (on behalf of Andy Colson <andy@xxxxxxxxxxxxxxx>)
>Subject: Re: getting the most out of multi-core systems for repeated complex SELECT statements
>To: Mark Stosberg <mark@xxxxxxxxxxxxxxx>
>Cc: pgsql-performance@xxxxxxxxxxxxxx
>
>On 2/3/2011 9:08 AM, Mark Stosberg wrote:
>>
>> Each night we run over 100,000 "saved searches" against PostgreSQL
>> 9.0.x. These are all complex SELECTs using "cube" functions to perform a
>> geo-spatial search to help people find adoptable pets at shelters.
>>
>> All of our machines in development and production have at least 2 cores
>> in them, and I'm wondering about the best way to maximally engage all
>> the processors.
>>
>> Currently we simply run the searches serially. I realize PostgreSQL may
>> be taking some advantage of the multiple cores in this arrangement, but
>> I'm seeking advice about the possibility and methods for running the
>> searches in parallel.
>>
>> One naive approach I considered was to use parallel cron scripts. One
>> would run the "odd" searches and the other would run the "even"
>> searches. This would be easy to implement, but perhaps there is a better
>> way. To those who have covered this area already, what's the best way
>> to put multiple cores to use when running repeated SELECTs with PostgreSQL?
>>
>> Thanks!
>>
>> Mark
>
>1) I'm assuming this is all server-side processing.
>2) One database connection will use one core. To use multiple cores, you
>need multiple database connections.
>3) If your jobs are IO bound, then running multiple jobs may hurt
>performance.
>
>Your naive approach is the best. Just spawn off two jobs (or three, or
>whatever). I think it's also the only method. (If there is another
>method, I don't know what it would be.)
>
>-Andy

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
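
A footnote for anyone wanting to try Andy's "spawn off N jobs" advice: below is a minimal sketch in Python with psycopg2 of Mark's odd/even idea generalized to N workers. Each worker process opens its own connection (one connection uses one core) and runs only the saved searches whose id falls in its partition. The saved_searches table, its query_sql column, the DSN, and the handle_results stub are all hypothetical placeholders, not the poster's actual schema.

# Partition saved searches across worker processes, one DB connection each.
from multiprocessing import Process

import psycopg2

NUM_WORKERS = 4  # roughly one per core; use fewer if the jobs are IO bound
DSN = "dbname=pets host=localhost"  # placeholder connection string

def handle_results(search_id, rows):
    # Placeholder for whatever is done with each search's matches.
    print(search_id, len(rows))

def run_partition(worker_no):
    conn = psycopg2.connect(DSN)  # one connection per process
    cur = conn.cursor()
    # Take this worker's share of the searches. %% is a literal % to
    # psycopg2, so the query sent is: ... WHERE id % 4 = <worker_no>.
    cur.execute(
        "SELECT id, query_sql FROM saved_searches WHERE id %% %s = %s",
        (NUM_WORKERS, worker_no),
    )
    for search_id, query_sql in cur.fetchall():
        run_cur = conn.cursor()
        run_cur.execute(query_sql)  # the stored complex cube/geo SELECT
        handle_results(search_id, run_cur.fetchall())
        run_cur.close()
    conn.close()

if __name__ == "__main__":
    workers = [Process(target=run_partition, args=(n,))
               for n in range(NUM_WORKERS)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

With NUM_WORKERS = 2 this is exactly the odd/even cron split; the modulo parameter just makes the worker count tunable, per Andy's point 2, without changing the partitioning scheme.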