Each night we run over 100,000 "saved searches" against PostgreSQL 9.0.x. These are all complex SELECTs that use "cube" functions to perform geo-spatial searches, helping people find adoptable pets at shelters. All of our machines, in development and in production, have at least 2 cores, and I'm wondering about the best way to engage all of them. Right now we simply run the searches serially. I realize PostgreSQL may be making some use of the multiple cores in this arrangement, but I'm seeking advice on whether and how to run the searches in parallel.

One naive approach I considered was to use two parallel cron scripts: one would run the "odd" searches and the other the "even" ones. That would be easy to implement, but perhaps there is a better way.

To those who have covered this ground already: what's the best way to put multiple cores to use when running repeated SELECTs with PostgreSQL?

Thanks!

Mark
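
P.S. For concreteness, here's a minimal sketch of the odd/even idea, written as a single Python script with two worker processes rather than two cron jobs. The connection string and the table/column names (saved_searches, id, query) are made up for illustration; the real cube-based geo-spatial SELECT would go where the placeholder is.

import multiprocessing
import psycopg2

DSN = "dbname=petsearch"   # made-up connection string

def run_partition(parity):
    # Each worker opens its own connection, so the two
    # backends can be scheduled on separate cores.
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    # %% is a literal % in psycopg2; parity is 0 or 1.
    cur.execute(
        "SELECT id, query FROM saved_searches WHERE id %% 2 = %s",
        (parity,))
    for search_id, query in cur.fetchall():
        pass  # run the real cube-based geo-spatial SELECT here
    conn.close()

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=run_partition, args=(p,))
               for p in (0, 1)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()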