Mark,
you could try the gevel module to get the structure of your GiST index and
check whether items are distributed more or less homogeneously across the
different levels. You can also visualize the index, as shown at
http://www.sai.msu.su/~megera/wiki/Rtree_Index
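For example, gevel exposes functions for inspecting a GiST index; a minimal sketch, assuming gevel is installed and the index is named pets_coords_idx (a hypothetical name):

```sql
-- Summary statistics: number of levels, pages, tuples, and page occupancy.
SELECT gist_stat('pets_coords_idx');

-- Print the tree structure; the second argument limits the depth shown.
SELECT gist_tree('pets_coords_idx', 0);
```

Skewed occupancy between levels in this output is a hint that the index could benefit from rebuilding.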
Also, if your searches are neighbourhood searches, then you could try KNN,
available in the 9.1 development version.
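As a sketch of what KNN search looks like (table and column names are hypothetical; in 9.1 the KNN-GiST distance ordering works with the point type's <-> operator, backed by a GiST index on the column):

```sql
-- Return the 20 pets nearest to a query point, using index-assisted
-- distance ordering instead of computing distances for every row.
SELECT pet_id
FROM pets
ORDER BY location <-> point '(-86.15, 39.77)'
LIMIT 20;
```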
Oleg
On Thu, 3 Feb 2011, Mark Stosberg wrote:
Each night we run over 100,000 "saved searches" against PostgreSQL
9.0.x. These are all complex SELECTs using "cube" functions to perform a
geo-spatial search to help people find adoptable pets at shelters.
All of our machines, in development and production, have at least 2 cores
in them, and I'm wondering about the best way to maximally engage all
the processors.
Currently we simply run the searches serially. I realize PostgreSQL may be
taking some advantage of the multiple cores in this arrangement, but I'm
seeking advice about the possibility and methods of running the
searches in parallel.
One naive approach I considered was to use parallel cron scripts. One
would run the "odd" searches and the other would run the "even"
searches. This would be easy to implement, but perhaps there is a better
way. To those who have covered this area already, what's the best way
to put multiple cores to use when running repeated SELECTs with PostgreSQL?
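The odd/even idea generalizes to N workers with a modulo split over the search ids. A minimal sketch in Python (run_search is a stand-in for issuing the saved-search SELECT, e.g. via psycopg2; names and the id range are hypothetical). Client threads are enough here: each worker would hold its own connection, and PostgreSQL runs each connection's query in a separate backend process, so multiple server cores are engaged.

```python
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = 2  # one partition per core; bump to match the server

def run_search(search_id):
    # Placeholder: in practice, execute the saved search's SELECT on a
    # per-worker database connection and return its result.
    return search_id

def run_partition(worker_idx):
    # Each worker handles the ids congruent to its index mod N_WORKERS,
    # so every search is run exactly once across all workers.
    ids = [i for i in range(1, 101) if i % N_WORKERS == worker_idx]
    return [run_search(i) for i in ids]

with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    results = list(pool.map(run_partition, range(N_WORKERS)))

# Flatten and check coverage: all 100 searches handled, none twice.
done = sorted(sid for part in results for sid in part)
print(len(done))
```

The same modulo split also works for the two-cron-scripts version: each script just selects its half of the saved-search ids with `WHERE id % 2 = 0` or `= 1`.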
Thanks!
Mark
Regards,
Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@xxxxxxxxxx, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance