(It's been an hour and I don't see my message on the list, so I'm sending it again. I've moved the queries and the EXPLAIN ANALYZE output out of the email in case it was rejected for being too long.)
Query: http://pastebin.ca/57218
In the pictures table, all of the ratings share one index:

CREATE INDEX idx_rating ON pictures USING btree (rating_nudity, rating_violence, rating_sex, rating_racism, rating_spoilers, rating_yaoi, rating_yuri, rating_profanity);

approved, date_submitted, and user_id also each have their own btree index. In the picture_categories table, pid and cat_id each have their own btree index, plus there is one on both columns together. Full table definition: http://pastebin.ca/57219
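In case that pastebin ever goes away, the picture_categories indexes look roughly like this (the index names and the column order of the combined index are placeholders from memory; the exact definitions are in the table definition paste):

CREATE INDEX idx_pc_pid ON picture_categories USING btree (pid);
CREATE INDEX idx_pc_cat_id ON picture_categories USING btree (cat_id);
CREATE INDEX idx_pc_pid_cat ON picture_categories USING btree (pid, cat_id);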
The cat_id and rating values vary from query to query. The one listed above took 54 seconds in a test run just now. Here is the EXPLAIN ANALYZE output: http://pastebin.ca/57220
Both pictures and picture_categories have about 287,000 rows.
This query needs to run in under about a second, or it kills my site by clogging Apache slots (Apache maxes out at 256 slots and I can have several hundred people on the site at a time). How can I make it run faster?

The server is a dual Xeon with a gig of RAM dedicated mostly to PostgreSQL. Here are the changed lines in my postgresql.conf: http://pastebin.ca/57222 (for anyone who doesn't want to open the paste, the kinds of settings involved are sketched at the bottom of this mail).

I know hyperthreading is considered something that can slow down a server, but with my very high concurrency (averaging about 400-500 concurrent users during peak hours) I am hoping the extra virtual CPUs will help. Does anyone have experience that says otherwise at high concurrency?
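To give an idea of the kinds of postgresql.conf settings involved (the values below are placeholders only; my actual changed lines are in the pastebin above):

# placeholder values only -- the real lines are in the pastebin above
shared_buffers = 20000          # shared buffer cache (counted in 8kB pages on older versions)
effective_cache_size = 100000   # rough estimate of how much the OS caches, used by the planner
work_mem = 8192                 # per-sort/hash memory (called sort_mem on older versions)
max_connections = 300           # has to cover all the Apache slots hitting the database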