On 12.04.2007, at 15:58, Jason Lustig wrote:
Wow! That's a lot to respond to. Let me go through some of the ideas... First, I just turned on autovacuum; I had forgotten to do that. I'm not seeing a major impact, though. Also, I know that it's not optimal for a dedicated server.
Hmm, why not? Have you recently vacuumed your database manually so it gets cleaned up? Even a VACUUM FULL might be useful if the database is really bloated.
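Something like this, for instance (a rough sketch; "mydb" stands in for your database name, and note that VACUUM FULL takes exclusive locks on the tables it works on, so run it in a quiet period):

    psql mydb -c "VACUUM ANALYZE;"   # reclaim dead rows, refresh planner statistics
    psql mydb -c "VACUUM FULL;"      # compact badly bloated tables; locks them while running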
It's not just for Postgres; it also has our Apache server on it. We're just getting started and didn't want to make a major investment right now in the most expensive server we could get.
Hmm, but more RAM would definitely make sense, especially in that scenario. It really sounds like your machine is swapping itself to death.
What does the system say about memory usage?
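On Linux, for instance, something like this will tell you (assuming the standard procps tools are installed):

    free -m       # overall RAM and swap usage, in megabytes
    vmstat 5      # watch the si/so columns; anything consistently nonzero means active swapping
    top           # press M to sort by memory and see which processes eat the RAM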
Some of the queries are definitely making an impact on the speed. We are constantly trying to improve performance, and part of that is reassessing our indexes and denormalizing data where it would help. We're also doing work with memcached to cache the results of some of the more expensive operations.
Hmm, that hurts you even more, as memcached uses RAM too. At the moment I really don't think this has anything to do with PG itself, but rather with not having enough memory for what you want to achieve.
What might also help is connection pooling, so that fewer backend processes are created for the requests. What you can do about that depends on your middleware. pg_pool might be an option.
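Roughly like this in pgpool.conf (a sketch only; the parameter names are from pgpool-II and may differ in your version, and the values are just starting points):

    listen_addresses = '*'
    port = 9999                      # clients connect here instead of 5432
    backend_hostname0 = 'localhost'
    backend_port0 = 5432             # the real PostgreSQL server
    num_init_children = 32           # max concurrent client connections
    max_pool = 4                     # cached backend connections per child process

You would then point Apache at port 9999, and connections get reused instead of forking a fresh backend for every request.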
cug