Pretty much everyone thinks their requirements are exceptional. It's
funny how infrequently that's actually true. The techniques that favor
index use aren't that exotic: collect better stats, set the basic parameters
correctly, adjust random_page_cost, and investigate the plans that don't do
what you want in order to figure out why. It's easy to decide there's
something special about your data rather than follow the fundamentals here;
I'd urge you not to do that. The odds that the real issue is simply that
you're feeding the optimizer bad data are higher than most people think,
which brings us to:
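(For concreteness, those fundamentals look roughly like the following; the
table and column names are made up, and the exact values depend entirely on
your hardware and data:

    -- Raise the statistics target on a skewed column, then re-analyze:
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE orders;

    -- Lower random_page_cost when most of the working set is cached;
    -- the default of 4.0 assumes mostly-uncached rotating disks:
    SET random_page_cost = 2.0;

    -- Then look at why a given plan still avoids the index:
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

None of this is a recipe, just the usual starting points.)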
I understand that. And the answer is usually to go and do an ANALYZE
manually (if it isn't that, it's some dependency on a set-returning stored
function we wrote before we could specify ROWS and COST). My question is
really: why do I need this constant intervention? When we rarely do
aggregates, when our queries are (nearly) always single-row lookups (and
very rarely more than 50 rows) out of tables that have hundreds of
thousands to millions of rows, what does it take to NOT have to intervene?
Which brings me to your next point:
I don't see effective_cache_size listed there. If that's at the default,
I wouldn't be surprised if you're seeing sequential scans instead of
indexed ones far too often.
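(If it is at the default, a hypothetical starting point on a dedicated
server with, say, 8GB of RAM would be:

    # postgresql.conf
    effective_cache_size = 6GB

This setting doesn't allocate any memory; it only tells the planner how much
of the data is likely to be cached by the OS, which makes index scans look
cheaper.)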
Nice to know - I suspect someone has been messing around with settings they
don't understand. I do know that after some screwing around they got the
server to the point where it wouldn't restart, and then backed their changes
out until it would.
max_connections = 200
work_mem = 512MB
This is a frightening combination by the way.
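(The arithmetic behind that remark: work_mem can be allocated per sort or
hash operation, per query, per backend, so 200 connections each running a
single 512MB sort is already on the order of 100GB of RAM, before counting
shared_buffers. A hypothetical, more conservative pairing for a modestly
sized server would look more like:

    max_connections = 50
    work_mem = 16MB

with work_mem raised per-session only for the few queries that genuinely
need it.)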
Looks like it's connected to the issue above. The real maximum number of
connections is about 1/10th of that.
Thanks Greg!
Carlo