On 07/14/2012 08:17 PM, Robert Klemme wrote:
On Sat, Jul 14, 2012 at 11:50 AM, B Sreejith <bsreejithin@xxxxxxxxx> wrote:
Dear All,
Thanks a lot for all the invaluable comments.
In addition to Craig's excellent advice about measurements, there's
something else you can do: with knowledge of the queries your
application fires against the database, you can evaluate your schema
and index definitions. While there is no guarantee that your
application will scale well if all indexes are present
Don't forget that sometimes it's better to DROP an index that isn't used
much, or that only helps occasional queries that aren't time-sensitive.
Every index has a cost to maintain - it slows down your inserts and
updates and it competes for disk cache with things that might be more
beneficial.
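As a rough sketch of how you might find such candidates: the statistics
view pg_stat_user_indexes tracks how often each index has been scanned,
so indexes with idx_scan at or near zero since the last stats reset are
worth a closer look (any index or table names in a DROP would of course
be your own; the one below is made up):

```sql
-- List user indexes, least-used and largest first, as DROP candidates.
-- Note: idx_scan counts since the last statistics reset, and an index
-- may still be needed to enforce a UNIQUE or PRIMARY KEY constraint.
SELECT schemaname,
       relname       AS table_name,
       indexrelname  AS index_name,
       idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;

-- If an index really is unused (and not backing a constraint):
-- DROP INDEX some_unused_idx;  -- hypothetical name
```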
b) do not have indexes which support the queries your application
runs against these tables, which will result in full table scans.
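A quick way to check this for any given query is EXPLAIN: it shows
whether the planner chooses an index or falls back to a sequential
(full table) scan. The table and column names below are made up for
illustration:

```sql
-- Hypothetical example: does this query use an index?
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;

-- A plan node like "Seq Scan on orders" indicates a full table scan;
-- "Index Scan using orders_customer_id_idx on orders" indicates the
-- query is supported by an index.
```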
A full table scan is not inherently a bad thing, even for a huge table.
Sometimes you just need to examine every row, and the fastest way to do
that is without a doubt a full table scan.
Remember, a full table scan won't tend to push everything out of
shared_buffers (large sequential scans go through a small ring buffer),
so it can also avoid competition for cache.
(If anyone ever wants concurrent scans badly enough to implement them,
full table scans with effective_io_concurrency > 1 will become a *lot*
faster for some types of query).
--
Craig Ringer
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)