I have a very big database, around 15 million records, and the dump file
is around 12 GB.
After importing this dump into the database, I have noticed that query
response time is initially very slow but improves with time.
Any suggestions for improving performance after the dump is imported into
the database would be highly appreciated!
This is pretty normal. When the db first starts up, or right after a
load, it has nothing in its buffers or the kernel cache. As you access
more and more data, the db and the OS learn what is most commonly
accessed, start holding onto that data, and throw the less-used
stuff away to make room for it. Our production dbs run at a load
factor of about 4 to 6, but when first started and put in the loop
they'll hit 25 or 30 and have slow queries for a minute or so.
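One way to watch that warm-up happen is to track the shared-buffer hit
ratio for the database as queries run. A minimal sketch using the standard
pg_stat_database view:

```sql
-- Ratio of shared-buffer hits to total block requests for the current
-- database. Expect a low ratio right after startup or a restore, climbing
-- toward ~0.99 as the working set settles into cache.
SELECT blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = current_database();
```

Note this only sees PostgreSQL's own buffers, not the kernel cache, so the
real picture is usually a bit better than the ratio suggests.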
Having a fast IO subsystem will help offset some of this, and
sometimes a "select * from bigtable" to pre-warm the cache might too.
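A sketch of both warming approaches, assuming a hypothetical table name
"bigtable":

```sql
-- A throwaway sequential scan pulls the whole table through the kernel
-- cache (and part of it into shared_buffers):
SELECT count(*) FROM bigtable;

-- On PostgreSQL 9.4+ the contrib extension pg_prewarm can load a relation
-- into shared_buffers directly, without a dummy query:
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('bigtable');
```

For a 12 GB dump you would typically prewarm only the hottest tables and
indexes rather than everything, since shared_buffers is usually much
smaller than the data set.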
Maybe it's the updating of the hint bits?
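If hint bits are the cause (the first reader of each freshly loaded row has
to set them, turning early reads into writes), a vacuum right after the
restore pays that cost once up front. A sketch, with "bigtable" again a
placeholder:

```sql
-- VACUUM touches every page and sets the hint bits, so the first real
-- queries don't carry that write overhead; ANALYZE refreshes planner
-- statistics, which are also stale right after a restore.
VACUUM ANALYZE bigtable;
```

Running ANALYZE (at minimum) after any bulk load is worthwhile regardless,
since bad statistics alone can explain slow initial query plans.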
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance