I got that figured out, and the data is now going into my v9.5 cluster (shiny and new!).
I happened to hit 'Enter' in my terminal window after it had been idle for ~1 hr, and it gave me this error:
psql: fe_sendauth: no password supplied
I corrected that with pgpass and things are looking good.
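(For the archives: a .pgpass entry is one line of host:port:database:user:password, and the file must be chmod 600. The line below matches my target setup, with the password obviously just a placeholder:)

db-blob04:5444:mls:postgres:mysecretpassword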
Thanks.
jeff
On Fri, May 27, 2016 at 9:17 PM Jeff Baldwin <tarheeljeff@xxxxxxxxx> wrote:
Thanks Greg,

Sounds like I've unknowingly stumbled onto a good path, the one you suggested. I actually installed v9.5 on the target server. I have it running on a different port (5444) and using a different data directory than the v8.3 install.

I'm doing the dump and forwarding it to the remote. It's been running for a while... but I'm actually not seeing anything show up on the target side. Does it dump locally first and then pipe over? Here are the details of the happenings: http://pastebin.com/fEm3uJqy

pg_dump -v -C mls | psql -h db-blob04 -d mls -p 5444 -U postgres

Perhaps I will kill this eventually and try the timings you suggest with just the data.

Thoughts/comments are always welcome....

On Fri, May 27, 2016 at 8:43 PM Greg Sabino Mullane <greg@xxxxxxxxxxxx> wrote:
> To move the DB, you are suggesting something like this:
> pg_dump -h dbms11 -U postgres -C mls11 | psql -h localhost -d mls11
Basically yes.
> I'm not familiar with removing/adding indexes (I'm not a DBA, just trying
> to pretend to be one for this project). Can you elaborate on what might I
> need to do there?
It basically means doing a DROP INDEX foobar; for each index on the new
database, copying the data over, and then doing CREATE INDEX CONCURRENTLY foobar ...
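For one hypothetical index on a hypothetical table, that looks roughly like:

DROP INDEX foobar;
-- copy the data over, then rebuild the index without blocking normal access:
CREATE INDEX CONCURRENTLY foobar ON mytable (mycolumn);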
You mentioned that a pg_dump and psql restore takes longer than your 2 hour
window, but a lot of that time may simply be the index creations. You should
test out how long your biggest table takes by doing this:
* Copy the schema only to the new server:
pg_dump mls11 -h dbms11 --schema-only -C | psql
* Pick your largest table on the new server, and drop all indexes,
triggers, and constraints on it. Then time copying the data:
time pg_dump mls11 -h dbms11 --data-only -t foobar | psql mls11 -h newhost
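(As an aside, to spot your largest table, something like this on the old, populated server works; then \d it in psql to see its indexes, triggers, and constraints:)

SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_relation_size(oid) DESC
LIMIT 5;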
Timing that copy should give you a better indication of the bare minimum time needed
for that table. If you can find a newer version of pg_dump, you can do
all of the above a lot easier like so:
pg_dump mls11 -h dbms11 --section=pre-data -C | psql
time pg_dump mls11 -h dbms11 --section=data | psql mls11
This copies all the tables, and prevents the indexes and foreign keys from
being created. If that comes under your 2 hour window, you can at least have
a usable production database, and then start adding the indexes and foreign keys
back in. There are some further tricks one can do to speed up the transfer time,
but this will get you in the basic ballpark.
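With the newer pg_dump, adding the indexes and foreign keys back in should just be the post-data section, along the lines of:

pg_dump mls11 -h dbms11 --section=post-data | psql mls11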
(It should be noted that Postgres 8.3 is extremely old and completely
unsupported. This difficulty in migrating to a new server is unlikely to be
the last problem it causes you. You may even want to push for
a migration to 9.5 if you can, as that will also incur the same migration
timings as moving to a new 8.3 server, but at the end of the day you will
have a shiny 9.5 database.)
If that transfer is still over the 2 hour window, you will have to look into
a trigger based solution that can handle such an old version (which basically
means Slony or Bucardo). Even if it cannot copy all of the tables, it may be
able to do some of them, and then you can use pg_dump | psql for the rest.
--
Greg Sabino Mullane greg@xxxxxxxxxxxx
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201605272040
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8