
Re: Question about loading up a table

Hi

So, just to go over what I have:


Server A (this is the original PostgreSQL 9.2 server)

Server X and Server Y ... PostgreSQL 9.6 in a cluster - streaming replication with hot standby.


I have 2 tables taking about 2.5 TB of disk space.

I want to get the data from A into X, and X will replicate it into Y.


I am currently on X, using this command:

pg_dump -U <USER> -h <Server A> -t BIGTABLE -a <DB> | sudo -u postgres -i psql -q <DB>;

This is taking a long time; it's been 2 days and I have transferred around 2 TB. This is just a test to see how long it takes and to populate my new UAT environment, so I will have to do it again.

The problem is time: the pg_dump process is single-threaded.
There are 2 routers in between A and X, but it's 10G networking - and my network graphs don't show much traffic.
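
One idea, sketched here with made-up details: since a single pg_dump/psql pipe streams the table in one session, you can sometimes get more throughput by splitting the table on its primary key and running several COPY pipes at once. The column name "id" and the range boundary below are hypothetical - adjust them to the real key, run each pipe in its own shell/screen session, and add more ranges as needed:

    # hypothetical split on a numeric primary key "id"; one pipe per range
    psql -U <USER> -h <Server A> <DB> \
      -c "\copy (SELECT * FROM BIGTABLE WHERE id < 1000000000) TO STDOUT" \
      | sudo -u postgres psql -q <DB> -c "\copy BIGTABLE FROM pstdin"

    psql -U <USER> -h <Server A> <DB> \
      -c "\copy (SELECT * FROM BIGTABLE WHERE id >= 1000000000) TO STDOUT" \
      | sudo -u postgres psql -q <DB> -c "\copy BIGTABLE FROM pstdin"

Two or three streams running together can get closer to filling the link, provided the disks on both sides keep up.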

Server X is still in use; there are still records being inserted into the tables.

How can I make this faster?

I could shut down server A and present the disks to server X. Could I load this up in PostgreSQL and do a table-to-table copy? I presume this would be faster ... is this possible? How do I get around the same DB name?
What other solutions do I have?
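
On the table-to-table idea: the database name only has to be unique within one cluster, so if A's data directory were started as a second instance on X (on another port), the identical DB name would not clash. From there something like postgres_fdw lets the 9.6 side pull rows directly. A rough sketch, where the host, port, and local schema name are placeholders of my own, not anything from the existing setup:

    -- run on server X (9.6)
    CREATE EXTENSION IF NOT EXISTS postgres_fdw;
    CREATE SERVER old_a FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'localhost', port '5433', dbname '<DB>');
    CREATE USER MAPPING FOR postgres SERVER old_a
        OPTIONS (user '<USER>', password '<PASSWORD>');
    CREATE SCHEMA olddb;
    IMPORT FOREIGN SCHEMA public LIMIT TO (bigtable)
        FROM SERVER old_a INTO olddb;
    -- plain INSERT ... SELECT, which can also be split into key ranges
    INSERT INTO bigtable SELECT * FROM olddb.bigtable;

Whether this beats a tuned COPY pipe is not a given; the main win is that the old instance would be local to X, so the data no longer crosses the network at all.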

Alex




On 1 August 2017 at 23:24, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
On Mon, Jul 31, 2017 at 11:16 PM, Alex Samad <alex@xxxxxxxxxxxx> wrote:
> Hi
>
> I double-checked and there is data going over - thought I would correct that.
>
> But it seems to be very slow.  Having said that, how do I / what tools do I
> use to check throughput?

Try the pg_current_xlog_location function on the slave?
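
For reference, a standby-side variant of that check (9.6 function names; these became pg_last_wal_*_lsn / pg_wal_lsn_diff in 10):

    -- run on the standby; sample it twice and diff "received" to get bytes/sec
    SELECT pg_last_xlog_receive_location() AS received,
           pg_last_xlog_replay_location()  AS replayed,
           pg_xlog_location_diff(pg_last_xlog_receive_location(),
                                 pg_last_xlog_replay_location()) AS replay_lag_bytes;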

