On Wed, Jul 27, 2022 at 08:08 hubert depesz lubaczewski <depesz@xxxxxxxxxx> wrote:
On Tue, Jul 26, 2022 at 10:48:47AM -0700, Adrian Klaver wrote:
> On 7/26/22 9:29 AM, Ron wrote:
> > On 7/26/22 10:22, Adrian Klaver wrote:
> > > On 7/26/22 08:15, Rama Krishnan wrote:
> > > > Hi Adrian
> > > >
> > > >
>
> > > > What is size of table?
> > > >
> > > > I have two databases, for example:
> > > >
> > > > 01. cricket 320 GB
> > > > 02. badminton 250 GB
> > >
> > > So you are talking about an entire database not a single table, correct?
> >
> > In a private email, he said that this is what he's trying:
> > Pg_dump -h endpoint -U postgres Fd - d cricket | aws cp -
> > s3://dump/cricket.dump
> >
> > It failed for obvious reasons.
> From what I gather it did not fail, it just took a long time. Not sure
> adding -j to the above will improve things; pretty sure the choke point is
> still going to be aws cp.
It's really hard to say what is happening, because the command, as shown,
wouldn't even work.
Starting from Pg_dump vs. pg_dump, the space between `-` and `d`, "Fd" as
an argument, and even the idea that you *can* make -Fd dumps to stdout and
pipe them to aws cp (a directory-format dump writes to a directory, so it
can't go to stdout at all).
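For reference, a pipeline along these lines can work if a single-file format is used instead of -Fd. A minimal sketch, reusing the endpoint, database name, and S3 path from the quoted command (all of which are placeholders from that message):

```
# Custom-format (-Fc) dumps go to stdout, unlike directory format (-Fd),
# so they can be streamed straight into the AWS CLI.
# "-" tells `aws s3 cp` to read the object body from stdin.
pg_dump -h endpoint -U postgres -Fc -d cricket \
  | aws s3 cp - s3://dump/cricket.dump
```

Note that parallel dumps (-j) require the directory format, so a streamed dump like this runs single-threaded; the upside is that nothing is staged on local disk.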
depesz
I believe it's worth looking at this project: https://github.com/dimitri/pgcopydb, since it tries to solve exactly this problem.
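To illustrate, pgcopydb can stream a database from one server to another without an intermediate dump file. A minimal sketch, with hypothetical connection strings (both URIs below are placeholders, not values from this thread):

```
# pgcopydb copies schema and data directly between two running servers,
# running table copies in parallel, so nothing is staged on disk or in S3.
pgcopydb clone \
  --source "postgres://postgres@endpoint/cricket" \
  --target "postgres://postgres@newhost/cricket" \
  --table-jobs 4
```

This sidesteps the choke point discussed above: there is no single serialized stream through aws cp, and the parallelism that -Fd/-j would provide is handled by pgcopydb itself.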