Re: Performance of pg_dump on PGSQL 8.0

Out of curiosity, does anyone have any idea what the ratio of actual data size to backup size is if I use the custom format with -Z 0 compression, or the tar format?

Thanks.
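
[Editor's note: a minimal sketch of how one could measure that ratio directly, not from the thread; "mydb" and the paths are placeholders, and $PGDATA is assumed to point at the data directory.]

    # custom format with compression explicitly disabled
    pg_dump -Fc -Z 0 mydb > /backups/mydb.dump

    # tar format (does not support compression, so always uncompressed)
    pg_dump -Ft mydb > /backups/mydb.tar

    # compare both dumps against the on-disk data directory size
    du -sh /backups/mydb.dump /backups/mydb.tar "$PGDATA"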

On 6/14/06, Scott Marlowe <smarlowe@xxxxxxxxxxxxxxxxx> wrote:
On Wed, 2006-06-14 at 09:47, John E. Vincent wrote:
> -- this is the third time I've tried sending this and I never saw it get
> through to the list. Sorry if multiple copies show up.
>
> Hi all,

BUNCHES SNIPPED

> work_mem = 1048576 (I know this is high, but you should see some of our
> sorts and aggregates)

Ummm.  That's REALLY high.  You might want to consider lowering the
global value and then cranking it up on a case-by-case basis, like
during nighttime report generation.  As it stands, just one or two
queries could theoretically run your machine out of memory.  Put a "set
work_mem=1000000" in your script before the big query runs.
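
[Editor's note: a minimal sketch of that per-session override; the
database name and query are placeholders.  On 8.0, work_mem is measured
in kilobytes, and SET only lasts for the current session, so the global
setting can stay low.]

    psql -d reports <<'EOF'
    -- raise work_mem for this session only (~1 GB, value in KB)
    SET work_mem = 1000000;
    -- ... the big nightly reporting query goes here ...
    EOF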

> We're inserting around 3mil rows a night if you count staging, info, dim
> and fact tables. The vacuum issue is a whole other problem but right now
> I'm concerned about just the backup on the current hardware.
>
> I've got some space to burn so I could go to an uncompressed backup and
> compress it later during the day.

That's exactly what we do.  We run a normal uncompressed backup, and have
a script that gzips anything in the backup directory that doesn't end in
.gz.  If you've got space to burn, as you say, then use it for at least a
few days to see how it affects backup speed.
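
[Editor's note: a minimal sketch of that kind of script; the backup
directory path is hypothetical.]

    # compress any file in the backup directory not already gzipped
    find /var/backups/pgsql -type f ! -name '*.gz' -exec gzip {} \;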

Seeing as you're CPU bound, the bottleneck is most likely the
compression step of the backup, not the dump itself.



--
John E. Vincent
lusis.org@xxxxxxxxx
