Re: pg_dump performance

"Jared Mauch" <jared@xxxxxxxxxxxxxxx> writes:

> 	pg_dump is utilizing about 13% of the cpu and the
> corresponding postgres backend is at 100% cpu time.
> (multi-core, multi-cpu, lotsa ram, super-fast disk).
>...
> 	pg8.3(beta) with the following variances from default
>
> checkpoint_segments = 300        # in logfile segments, min 1, 16MB each
> effective_cache_size = 512MB    # typically 8KB each
> wal_buffers = 128MB                # min 4, 8KB each
> shared_buffers = 128MB            # min 16, at least max_connections*2, 8KB each
> work_mem = 512MB                 # min 64, size in KB

Fwiw those are pretty unusual numbers. Normally work_mem is much smaller than
shared_buffers: shared_buffers is a single allocation shared by all backends,
whereas work_mem can be allocated separately for every query (and for every sort
or hash within those queries). If you had ten queries running two sorts each,
this work_mem setting could consume 10GB.
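
As an aside (a sketch of a common pattern, not something taken from your config):
you can keep the global work_mem modest in postgresql.conf and raise it only in
the session that actually runs the big sort, e.g.:

    -- Keep the postgresql.conf default small (say 16MB) and override it
    -- per session only for the query that needs a large in-memory sort:
    SET work_mem = '512MB';   -- affects this session only
    -- ... run the expensive query here ...
    RESET work_mem;           -- drop back to the configured default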

Raising shared_buffers could improve your pg_dump speed. If all the data is
already in shared buffers it would reduce the time spent copying blocks between
the filesystem cache and Postgres's shared buffers.
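
If you want to see how often the backend finds blocks already in shared buffers
rather than asking the kernel for them, something like this (a rough sketch using
the standard pg_stat_database view) gives a per-database hit ratio:

    -- blks_hit = blocks found in shared_buffers, blks_read = blocks requested
    -- from the OS; a low ratio on a read-mostly box hints shared_buffers is small.
    SELECT datname,
           blks_hit,
           blks_read,
           round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
    FROM pg_stat_database;

Note that blks_read includes blocks served from the filesystem cache, so this only
shows the shared_buffers side of the double-buffering.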

What made you raise wal_buffers so high? I don't think it hurts but that's a
few orders of magnitude higher than what I would expect to help.
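
It might also be worth double-checking which values the server is actually running
with, in case some of the conf edits were never picked up (plain SHOW commands,
nothing exotic):

    -- Confirm the settings in effect for the running server:
    SHOW wal_buffers;
    SHOW shared_buffers;
    SHOW work_mem;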

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!

