Re: Tuning PostgreSQL for very large database

On 11/06/11 8:51 AM, René Fournier wrote:
Just wondering what I can do to squeeze out more performance of my database application? Here's my configuration:


    - Mac mini server
    - Core i7 quad-core at 2GHz
    - 16GB memory
    - Dedicated fast SSD (two SSDs in the server)
    - Mac OS X 10.7.2 (*not* using OS X Server)
    - PostgreSQL 9.0.5
    - PostGIS 1.5.3
    - Tiger Geocoder 2010 database (from build scripts at
      http://svn.osgeo.org/postgis/trunk/extras/tiger_geocoder/tiger_2010/)
    - Database size: ~90GB

I should say, this box does more than PostgreSQL geocoding/reverse-geocoding, so reasonably only half of the memory should be allotted to PostgreSQL.

Coming from MySQL, I would normally play with the my.cnf, using my-huge.cnf as a start. But I'm new to PostgreSQL and PostGIS (w/ a big database), so I was wondering if anyone had suggestions on tuning parameters (also, which files, etc.) Thanks!

postgresql.conf in the postgres 'data' directory is the only postgresql file you should have to touch. You -will- also need to increase the OS X shmmax and shmall kernel parameters (I'd set them to the equivalent of 4 gigabytes each; note that on most 'nix systems shmall is NOT in bytes, it's in memory pages). I can't help you with the exact steps, as I only know how to do it on linux/solaris/aix...
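For what it's worth, I believe on OS X these live under kern.sysv.* and can be set in /etc/sysctl.conf so they survive a reboot; treat the names and values below as a sketch to verify against the postgres docs for your OS X version, not something I've tested myself:

    # /etc/sysctl.conf on OS X (a guess on my part; verify before relying on it)
    # shmmax is in bytes; shmall is in 4kB pages, so 1048576 pages = 4GB
    kern.sysv.shmmax=4294967296
    kern.sysv.shmall=1048576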

In postgresql.conf, given what you've said above, and assuming your application uses relatively few concurrent connections (say, no more than a few dozen), I'd try something like...

    shared_buffers = 1024MB
    maintenance_work_mem = 512MB
    work_mem = 128MB
    effective_cache_size = 4096MB

If you expect 100s of concurrent connections, reduce work_mem accordingly; it's allocated per sort/hash operation, so every connection can claim it, possibly more than once per query.
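Rough arithmetic, assuming (optimistically) one sort or hash per connection at a time; in reality a single complex query can use several work_mem-sized allocations at once, so this is a floor, not a ceiling:

    128MB work_mem x 200 connections = ~25.6GB   (far beyond your ~8GB share)
     16MB work_mem x 200 connections = ~3.2GB    (much more comfortable)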

I'm assuming your database workload is read-mostly, and that you're not going to be doing a high rate of transactional operations with updates/inserts. if you /are/ getting into 100s/1000s of write transactions/second, then you'll want to watch your postgres logfiles and increase...

    checkpoint_segments = **

such that ** is large enough that you no longer get any checkpoints-too-frequent warnings in the logs. On one heavy OLTP transaction server recently, I had to raise the default of 3 to something like 100 to get to a happy place. Increasing wal_buffers is probably a good idea too in these cases, but I suspect this doesn't apply to you.
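For reference, the warning I'm talking about looks roughly like this in the postgres log (wording from memory, so don't grep for it verbatim), and below it is the sort of starting point I'd try on a genuinely write-heavy box (numbers to tune from, not gospel):

    LOG:  checkpoints are occurring too frequently (14 seconds apart)
    HINT:  Consider increasing the configuration parameter "checkpoint_segments".

    checkpoint_segments = 32      # default is 3; raise until the warnings stop
    wal_buffers = 16MB            # 9.0's default is tiny (64kB, if I recall right)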


--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




