Tory M Blue wrote:
> I've got some beefy hardware, but I have some tables that are over 57GB raw
> and end up at 140GB after indexes are applied. One index creation took
> 7 hours today. So it's time to dive in and see where I'm lacking and what
> I should be tweaking.
>
> I looked at pgtune again today and the numbers it's spitting out took me
> aback; they are huge. In all historical conversations and attempts, a few
> of these larger numbers netted reduced rather than better performance
> (but that was on older versions of Postgres).
>
> So I come here today to seek some affirmation that these numbers look good
> and that I should put them into my config, either staged or in one fell
> swoop.
>
> At the same time I will start migrating my config to the latest 9.3
> template...
>
> Postgres version: 9.3.4, Slony 2.1.3 (migrating to 2.2)
> CentOS 6.x, 2.6.32-431.5.1.el6.x86_64
> Big HP boxen.
>
> 32 cores, 256GB of RAM. The DB is roughly 175GB in size, but many tables
> have hundreds of millions of rows.
>
> The pgtune configuration that was spit out based on the information above:
>
> max_connections = 300

That's a lot, but it equals what you currently have, so it is probably OK.
It can have repercussions when combined with a large work_mem, though:
every backend can allocate that much memory, maybe even several times over
for a complicated query.

> shared_buffers = 64GB

That seems a bit on the large side. I would start with something like 4GB
and run (realistic) performance tests, doubling the value each time, and
see where you come out best. You can use the pg_buffercache contrib module
to see how your shared buffers are used.

> effective_cache_size = 192GB

That should be all the memory in the machine that is available to
PostgreSQL, so on a dedicated database machine it could be even higher.

> work_mem = 223696kB

That looks OK, but performance testing wouldn't hurt. Ideally you log
temporary file creation and set this parameter big enough that normal
queries don't need temp files, but low enough that the file system cache
still has some RAM left.

> maintenance_work_mem = 2GB

That's particularly helpful for your problem, index creation.

> checkpoint_segments = 32

Check. You want checkpoints to be time-triggered, so don't be afraid to go
higher if you get warnings, unless a very short crash recovery time is of
paramount importance.

> checkpoint_completion_target = 0.7

Check.

> wal_buffers = 16MB

That's fine too, although with 9.3 you might as well leave it at the
default; with that much RAM it will be autotuned to the maximum anyway.

> default_statistics_target = 100

That's the default value. Increase it only if you get bad plans because of
insufficient statistics.

Yours,
Laurenz Albe
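
P.S.: As a minimal sketch of the checks mentioned above (the query roughly
follows the example in the pg_buffercache documentation; the 100MB threshold
for temporary file logging is only an illustrative choice):

    -- See which relations of the current database occupy the most
    -- shared buffers (each buffer is an 8kB page).
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS buffered
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase = (SELECT oid FROM pg_database
                           WHERE datname = current_database())
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 20;

    # postgresql.conf: log checkpoint activity, and log the creation of
    # temporary files larger than 100MB (the value is in kB; 0 logs all).
    log_checkpoints = on
    log_temp_files = 102400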