Re: New server to improve performance on our large and busy DB - advice?

Carlo Stonebanks wrote:
> 1) Which RAID level would you recommend

It looks like you stepped over a critical step, which is "will the server have a well-performing RAID card?". Your whole upgrade could underperform if you get that part wrong. It's really important to nail that down, and to benchmark to prove you got what you expected from your hardware vendor.

> 3) If we were to port to a *NIX flavour, which would you recommend? (which support trouble-free PG builds/makes please!)

The only platform I consider close to trouble-free, as far as the PG builds working without issues, is RHEL/CentOS, due to the maturity of the PGDG yum repository and how up to date it's kept. Every time I wander onto another platform, I find the lag and care taken in packaging PostgreSQL to be at least a small step down from there.

> 4) Is this the right PG version for our needs?

8.4 removes the fixed-size free space map, which takes away a common source of unexpected performance issues: overflowing max_fsm_pages one day. If you're going to deploy 8.3, you need to be more careful to monitor the whole VACUUM process; it's easier to ignore in 8.4 and still get by OK. As far as general code stability goes, I think it's a wash at this point. You might discover a bug in 8.4 that causes a regression, but I think you're just as likely to run into a situation that 8.3 handles badly that's improved in 8.4. Hard to say which will work out better in a really general way.
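If you do stay on 8.3, a sketch of what that monitoring might look like (the ORDER BY trick assumes you just want neglected tables to sort first):

```sql
-- 8.3 only: the tail of a database-wide VACUUM VERBOSE reports how
-- many FSM pages are needed; if that exceeds max_fsm_pages, dead
-- space starts accumulating silently.
VACUUM VERBOSE;

-- Works on 8.3 and 8.4: see when each table was last vacuumed, so you
-- notice tables autovacuum is neglecting (never-vacuumed ones first).
SELECT relname, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY last_autovacuum NULLS FIRST;
```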

> We believe our requirements are exceptional, and we would benefit immensely from setting up the PG planner to always favour index-oriented decisions - which seems to contradict everything that PG advice suggests as best practice.

Pretty much everyone thinks their requirements are exceptional. It's funny how infrequently that's actually true. The techniques that favor index use aren't that unique: collect better stats, set basic parameters correctly, adjust random_page_cost, investigate plans that don't do what you want to figure out why. It's easy to say there's something special about your data rather than follow fundamentals here; I'd urge you to avoid doing that. The odds are better than most people think that the real issue is you're feeding the optimizer bad data, which brings us to:
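Those fundamentals look something like this (table and column names are made up for illustration; the settings are starting points, not recommendations):

```sql
-- Raise the stats target on a column the planner misestimates,
-- then re-analyze so the new histogram actually gets built.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
ANALYZE orders;

-- If the working set is mostly cached, random I/O isn't really 4x the
-- cost of sequential; try lowering this per-session before committing
-- the change to postgresql.conf.
SET random_page_cost = 2.0;

-- Compare the planner's estimated row counts against the actual ones
-- to find out where its input data is wrong.
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
```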

> Current non-default conf settings are:

I don't see effective_cache_size listed there. If that's at the default, I wouldn't be surprised if you're seeing sequential scans instead of indexed ones far too often.
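A postgresql.conf sketch of what I mean; the 16GB server and the 12GB figure are assumptions, so scale to your hardware:

```
# effective_cache_size is only a planner hint -- nothing is allocated.
# Set it to roughly shared_buffers plus the OS filesystem cache; on a
# dedicated 16GB box something like 3/4 of RAM is a common rule of thumb.
effective_cache_size = 12GB
```

With the default (very small) value, the planner assumes almost nothing is cached, which biases it toward sequential scans.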


> max_connections = 200
> work_mem = 512MB

This is a frightening combination, by the way: work_mem is allocated per sort or hash operation, not per connection, so 200 clients each running a single big sort could try to grab 100GB of RAM between them.
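A more defensive sketch, with figures that are illustrative rather than a recommendation:

```
# Each of up to max_connections backends can allocate work_mem per sort
# or hash node in a plan, so keep the worst case well under RAM:
# 100 connections x a few x 16MB is manageable; 200 x 512MB is not.
work_mem = 16MB          # raise per-session (SET work_mem) for big reports
max_connections = 100    # front it with a connection pooler instead
```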

--
Greg Smith    2ndQuadrant   Baltimore, MD
PostgreSQL Training, Services and Support
greg@xxxxxxxxxxxxxxx  www.2ndQuadrant.com


--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
