Re: New server to improve performance on our large and busy DB - advice?

On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks
<stonec.register@xxxxxxxxxxxx> wrote:
> My client just informed me that new hardware is available for our DB server.
>
> . Intel Core 2 Quad
> . 48 GB RAM
> . 4 Disk RAID drive (RAID level TBD)
>
> I have put the ugly details of what we do with our DB below, as well as the
> postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB
> with very large tables and the server is always busy serving a constant
> stream of single-row UPDATEs and INSERTs from parallel automated processes.
>
> There are less than 10 users, as the server is devoted to the KB production
> system.
>
> My questions:
>
> 1) Which RAID level would you recommend

RAID-10 with a battery backed hardware caching controller.

> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)

That's probably the most stable choice out there for Windows.

> 3) If we were to port to a *NIX flavour, which would you recommend? (which
> support trouble-free PG builds/makes please!)

I'd parrot what Joshua Drake said here.  CentOS / RHEL / Debian / Ubuntu.

> 4) Is this the right PG version for our needs?

8.3 is very stable; update to the latest point release.  8.4 seems
good, but I've had, and still am having, problems with it crashing in
production.  Not often, maybe once every couple of months, but just
enough that I'm not ready to use it there yet.  And I can't reproduce
the same failure in testing, at least not yet.

> The details of our use:
>
> . These ETL processes typically run 10 at-a-time (i.e. in parallel)
> . We would like to run more, but the server appears to be the bottleneck
> . The ETL write processes are 99% single row UPDATEs or INSERTs.

Can you run the ETL processes in such a way that they batch many
inserts and updates per statement and per transaction?  That would
certainly speed things up.
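
For example, if each process currently issues one INSERT per row,
something like this (just a sketch; the table and column names are
made up) cuts round trips and per-statement overhead:

  -- one transaction, multi-row VALUES instead of N single-row INSERTs
  BEGIN;
  INSERT INTO audit_log (etl_run_id, source_id, action, created_at)
  VALUES (1, 101, 'update', now()),
         (1, 102, 'insert', now()),
         (1, 103, 'insert', now());  -- ...a few hundred rows per statement
  COMMIT;

Multi-row VALUES works from 8.2 on, and COPY is faster still if you
can stage the data that way.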

> . The size of the audit table makes it very difficult to manage, especially
> if we need to modify the design.

You might want to look into partitioning / inheritance if that would help.
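
On 8.3 that means inheritance-based partitioning.  A minimal sketch,
assuming a timestamp column and made-up names:

  -- parent stays (nearly) empty; each child holds one month
  CREATE TABLE audit_log_2010_01 (
      CHECK (created_at >= '2010-01-01' AND created_at < '2010-02-01')
  ) INHERITS (audit_log);

  -- with this on, queries that filter on created_at skip the
  -- non-matching children
  SET constraint_exclusion = on;

You also need a trigger or rule on the parent to route inserts, but
then retiring a month is one DROP TABLE instead of a massive DELETE.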

> . Because we query the audit table different ways to audit the ETL processes
> decisions, almost every column in the audit table is indexed.

This may or may not help.  If the part of the WHERE clause
referencing a column isn't very selective, an index on that column
won't be chosen anyway.  If you've got multiple columns in your WHERE
clause, the most selective ones will be used for an index scan and
the rest will be filtered out afterwards.  Look in
pg_stat_user_indexes for indexes that never get used and drop them
unless, of course, they're unique indexes.
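
Something like this shows the candidates (the column names are as in
8.3's pg_stat_user_indexes view):

  SELECT schemaname, relname, indexrelname, idx_scan
  FROM pg_stat_user_indexes
  ORDER BY idx_scan ASC
  LIMIT 20;

Anything still at idx_scan = 0 after a representative stretch of
production traffic is pure overhead on every single-row UPDATE and
INSERT.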

> . The maximum number of physical users is 10 and these users RARELY perform
> any kind of write
> . By contrast, the 10+ ETL processes are writing constantly

You may be well served by having two servers: one to write to, and a
slave that is used by the actual users.  Our Slony slaves have a much
easier time writing out their data than our master database does.

> . We find that internal stats drift, for whatever reason, causing row seq
> scans instead of index scans.

Yeah, this is a known problem with heavily updated tables and
recently inserted rows.  Cranking up autovacuum a bit can help, but
such tables often need special treatment: either adjust the autovac
analyze threshold values for just those tables, or run manual
ANALYZEs on them every couple of minutes.
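
On 8.3 the per-table knobs live in the pg_autovacuum system catalog
(the ALTER TABLE ... SET (autovacuum_...) syntax only shows up in
8.4).  Roughly, with a made-up table name:

  -- analyze this table after ~1% churn instead of the global 10%;
  -- -1 means "use the global setting"
  INSERT INTO pg_autovacuum
      (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
       anl_base_thresh, anl_scale_factor,
       vac_cost_delay, vac_cost_limit, freeze_min_age, freeze_max_age)
  VALUES ('audit_log'::regclass, true, 500, 0.2, 250, 0.01, -1, -1, -1, -1);

The low-tech alternative is a cron job that just runs ANALYZE
audit_log; every few minutes.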

> . So far, we have never seen a situation where a seq scan has improved
> performance, which I would attribute to the size of the tables

Not so much the size of the tables as the size of the request.  If
you were running aggregates across whole large tables, a seq scan
would definitely be the way to go.  If you're asking for one row, an
index scan should win.  Somewhere between those two, once a query
hits some decent percentage of the rows, the switch from index scan
to seq scan makes sense, and it's likely happening too early for you.
Look at random_page_cost and effective_cache_size for starters.
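
As a starting point in postgresql.conf (the numbers are illustrative;
tune them against EXPLAIN ANALYZE of your real queries):

  random_page_cost = 2.0        # default of 4.0 assumes little caching
  effective_cache_size = 32GB   # ~ OS cache + shared_buffers on a 48GB box

Lowering random_page_cost and telling the planner how much data is
likely cached both nudge it toward index scans without forcing
anything.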

> . We believe our requirements are exceptional, and we would benefit
> immensely from setting up the PG planner to always favour index-oriented
> decisions - which seems to contradict everything that PG advice suggests as
> best practice.

See the comment I made up there ^^^.  It's not about always using
indexes; it's about giving the planner the information it needs to
make the right choice.

> Current non-default conf settings are:
>
> autovacuum = on
> autovacuum_analyze_scale_factor = 0.1

You might want to lower the analyze scale factor if you're having
problems with bad query plans on fresh data.
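
For instance (the value is illustrative):

  autovacuum_analyze_scale_factor = 0.05   # analyze after 5% churn, not 10%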

> autovacuum_analyze_threshold = 250
> autovacuum_naptime = 1min
> autovacuum_vacuum_scale_factor = 0.2
> autovacuum_vacuum_threshold = 500
> bgwriter_lru_maxpages = 100
> checkpoint_segments = 64
> checkpoint_warning = 290
> datestyle = 'iso, mdy'
> default_text_search_config = 'pg_catalog.english'
> lc_messages = 'C'
> lc_monetary = 'C'
> lc_numeric = 'C'
> lc_time = 'C'
> log_destination = 'stderr'
> log_line_prefix = '%t '
> logging_collector = on
> maintenance_work_mem = 16MB
> max_connections = 200
> max_fsm_pages = 204800

The default tends to be low.  Run VACUUM VERBOSE to see if you're
overrunning the max_fsm_pages or max_fsm_relations settings.
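
Run it database-wide and check the last few lines of output, which
look roughly like this (the numbers here are made up):

  VACUUM VERBOSE;
  ...
  INFO:  free space map contains 190000 pages in 120 relations
  DETAIL:  A total of 250000 page slots are in use (including overhead).
  250000 page slots are required to track all free space.
  Current limits are:  204800 page slots, 1000 relations, ...

If the "required" number is above max_fsm_pages, dead space stops
getting reused and the tables bloat no matter how often you vacuum.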

> max_locks_per_transaction = 128
> port = 5432
> shared_buffers = 500MB
> vacuum_cost_delay = 100

That's REALLY REALLY high.  vacuum_cost_delay is in milliseconds, so
vacuum sleeps 100 ms every time it hits the cost limit.  You might
want to look at something in the 5 to 20 range.
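
E.g.:

  vacuum_cost_delay = 10   # ms of sleep per cost round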

> work_mem = 512MB

