Hi David,
I have been running a few tests with 8.2.4, and here is what I have seen:
If fsync=off is not an option (and it should not be an option :-) ),
then the commit_delay=10 setting seems to help a lot in my OLTP runs.
Granted, it will delay your transactions a bit, but the gain is big:
under high load the WAL writes end up being larger, and I got a good
boost in performance from that change (IIRC it was about 6-10%
depending on load and contention). So that might help out.
Curiously, I did spend some time looking into why it helps with write
contention. At least on Solaris, my observation is that the WAL writes
then end up getting bigger than 8K (the block size). This meant an
overall reduction in IOPS on the filesystem that holds the logs, and
hence more IOPS capacity available to do more log writes. (Using an
EAStress-type benchmark, it ended up doing writes of somewhere between
128KB and 256KB on the logs, which was pretty fascinating, since the
benchmark does drive a fair amount of WAL writes and, without
commit_delay, the disks were saturated pretty quickly.)
Also, if the load is high, then the delay added to a transaction is
pretty much non-existent (at least that is what I observed with
commit_delay=10 and commit_siblings left at the default).
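Roughly, the settings in question look like this in postgresql.conf (a
sketch, not a tuned recommendation; commit_siblings shown at its
default):

    # postgresql.conf (8.2.x) -- settings discussed above
    fsync = on             # keep fsync on; commit_delay is the knob being tested
    commit_delay = 10      # microseconds to wait before flushing WAL, lets commits group up
    commit_siblings = 5    # default: need this many concurrent open transactions before delaying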
Of course, as others have already replied, 8.3's async commit helps on
top of commit_delay, so that's an option if the potential loss of a few
recently committed transactions is acceptable.
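In 8.3 that can even be scoped to individual transactions via the
synchronous_commit setting, along these lines (the table name is just a
placeholder):

    -- 8.3+: asynchronous commit for one transaction only;
    -- a crash may lose this commit but cannot corrupt the database
    BEGIN;
    SET LOCAL synchronous_commit TO off;
    INSERT INTO some_table VALUES (1);
    COMMIT;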
-Jignesh
David Brain wrote:
Hi,
I'd appreciate some assistance in working through what would be the
optimal configuration for the following situation.
We currently have one large DB (~1.2TB on disk) that essentially
consists of one table with somewhere on the order of 500 million rows.
This database takes daily inserts and is also used for some
semi-data-mining-type operations, so there are a fairly large number
of indices on the table. The hardware storing this DB (a software
RAID6 array) seems to be very IO-bound for writes, and this is
restricting our insert performance to ~50 TPS.
As we need to achieve significantly faster insert performance, I have
been considering splitting the table into 'new' and 'old' data, making
inserts into the 'new' table (which will also help, as there are really
one insert, an update and some selects involved in populating the
table), then moving the data over to the 'old' DB on a periodic basis.
There would be new hardware involved; I'm thinking of HW RAID 10 to
improve the write performance.
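Roughly the kind of split I have in mind (table and column names here
are just placeholders for illustration):

    -- 'new' table receives the insert/update traffic
    CREATE TABLE data_new (LIKE data_old INCLUDING DEFAULTS);
    -- periodic move of settled rows into the 'old' table
    INSERT INTO data_old SELECT * FROM data_new WHERE ready_to_move;
    DELETE FROM data_new WHERE ready_to_move;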
The question really is: is it best to use two separate servers and
databases (and have to come up with some copy process to move data from
one to another), or to just add the faster storage hardware to the
existing server and create a new tablespace for the 'new data' table on
that hardware? Doing the latter would enable us to move data more
easily from new to old (we can't use partitioning because there is some
logic involved in deciding when things need to be moved to 'old'). Are
there any global resources that make just adding the faster storage to
the existing box a bad idea (the WAL log springs to mind, although that
could be moved too) and that would make adding an additional server a
better idea?
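If we went the single-server route, I imagine the tablespace side would
look something like this (path and names purely illustrative):

    -- new tablespace on the faster (RAID 10) array
    CREATE TABLESPACE fast_ts LOCATION '/mnt/raid10/pg_tblspc';
    -- place the 'new data' table on it (names as in the earlier sketch)
    ALTER TABLE data_new SET TABLESPACE fast_ts;

which would keep the periodic new-to-old move a plain in-database
statement rather than a cross-server copy.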
Also, are there any settings that might boost our TPS on the existing
hardware (fsync=off isn't an option.. (-: )? I have already
significantly increased the various buffers, but that was mainly to
improve select performance.
Version of PostgreSQL is 8.2.3.
Thanks,
David.