As promised, I ran a small benchmark: 8 empty tables are filled with
100k rows each within 8 transactions (somewhat typical of my
application). The test machine has 4 cores, 64 GB RAM and RAID1 10k
drives for data.
# INSERTs into a TEMPORARY table:
[joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml
real 3m18.242s
user 1m59.074s
sys 1m51.001s
# INSERTs into a standard table:
[joachim@testsrv scaling]$ time pb query -d scaling_qry_1.xml
real 3m35.090s
user 2m5.295s
sys 2m2.307s
Thus, there is a slight hit of about 10% (which may even be within
measurement variation) - your mileage may vary.
Usually WAL causes a much larger performance hit than this.
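For a quick check outside the application, the two variants can be timed directly in psql with \timing enabled. This is a minimal sketch; the table names and row count are illustrative, not from the original benchmark:

```sql
-- Run each statement with \timing on in psql and compare.

-- Temporary table: contents are not WAL-logged.
CREATE TEMPORARY TABLE tmp_load (n integer);
INSERT INTO tmp_load SELECT n FROM generate_series(1, 100000) AS n;

-- Regular table: every insert goes through WAL.
CREATE TABLE perm_load (n integer);
INSERT INTO perm_load SELECT n FROM generate_series(1, 100000) AS n;
```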
Since the following command:
CREATE TABLE tmp AS SELECT n FROM generate_series(1,1000000) AS n
which inserts 1M rows takes 1.6 seconds on my desktop, your 800k-row
INSERT taking more than 3 minutes is a bit suspicious unless:
- you have huge fields that need TOASTing; in this case TOAST compression
will eat a lot of CPU and you're benchmarking TOAST, not the rest of the
system
- you have a non-indexed foreign key
- some other reason?
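On the non-indexed foreign key point: one way to see which foreign-key constraints a table carries (so each referencing column can be checked for an index) is to query the catalog. A sketch, with a placeholder table name:

```sql
-- List foreign-key constraints on a table (replace my_table).
-- PostgreSQL does not create indexes on referencing columns
-- automatically, so each one must be checked by hand.
SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contype = 'f'
  AND conrelid = 'my_table'::regclass;
```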
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance