I just got a new server from Dell two weeks ago as well. I went with more memory, a slower CPU, and smaller hard drives. I have not run pgbench on it yet.
Dell PE 2950 III
2x quad-core CPUs, 1.866 GHz
16 GB of RAM
8x 73 GB 10K RPM SAS hard drives:
  2 drives mirrored (RAID 1) for OS, binaries, and WAL
  6 drives in a RAID 10
Dual gigabit Ethernet
OS: Ubuntu 7.10
-----------------------------------------------
Version 1.03
               ------Sequential Output------ --Sequential Input- --Random-
               -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine   Size  K/sec %CP  K/sec %CP  K/sec %CP  K/sec %CP  K/sec %CP  /sec %CP
PriData 70000M  51030  90 107488  29  50666  10  38464  65 102931   9 268.2   0
               ------Sequential Create------ --------Random Create--------
               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
         files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
            16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
PriData,70000M,51030,90,107488,29,50666,10,38464,65,102931,9,268.2,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++
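(For reference, output in this format typically comes from bonnie++ 1.03 invoked along these lines; the mount point and user here are placeholders, not what I actually typed:

$ bonnie++ -d /path/to/raid10/mount -s 70000 -u postgres
)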
The difference in our results is interesting. What are the settings on your RAID card? I have the cache turned on with read-ahead.
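If it is a PERC (LSI MegaRAID-based) controller, one way to check the cache policy is with MegaCli; this is just a sketch, and the binary name and install path vary by distribution:

$ # show logical drive properties, including write-back and read-ahead policy
$ MegaCli -LDInfo -Lall -aALL
$ # example: enable write-back and adaptive read-ahead (only sensible with a healthy BBU)
$ MegaCli -LDSetProp WB -LAll -aAll
$ MegaCli -LDSetProp ADRA -LAll -aAll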
---- Message from Craig James <craig_james@xxxxxxxxxxxxxx> at 03-12-2008 09:55:18 PM ------
I just received a new server and thought benchmarks would be
interesting. I think this looks pretty good, but maybe there are
some suggestions about the configuration file. This is a web app,
a mix of read/write, where the writes tend to be "insert into ...
(select ...)" statements; the resulting insert is on the order of
100 to 10K rows of two integers. An external process also uses a
LOT of CPU power along with each query.
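Schematically, those writes look something like this (table and column names are invented for illustration):

  insert into pair_results (id_a, id_b)
  select a.id, b.id
  from set_a a
  join set_b b on a.key = b.key;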
Thanks,
Craig
Configuration:
Dell 2950
8 CPU (Intel 2GHz Xeon)
8 GB memory
Dell Perc 6i with battery-backed cache
RAID 10 of 8x 146GB SAS 10K 2.5" disks
Everything (OS, WAL, and databases) is on the one RAID array.
Diffs from original configuration:
max_connections = 1000
shared_buffers = 400MB
work_mem = 256MB
max_fsm_pages = 1000000
max_fsm_relations = 5000
wal_buffers = 256kB
effective_cache_size = 4GB
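A note on those settings: work_mem is allocated per sort or hash operation, per backend, so with max_connections = 1000 the theoretical worst case is roughly 1000 x 256MB = 256GB, far beyond the 8 GB of RAM on the box. One option is to keep the global value modest and raise it per session for the big queries:

  SET work_mem = '256MB';  -- per-session override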
Bonnie output (slightly reformatted)
------------------------------------------------------------------------------
Delete files in random order...done.
Version 1.03
               ------Sequential Output------ --Sequential Input- --Random-
               -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
          Size  K/sec %CP  K/sec %CP  K/sec %CP  K/sec %CP  K/sec %CP  /sec %CP
           16G  64205  99 234252  38 112924  26  65275  98 293852  24 940.3   1
               ------Sequential Create------ --------Random Create--------
               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
         files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
            16  12203  95 +++++ +++  19469  94  12297  95 +++++ +++  15578  82
www.xxx.com,16G,64205,99,234252,38,112924,26,65275,98,293852,24,940.3,1,16,12203,95,+++++,+++,19469,94,12297,95,+++++,+++,15578,82
------------------------------------------------------------------------------
$ pgbench -c 10 -t 10000 -v test -U test
starting vacuum...end.
starting vacuum accounts...end.
transaction type: TPC-B (sort of)
scaling factor: 1
number of clients: 10
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
tps = 2786.377933 (including connections establishing)
tps = 2787.888209 (excluding connections establishing)
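For what it's worth, at scaling factor 1 all ten clients contend on the single row in pgbench_branches, which tends to understate what the hardware can do. Initializing with a larger scale factor usually gives a more representative number, e.g.:

$ pgbench -i -s 100 test -U test
$ pgbench -c 10 -t 10000 -v test -U test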