I have been using PostgreSQL (currently 7.4.7) for several years now and am very happy with it, but I run a website that has recently had a surge in traffic and I am starting to see some performance problems (not necessarily with PostgreSQL itself).
The original website ran on a single machine, a dual 1.4 GHz Dell 1650. I moved the database off this machine and onto a dual Opteron 248 with two SATA hard disks mirrored using software RAID; the Apache server remains on the small machine along with a Squid proxy. I have also started to aggressively cache most requests to the database, and using Squid and memcached I have cut the requests hitting the database by about 65%. I am looking to take this to about 80% over the next few weeks. The problem is that the database has more than doubled in size over the same period and looks likely to grow further.
The database has been allocated 2 GB of shared buffers, and I have recently tweaked most of the settings in the config file to see if I could squeeze out any more performance, but I have seen very little gain for the various types of queries I am running.
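For reference, since 7.4 sizes shared_buffers in 8 kB buffers, I believe 2 GB works out to roughly:

shared_buffers = 262144        # 262144 buffers * 8 kB = 2 GB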
It would appear that the only alternatives are a new machine with a better disk subsystem or a large disk array, putting more RAM in the Opteron machine (4 GB fitted, 16 GB max), or buying another machine with built-in U320 SCSI, e.g. an HP ProLiant DL380 or Dell 2850.
Some indication of current performance follows. I know these statements are hardly indicative of a full running application and everything that goes with it, but I would be very interested to hear whether anyone with a similar setup is able to squeeze a lot more out of PostgreSQL. From what I can see, the numbers look OK for the hardware I am running on, which suggests it is not PostgreSQL that is the problem.
Inserting 1 million rows into the following table, using raw insert statements:
 Column |          Type          | Modifiers
--------+------------------------+-----------
 id     | integer                |
 data   | character varying(100) |
where "data" has an average of 95 characters.
23 minutes 12 seconds.
Wrapping this in a transaction:
1 minute 47 seconds.
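For clarity, the two runs were along these lines (the table name and values here are just placeholders; the real data comes from the application):

-- one transaction per INSERT (autocommit)
INSERT INTO insert_test (id, data) VALUES (1, 'ninety-five characters of data...');
INSERT INTO insert_test (id, data) VALUES (2, 'ninety-five characters of data...');
-- ... repeated for 1 million rows

versus

-- all 1 million rows in a single transaction
BEGIN;
INSERT INTO insert_test (id, data) VALUES (1, 'ninety-five characters of data...');
-- ... repeated for 1 million rows
COMMIT;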
Select from the following table.
Table "public.test"
Column | Type | Modifiers
text | character varying(50) | not null
id | integer | not null
num | integer | default 0
Indexes:
"test_pkey" primary key, btree (text, id)
"test_id_idx" btree (id)
"test_text_idx" btree (text)
select count(*) from test;
count
----------
14289420
(1 row)
# select * from test where text = 'uk' ;
Time: 1680.607 ms
The first run has to pull the data into RAM, hence the delay. This delay has a serious impact on users waiting in the web application.
# select * from test where text = 'uk' ;
Time: 477.739 ms
After it is in RAM.
select count(*) from test where text = 'uk' ;
count
--------
121058
(1 row)
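I can also post EXPLAIN ANALYZE output for the cold and warm runs if that would be useful, i.e. the plans for:

EXPLAIN ANALYZE SELECT * FROM test WHERE text = 'uk';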
The website has a fairly high volume of inserts and deletes, so I use pg_autovacuum to keep things reasonably clean. However, I find that every couple of weeks performance degrades so much that I need to do a VACUUM FULL, which takes a long time and cripples the database. I have read in the docs that you should rarely need VACUUM FULL, but in practice I am finding this is not the case, which might suggest that I have something set wrong in my config file.
max_fsm_pages = 500000 # I am thinking this might be a bit low.
max_fsm_relations = 1000
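If it helps, I can also run a database-wide vacuum in verbose mode and post the free space map summary it prints at the end, which I think shows the total pages needed versus the FSM size allocated by the settings above:

VACUUM VERBOSE;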
Any pointers to better hardware or recommendations on settings gladly received.
Regards,
Harry Jackson.
--
http://www.hjackson.org
http://www.uklug.co.uk