I've been researching performance metrics and tuning for a few weeks now. I'm trying to determine whether the time it takes to process my queries is reasonable, and I'm having a hard time judging that on my own. I know Postgres performance depends heavily on hardware and settings, and I understand how difficult that makes this to answer. Still, I was hoping to get some feedback based on my results.
The database is running on a dual-core 2GHz Opteron processor with 8GB of RAM. The drives are 10K RPM 146GB disks in RAID 5 (I've read RAID 5 is bad for Postgres, but moving the database to another server didn't change performance at all). Some of the key parameters from postgresql.conf are:
max_connections = 100
shared_buffers = 16MB
work_mem = 64MB
everything else is set to the default
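
In case the effective values matter, they can be confirmed from a psql session with SHOW, which just echoes back what the running server is actually using:

SHOW max_connections;
SHOW shared_buffers;
SHOW work_mem;
SHOW server_encoding;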
One of my tables has 660,000 records, and doing a SELECT * from that table (without any joins or sorts) takes 72 seconds. Ordering the table by 3 columns almost doubles that, to an average of 123 seconds. To me those numbers are crazy slow, and I don't understand why the queries are taking so long. The tables are UTF-8 encoded and contain a mix of languages (English, Spanish, etc.). I'm running the query from pgadmin3 on a remote host. The server has nothing else running on it except the database.
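
To be concrete, the two statements are essentially just these (my_table and the column names are placeholders for the real ones):

SELECT * FROM my_table;

SELECT *
FROM my_table
ORDER BY col_a, col_b, col_c;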
As a test I tried splitting the data across a number of other tables. I ran 10 queries (one per table) combined with UNION ALL. This was even slower, averaging 103 seconds for the same generic select-all query.
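
That test looked roughly like this (again with placeholder names; the tables in between follow the same pattern):

SELECT * FROM my_table_01
UNION ALL
SELECT * FROM my_table_02
-- ... same pattern for my_table_03 through my_table_09 ...
UNION ALL
SELECT * FROM my_table_10;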
I'm convinced something is wrong; I just can't pinpoint where. I can provide any other information that would help. Any suggestions would be greatly appreciated.