> max_connections = 100
> shared_buffers = 16MB
> work_mem = 64MB
> everything else is set to the default

OK, but what about effective_cache_size, for example?

Anyway, we need more information about the table itself - the number of
rows is nice, but it does not say how large the table is. The rows might
be small (say 100 B each) or large (say several kilobytes), and that
determines how much data has to be read. We need to know the structure of
the table, and the output of the following commands:

  ANALYZE table;
  SELECT relpages, reltuples FROM pg_class WHERE relname = 'table';
  EXPLAIN SELECT * FROM table;

> One of my tables has 660,000 records and doing a SELECT * from that table
> (without any joins or sorts) takes 72 seconds. Ordering the table based on
> 3 columns almost doubles that time to an average of 123 seconds. To me,
> those numbers are crazy slow and I don't understand why the queries are
> taking so long. The tables are UTF-8 encoded and contain a mix of languages
> (English, Spanish, etc.). I'm running the query from pgadmin3 on a remote
> host. The server has nothing else running on it except the database.
>
> As a test I tried splitting up the data across a number of other tables. I
> ran 10 queries (to correspond with the 10 tables) with a UNION ALL to join
> the results together. This was even slower, taking an average of 103
> seconds to complete the generic select all query.

Well, splitting the tables just to read all of them won't help - it makes
the problem even worse, due to the extra processing required (the UNION
ALL).

regards
Tomas
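
PS: One quick way to see how much data a SELECT * actually has to read and
ship to the client is the on-disk size of the table. A minimal sketch
('table' is a placeholder here, substitute the real table name):

  -- heap size alone, and total size including TOAST and indexes
  SELECT pg_size_pretty(pg_relation_size('table'))       AS heap_size,
         pg_size_pretty(pg_total_relation_size('table')) AS total_size;

Dividing the heap size by reltuples gives a rough average row width, which
tells you whether 660,000 rows means tens of megabytes or several gigabytes
going over the wire.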
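
And to separate server-side execution time from the time pgadmin3 spends
transferring and rendering the rows, EXPLAIN ANALYZE runs the query on the
server but throws the result set away:

  EXPLAIN ANALYZE SELECT * FROM table;

If that finishes in a couple of seconds, the 72 seconds are being spent on
the network transfer and in the client, not in the query itself.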