"Mustafa Korkmaz" <Musti.de@xxxxxx> writes:
> I am using PostgreSQL 7.4.5 on a Linux machine, and have the problem
> that a simple "SELECT field1, field2, field3 FROM a_table" statement
> takes 10 minutes to return an answer. The table has 750,000 rows. I
> also ran VACUUM on a_table before the statement, but it doesn't help
> at all. The table has an index on a timestamp field.
> What could be the problem, or is it normal that PostgreSQL can't
> handle this much data in an acceptable response time?

Well, an index is certainly not going to help if you don't have any
selection criteria that would reduce the quantity of data involved.

Ten minutes for 750K rows does seem like a long time. Mind you, if this
is all taking place on a single host, and you haven't much memory, then
it would make sense that:

a) Collecting the rows in the backend session might start swapping, and
then

b) Collecting the result set on the client side would place that same
amount of data in RAM again, further worsening the swapping.

You might run "top" while processing this; if it's chewing into swap
space while data is being transferred to the client, then the above
scenario might describe why you're seeing terrible performance.

The answer to this sort of problem tends to be to use a cursor, and have
the client fetch smaller numbers of rows at a time.
--
(format nil "~S@~S" "cbbrowne" "ntlug.org")
http://www.ntlug.org/~cbbrowne/
"We believe Windows 95 is a walking antitrust violation" -- Bryan Sparks
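
For what it's worth, here is a rough sketch of the cursor approach in
plain SQL, using the table and column names from the original question.
The cursor name ("big_scan") and batch size are made up; pick whatever
suits you. Note that a plain (non-WITH HOLD) cursor must live inside a
transaction:

```sql
BEGIN;

-- Declare a cursor over the full query; this does not materialize
-- the whole result set on either side of the connection.
DECLARE big_scan CURSOR FOR
    SELECT field1, field2, field3 FROM a_table;

-- Fetch rows in manageable batches; repeat until FETCH returns no rows.
FETCH 1000 FROM big_scan;
FETCH 1000 FROM big_scan;
-- ... and so on ...

CLOSE big_scan;
COMMIT;
```

That way only one batch at a time needs to sit in client memory, which
should keep you out of swap.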