"Mindaugas" <ml@xxxxxxxxxxx> writes: > I execute simple query "select * from bigtable where From='something'". > Query returns like 1000 rows and takes 5++ seconds to complete. As you pointed out that's not terribly slow for 1000 random accesses. It sounds like your drive has nearly 5ms seek time which is pretty common. What exactly is your goal? Do you need this query to respond in under a specific limit? What limit? Do you need to be able to execute many instances of this query in less than 5s * the number of executions? Or do you have more complex queries that you're really worried about? I do have an idea of how to improve Postgres for this case but it has to wait until we're done with 8.3 and the tree opens for 8.4. > Ideas for improvement? Greenplum or EnterpriseDB? Or I forgot something > from PostgreSQL features. Both Greenplum and EnterpriseDB have products in this space which let you break the query up over several servers but at least in EnterpriseDB's case it's targeted towards running complex queries which take longer than this to run. I doubt you would see much benefit for a 5s query after the overhead of sending parts of the query out to different machines and then reassembling the results. If your real concern is with more complex queries they may make sense though. It's also possible that paying someone to come look at your database will find other ways to speed it up. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! ---------------------------(end of broadcast)--------------------------- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to majordomo@xxxxxxxxxxxxxx so that your message can get through to the mailing list cleanly