Hengky Liwandouw <hengkyliwandouw@xxxxxxxxx> wrote:

> On Nov 24, 2013, at 11:21 PM, Kevin Grittner wrote:
>> Hengky Lie <hengkyliwandouw@xxxxxxxxx> wrote:
>>
>>> This query takes a long time to process. It takes around 48
>>> seconds to calculate about 690 thousand records.
>>>
>>> Is there any way to make the calculation faster?
>>
>> Quite possibly -- that's about 70 microseconds per row, and even
>> fairly complex queries can often do better than that.
>
> After reading the link you gave to me, changing shared_buffers to
> 25% (512MB) of available RAM and effective_cache_size to 1500MB
> (about 75% of available RAM) makes the query run very fast.
> Postgres only needs 1.8 seconds to display the result.

That's about 2.6 microseconds per row. Given the complexity of the
query, it might be hard to improve on that. A simple tablescan that
returns all rows generally takes 1 to 2 microseconds per row on the
hardware I generally use.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
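
For reference, the settings described above correspond roughly to the
following postgresql.conf entries -- a minimal sketch, assuming about
2 GB of total RAM (inferred from 512MB being 25% and 1500MB being
roughly 75%):

    # Rough tuning for a machine with ~2 GB of RAM (assumed figure)
    shared_buffers = 512MB          # ~25% of available RAM
    effective_cache_size = 1500MB   # ~75% of available RAM

Note that changing shared_buffers requires a server restart, while
effective_cache_size only needs a configuration reload to take effect.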