On Fri, Nov 8, 2013 at 5:09 AM, Victor Hooi <victorhooi@xxxxxxxxx> wrote:
> They think that it might be limited by the network, and how fast the
> PostgreSQL server can push the data across the internet. (The Postgres
> server and the box running the query are connected over the internet).

You previously said you had 600MB, over the internet. Is it a very fat pipe? Because otherwise the limiting factor is probably not the speed at which Postgres can push the results, but the throughput of your link.

If, as you stated, you need a single transaction to get a 600MB snapshot, I would recommend dumping it to disk, compressing on the fly (you should easily get a four- or five-fold reduction on a CSV file with any decent compressor), and then sending the file. If you do not have disk space for the dump but can run programs near the server, you can still stream the output and compress it on the fly. If you have neither of those but do have space for a spare table, use a SELECT INTO, paginate over that table, and drop it afterwards.

Or just look at the configuration and allow longer query times; if your application really NEEDS two-hour queries, they can be enabled. But in any case, holding a long transaction open over the internet does not seem like a good idea to me.

Francisco Olarte

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
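As a follow-up, a minimal sketch of the "dump compressed to disk, then ship the file" approach. Against a real server it would be a psql/gzip pipeline; the host, database, and table names below are placeholders, and here a generated CSV stream stands in for the psql output so the pipeline itself can be run anywhere:

```shell
#!/bin/sh
# With a live server the dump step would look something like
# (connection details and table name are assumptions):
#
#   psql "host=dbhost dbname=mydb" \
#     -c "\copy (SELECT * FROM big_table) TO STDOUT WITH (FORMAT csv)" \
#     | gzip > snapshot.csv.gz
#
# Stand-in: generate a sample CSV file instead of querying Postgres.
seq 1 50000 | awk '{print $1 ",row_" $1 ",2013-11-08 05:09:00"}' > snapshot.csv

# Compress on the fly (here from the file; in the pipeline above, from stdin).
gzip -c snapshot.csv > snapshot.csv.gz

orig=$(wc -c < snapshot.csv)
comp=$(wc -c < snapshot.csv.gz)
echo "original: $orig bytes, compressed: $comp bytes"
```

The same `gzip` stage works unchanged whether it reads a file or the psql stream, which is why the no-disk variant (compress near the server, send the stream) costs nothing extra to set up.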