On Thu, Jun 14, 2012 at 11:15 AM, Alex Lai <mlai@xxxxxxxxxx> wrote:
> My host froze up after I submitted the following query, which prevented me
> from ssh-ing to the host.
> I was unable to psql and submit pg_cancel_backend. The tables have over 20
> million rows.
> Does dblink use too much resource on the host when joining large tables?
> Hope someone can give me a suggestion.
>
> CREATE OR REPLACE VIEW missing_archiveset_in_mds_ops
>   (filename, esdt, archiveset) AS
> select * from dblink('host=ops_host port=4001 user=omiops dbname=omiops',
>   'select filename, esdt, archiveset from
>    filemeta_archiveset join filemeta_common using(fileid)
>    join file using(fileid)') as t1(filename text, esdt text, archiveset int)
> where (filename, esdt, archiveset) not in (
>   select filename, esdt, archiveset
>   from dblink('host=ops_host port=4002 user=omiops dbname=metamine',
>     'select filename, esdt, archiveset from
>      file_archiveset join filemeta using(fileid)
>      join filename using(fileid)') as t2(filename text, esdt text, archiveset int));

It would be interesting to know exactly what triggered the condition that brought down the server, since dblink should not be able to do that. I'm assuming out of memory, since libpq (used on the dblink client side) is not memory-bounded: it buffers the entire remote result set in client memory before the query can process a single row.

9.2 will include new row-processing features that should drastically reduce dblink memory consumption and will probably prevent this from happening again. In the meantime, restructure both dblink calls to gather the data into separate local tables (temporary, if you can swing it), then create indexes in advance of the join.

merlin

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
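For the archives, the restructuring Merlin suggests might look roughly like the sketch below (untested; the index name is made up, and the connection strings and column types are copied from the original view definition). Each dblink call is materialized into its own temp table first, then the anti-join runs locally against an index. NOT EXISTS is used instead of NOT IN because it is NULL-safe and usually plans better on large sets:

```sql
-- Pull each remote result set into a local temp table.
CREATE TEMP TABLE mds_files AS
  SELECT * FROM dblink('host=ops_host port=4001 user=omiops dbname=omiops',
    'select filename, esdt, archiveset from
     filemeta_archiveset join filemeta_common using(fileid)
     join file using(fileid)') AS t1(filename text, esdt text, archiveset int);

CREATE TEMP TABLE metamine_files AS
  SELECT * FROM dblink('host=ops_host port=4002 user=omiops dbname=metamine',
    'select filename, esdt, archiveset from
     file_archiveset join filemeta using(fileid)
     join filename using(fileid)') AS t2(filename text, esdt text, archiveset int);

-- Index the probe side before joining, and update planner stats.
CREATE INDEX metamine_files_idx
  ON metamine_files (filename, esdt, archiveset);
ANALYZE mds_files;
ANALYZE metamine_files;

-- The anti-join now runs entirely locally.
SELECT m.filename, m.esdt, m.archiveset
FROM mds_files m
WHERE NOT EXISTS (
  SELECT 1
  FROM metamine_files x
  WHERE x.filename = m.filename
    AND x.esdt = m.esdt
    AND x.archiveset = m.archiveset);
```

The peak memory cost is then just libpq buffering one remote result at a time, rather than both result sets plus the join state at once.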