On Sat, Oct 10, 2009 at 8:44 AM, Chris Kratz <chris.kratz@xxxxxxxxxxxxxx> wrote:
>>
>> alter function pg_table_is_visible(oid) cost 10;
>>
>> (You'll need to do it as superuser --- if it makes things worse, just
>> set the cost back to 1.)
>>
>> > Sometimes it does not match
>> > valid tables at all, and sometimes regex matching fails too ('\dt
>> > schema.*_*_*' intermittently flakes out if it returns a lot of matches).
>>
>> There are some arbitrary "LIMIT 1000" clauses in those queries, which
>> probably explains this ... but taking them out would likely cause
>> libreadline to get indigestion ...
>>
>> regards, tom lane
>
> We ran into this exact situation with a pg 8.3 database and a very large
> number of tables. psql would wait for 20 to 30 seconds if the user was
> unlucky enough to hit the tab key. After doing some research with query
> logging, explain analyze, and some trial and error, we came to the same
> conclusion. Altering the cost of the pg_table_is_visible function to 10
> fixed our performance problem immediately. It appears that when the cost
> was set to 1, the query optimizer first ran the function over the
> entire pg_class table. After increasing the cost, it only runs the
> function over the rows returned by the other items in the WHERE clause.

We have a large number of objects in our db, and this worked for me too! Thanks a lot.

As a side note, it also makes Slony's "create set" operations run very slowly as well. I'm guessing there's a similar trick for the Slony functions, which I can try and see if it helps.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
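
For reference, the tuning discussed above boils down to a few statements. This is only a sketch of the steps, run as superuser against a live database; the procost lookup against pg_proc is an extra check not shown in the thread:

```sql
-- Inspect the planner's current cost estimate for the function
SELECT proname, procost FROM pg_proc WHERE proname = 'pg_table_is_visible';

-- Raise the cost so the planner evaluates the function only on rows
-- that survive the cheaper filters in the catalog queries
ALTER FUNCTION pg_table_is_visible(oid) COST 10;

-- Revert to the default if tab-completion gets worse instead of better
ALTER FUNCTION pg_table_is_visible(oid) COST 1;
```

The change is cluster-wide and persistent, so it is easy to try and easy to undo.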