Two things:

1. This depends on reading the stats file, which is done once per transaction. So if you run the query twice in a transaction, the second run will take less time; you can tell how much time is spent reading that file by subtracting the two times.

2. EXPLAIN (VERBOSE) will tell you which functions are being called by the query. One of those loops across all live backends. Is that significant? You could measure by creating an identical view that omits pg_stat_db_numbackends. Does it take the same time as the original view? If not, then you know that looping around all live backends is what's slow.

If the problem is (1), then you could have fewer tables, so that the file is smaller and thus faster to read, but I don't think you'll like that answer; and if the problem is (2), then you could reduce max_connections, but I don't think you'll like that either. I suspect there's not much you can do, other than patching the monitoring system to not read that view as often.

-- 
Álvaro Herrera               Valdivia, Chile  —  https://www.EnterpriseDB.com/
"Use it up, wear it out, make it do, or do without"
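
To make both measurements concrete, here is a psql sketch. It assumes a live PostgreSQL server; the view name `my_monitoring_view` and its column list are hypothetical stand-ins for whatever view the monitoring system queries, and `\timing` is a psql client metacommand:

```sql
-- Turn on psql's client-side timing so each statement reports its elapsed time.
\timing on

-- (1) Run the same query twice inside one transaction.
-- The first run pays for reading the stats file; the second reuses the
-- snapshot, so the difference between the two times approximates the
-- stats-file read cost.
BEGIN;
SELECT count(*) FROM my_monitoring_view;   -- includes the stats-file read
SELECT count(*) FROM my_monitoring_view;   -- stats snapshot already loaded
COMMIT;

-- (2) Create an otherwise-identical view that omits the backend count,
-- then compare its timing against the original view. If the stripped-down
-- view is noticeably faster, the per-backend loop is the culprit.
CREATE VIEW my_monitoring_view_nb AS
  SELECT datname, xact_commit, xact_rollback   -- hypothetical column subset
  FROM pg_stat_database;                       -- numbackends deliberately omitted
SELECT count(*) FROM my_monitoring_view_nb;
```

EXPLAIN (VERBOSE) on the original query will show the underlying stats functions by name, so you can see which one the omitted column was calling.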