> [root@170226-db7 ~]# su -l postgres -c "ulimit -a"
> core file size          (blocks, -c) 0
> data seg size           (kbytes, -d) unlimited
> max nice                        (-e) 0
> file size               (blocks, -f) unlimited
> pending signals                 (-i) 139264
> max locked memory       (kbytes, -l) 32
> max memory size         (kbytes, -m) unlimited
> open files                      (-n) 1024
> pipe size            (512 bytes, -p) 8
> POSIX message queues     (bytes, -q) 819200
> max rt priority                 (-r) 0
> stack size              (kbytes, -s) 10240
> cpu time               (seconds, -t) unlimited
> max user processes              (-u) 139264
> virtual memory          (kbytes, -v) unlimited
> file locks                      (-x) unlimited

I just noticed something: the "open files" limit is 1024, which is the default for this system. A quick count of the data files Postgres currently holds open, though, returns almost 7000:

[root@170226-db7 ~]# lsof -u postgres | egrep '(/pg_data|/pg_index|/pg_log)' | wc -l
6749

We have 100+ postgres processes running, so that total is spread across many of them. Still, could an individual process bumping into the 1024 open file limit be doing anything to this query? Or would I see an explicit error message if that condition were hit?
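The aggregate lsof count above doesn't show whether any single backend is close to the limit. A per-process breakdown can be pulled from /proc (a rough sketch, assuming a Linux box where /proc/<pid>/fd is readable as root; the pgrep/sort pipeline here is illustrative, not taken from the session above):

[root@170226-db7 ~]# for pid in $(pgrep -u postgres); do echo "$pid $(ls /proc/$pid/fd 2>/dev/null | wc -l)"; done | sort -k2 -rn | head

This prints one "<pid> <open fd count>" line per postgres process, highest counts first, so it's easy to see how near any one backend gets to 1024.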
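Postgres also enforces its own per-backend ceiling, max_files_per_process (default 1000), which is meant to keep each backend within the OS limit. It can be checked the same way as ulimit above (a hypothetical check, not from the original session):

[root@170226-db7 ~]# su -l postgres -c "psql -c 'SHOW max_files_per_process;'"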
Regards,

Matt