Hi everyone,

We have two PostgreSQL 9.0 databases (32-bit) with more than 10,000 schemas. When we try to run ANALYZE on those databases we get errors like this (after a few hours):

2012-09-14 01:46:24 PDT ERROR:  out of memory
2012-09-14 01:46:24 PDT DETAIL:  Failed on request of size 421.
2012-09-14 01:46:24 PDT STATEMENT:  analyze;

(Note that we do have plenty of memory available for PostgreSQL: shared_buffers = 2048MB, work_mem = 128MB, maintenance_work_mem = 384MB, effective_cache_size = 3072MB, etc.)

We have other, similar databases with fewer than 10,000 schemas, and ANALYZE works fine on them (they run on similar machines with similar configs).

For now, we had to write shell scripts that run ANALYZE per schema, table by table (a simplified sketch is in the P.S. below). It works that way, so at least we have an alternative solution. But what exactly causes the out of memory? Is PostgreSQL trying to run everything in a single transaction? Maybe this could be improved in future releases.

Please let me know what you think.

Thanks in advance,
Hugo
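
P.S. In case it helps, this is roughly what the workaround looks like. It is only a simplified sketch: DBNAME, the schema filtering, and the psql flags here are illustrative placeholders, and our actual scripts differ in the details.

#!/bin/sh
# Workaround sketch: analyze one table at a time instead of a
# database-wide ANALYZE, so each command is its own transaction.
# DBNAME is a placeholder; host/user come from the usual PG* env vars.
DBNAME=mydb

psql -At -d "$DBNAME" -c "
  SELECT quote_ident(n.nspname) || '.' || quote_ident(c.relname)
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema')
" | while IFS= read -r tbl; do
  echo "analyzing $tbl"
  psql -d "$DBNAME" -c "ANALYZE $tbl;"
done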