I have loaded the data for a database conversion I am working on, and I have created the indexes I believe I will need. My current design uses a separate schema for each client that will access the database. I am using schemas because there is one set of global tables that I need to query together with the tables inside the different schemas. I currently have 2000 active schemas, but because of the overhead of creating a schema and its associated tables, I have pre-created schemas (with empty tables) for 8000 clients to cover the next several years of growth. Each schema has 130 tables.

I am currently just in the testing phase, running this on a dual-core Xeon machine with 8GB of RAM and a 500GB SSD holding the data. The current database size is 200GB.

Running "analyze verbose;" and watching top, the system starts out using no swap, with about 4GB of cached memory and about 1GB of used memory. As it runs, the amount of used RAM climbs, and eventually swap usage reaches 100%. After staying at that level for a couple of minutes, the ANALYZE crashes with "server closed the connection unexpectedly." Also, as it progresses, the scroll of tables being analyzed slows down.

Is it normal for used memory to keep growing while ANALYZE is running? I would have thought that, since it is a table-by-table operation, memory would be released when it moves on to the next table. Is there anything I can do to get ANALYZE to finish?
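
Would breaking the work up, so that each table is analyzed as its own statement (and commits on its own), be a reasonable workaround? Below is a rough sketch of what I had in mind -- "mydb" is just a placeholder for my actual database name, and the WHERE clause could be tightened to cover only the 2000 active schemas:

    # Generate one ANALYZE statement per user table, then feed them
    # back into psql so each one runs (and commits) as its own statement.
    psql -At -d mydb -c "
      SELECT format('ANALYZE %I.%I;', schemaname, tablename)
      FROM pg_tables
      WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
    " | psql -a -d mydb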