> Take a really different approach. Log in CSV format to text files
> instead, and only import the date ranges we need "on demand" if a
> report is requested on the data.

Seems like more work than a separate database to me. :)

> 2. We could find a way to exclude the table from vacuuming, and let it
> grow even larger. Putting the table in its own database would
> accomplish that, but it would be nice to avoid the overhead of a
> second database connection.

Specific exclusions are generally what I've done for similar problems in
the past. If you can live without the per-database summary at the end of
the vacuum, you can do something like this:

SET search_path = 'pg_catalog';

SELECT set_config('search_path',
  current_setting('search_path')||','||quote_ident(nspname),'false')
FROM pg_namespace
WHERE nspname <> 'pg_catalog'
ORDER BY 1;

\t
\o pop
SELECT 'vacuum verbose analyze '||quote_ident(relname)||';'
FROM pg_class
WHERE relkind = 'r'
  AND relname <> 'ginormous_table'
ORDER BY 1;
\o
\i pop

Or put any tables you don't want vacuumed by this script into their own
schema:

...
SELECT 'vacuum verbose analyze '||quote_ident(relname)||';'
FROM pg_class c, pg_namespace n
WHERE relkind = 'r'
  AND relnamespace = n.oid
  AND nspname <> 'novac'
ORDER BY 1;
...

Just flip the equality operator, and you've got a way to vacuum just
those excluded tables, for example once a week during a slow time.

--
Greg Sabino Mullane greg@xxxxxxxxxxxx
End Point Corporation
PGP Key: 0x14964AC8 200702211402
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
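
P.S. In case it helps, moving a big table into such a schema would look
roughly like this (a sketch: the schema name "novac" and the table name
"ginormous_table" are just the examples from above, and ALTER TABLE ...
SET SCHEMA assumes PostgreSQL 8.1 or later):

-- Hypothetical names: create a schema for tables the nightly script
-- should skip, then move the big table into it. Anything referencing
-- the table must now use novac.ginormous_table, or have novac in its
-- search_path.
CREATE SCHEMA novac;
ALTER TABLE ginormous_table SET SCHEMA novac;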
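
And spelling out the flipped version for the weekly run, something along
these lines should generate vacuum commands for only the excluded tables
(same sketch caveats; "novac" is still just the example schema name):

\t
\o pop
SELECT 'vacuum verbose analyze '
       ||quote_ident(nspname)||'.'||quote_ident(relname)||';'
FROM pg_class c, pg_namespace n
WHERE relkind = 'r'
  AND relnamespace = n.oid
  AND nspname = 'novac'  -- equality instead of <>: only the excluded tables
ORDER BY 1;
\o
\i pop

Schema-qualifying the generated names like that also means this variant
doesn't need the search_path setup from the first script.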