Hi,

I need to handle a log table that accumulates a large volume of log records. The table only sees insert and query operations. To limit the table size, I tried splitting it by date, but the volume is still large (46 million records per day). To reduce it further, I then split the log table by log type as well. However, this did not improve performance; it is much slower than the single big table. My guess is that the extra cost comes from running auto-vacuum/analyze over all of the split tables.

Can anyone comment on this situation? Thanks in advance.

kuopo.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
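
For context, here is a minimal sketch of the date-split approach using PostgreSQL inheritance-based partitioning with CHECK constraints (the era-appropriate mechanism; all table and column names below are hypothetical, not taken from the poster's schema):

```sql
-- Hypothetical parent table; children are created per day.
CREATE TABLE app_log (
    logged_at timestamptz NOT NULL,
    log_type  text        NOT NULL,
    message   text
);

-- One child table per day. The CHECK constraint lets the planner
-- skip irrelevant days when constraint_exclusion is enabled.
CREATE TABLE app_log_2010_07_27 (
    CHECK (logged_at >= '2010-07-27' AND logged_at < '2010-07-28')
) INHERITS (app_log);

-- Retiring an old day is a cheap DROP TABLE rather than a bulk
-- DELETE followed by vacuum work:
-- DROP TABLE app_log_2010_07_26;
```

One design note: each child table is vacuumed and analyzed independently, so multiplying the number of partitions (date x type) multiplies the autovacuum bookkeeping, which is consistent with the slowdown described above.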