Re: how to handle a big table for data log

Large tables, by themselves, are not necessarily a problem; what matters is what you are doing with them. Depending on the operations involved, partitioning the table might help performance or might make it worse.
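
As a rough sketch of what date-based partitioning looks like on the 8.x/9.0 series (the table and column names below are made up for illustration), it is usually done with inheritance plus a CHECK constraint per child, so the planner can skip partitions that cannot match the WHERE clause:

    -- hypothetical parent table for the log
    CREATE TABLE log (
        log_time  timestamptz NOT NULL,
        log_type  integer     NOT NULL,
        payload   text
    );

    -- one child per day; the CHECK constraint is what allows pruning
    CREATE TABLE log_20100719 (
        CHECK (log_time >= '2010-07-19' AND log_time < '2010-07-20')
    ) INHERITS (log);

    -- inserts are routed by the application or by a trigger on the parent;
    -- constraint exclusion must be enabled for the planner to prune
    SET constraint_exclusion = on;
    SELECT count(*) FROM log
    WHERE log_time >= '2010-07-19' AND log_time < '2010-07-20';

Note that this only pays off for queries that filter on the partitioning column; a query that has to touch every day hits every child table and can easily end up slower than scanning one big table.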
 
What kind of queries are you running? How many days of history are you keeping? Could you post the EXPLAIN ANALYZE output of a query that is being problematic?
Given the amount of data you hint at, your server configuration and any custom statistics targets on the big tables in question would also be useful.
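
If it helps, the information being asked for can be gathered along these lines (the table and column names are placeholders, not your schema):

    -- plan plus actual timings for one of the slow queries
    EXPLAIN ANALYZE
    SELECT count(*) FROM log WHERE log_time >= '2010-07-19';

    -- server settings that matter most for large scans
    SHOW shared_buffers;
    SHOW work_mem;
    SHOW effective_cache_size;

    -- per-column statistics targets (0 means default_statistics_target is used)
    SELECT attname, attstattarget
    FROM pg_attribute
    WHERE attrelid = 'log'::regclass AND attnum > 0;

    -- raising the target on a skewed column, then re-analyzing, can help the planner
    ALTER TABLE log ALTER COLUMN log_type SET STATISTICS 500;
    ANALYZE log;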

>>> kuopo <spkuo@xxxxxxxxxxxxxx> 7/19/2010 1:27 AM >>>
Hi,

I have to handle a log table that accumulates a large number of log
records. This table is only ever inserted into and queried. To limit
the table size, I tried to split it by date. However, the number of
logs is still large (46 million records per day). To further limit its
size, I also tried splitting the log table by log type, but this did
not improve performance; it is much slower than the single big table.
I guess this is because of the extra auto-vacuum/analyze cost paid for
all the split tables.

Can anyone comment on this situation? Thanks in advance.


kuopo.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
