On Sat, Mar 20, 2010 at 4:47 AM, Deepa Thulasidasan <deepatulsidasan@xxxxxxxxxxx> wrote:
> transaction table to grow by 10 times in the near future. In this regard, we would like
> to know if this same structure of the transaction table and the indexing would be
> sufficient for quick retrieval of data, or do we have to partition this table? If so,
> what kind of partition would be suitable?

My experience has been that when tables approach the 100 million record mark, things tend to slow down. Running REINDEX and VACUUM on those tables also takes much longer, since you tend not to have enough memory to do those operations efficiently. I like to partition tables so that each one ends up with under 10 million records.

I just (a few hours ago...) finished partitioning and migrating the data from a single table of about 120 million records into 100 partitions of about 1.2 million rows each. For this particular case, I partitioned on a mod 100 of one of the ID keys on which I do the bulk of my searches; a sketch of that approach is below.

As the two Scott M's recommended, figure out your usage patterns and partition across those vectors to optimize those searches. I would not worry about optimizing the insert pattern.

You really *never* delete this data? I would suspect, then, that having a partitioning scheme where the number of partitions can grow over time is going to be important to you.
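For anyone who hasn't set this up before, here is a minimal sketch of what a mod-100 scheme might look like using the inheritance-based partitioning available in this era of PostgreSQL. The table and column names (transactions, account_id) are illustrative assumptions, not the poster's actual schema, and only two of the 100 child tables are shown:

    -- Parent table; children inherit its columns.
    CREATE TABLE transactions (
        id         bigint       NOT NULL,
        account_id bigint       NOT NULL,
        amount     numeric(12,2),
        created_at timestamptz  NOT NULL DEFAULT now()
    );

    -- One child per (account_id % 100) bucket, each with a CHECK
    -- constraint so the planner can exclude irrelevant partitions.
    CREATE TABLE transactions_p00 (
        CHECK (account_id % 100 = 0)
    ) INHERITS (transactions);

    CREATE TABLE transactions_p01 (
        CHECK (account_id % 100 = 1)
    ) INHERITS (transactions);
    -- ... and so on up to transactions_p99 ...

    -- Index each child on the column used for the bulk of the searches.
    CREATE INDEX transactions_p00_account_idx ON transactions_p00 (account_id);
    CREATE INDEX transactions_p01_account_idx ON transactions_p01 (account_id);

    -- Route inserts on the parent into the right child.
    CREATE OR REPLACE FUNCTION transactions_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        IF (NEW.account_id % 100) = 0 THEN
            INSERT INTO transactions_p00 VALUES (NEW.*);
        ELSIF (NEW.account_id % 100) = 1 THEN
            INSERT INTO transactions_p01 VALUES (NEW.*);
        -- ... one branch per remaining bucket ...
        ELSE
            RAISE EXCEPTION 'no partition for account_id %', NEW.account_id;
        END IF;
        RETURN NULL;  -- the row went into a child, not the parent
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER insert_transactions_trigger
        BEFORE INSERT ON transactions
        FOR EACH ROW EXECUTE PROCEDURE transactions_insert_trigger();

Note that with constraint_exclusion enabled, queries only skip the other 99 children if they repeat the partitioning expression (e.g. WHERE account_id % 100 = 42 AND account_id = 4242), so searches should be written with that in mind.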