On Thu, 2007-09-06 at 20:57 -0500, Michael Glaesemann wrote:
> On Sep 6, 2007, at 20:46 , Ow Mun Heng wrote:
> > I would believe performance would be better it being denormalised.
> > (in this case)
>
> I assume you've arrived at the conclusion because you have
> (a) shown that the performance with a normalized schema does not
> meet your needs;
> (b) benchmarked the normalized schema under production conditions;
> (c) benchmarked the denormalized schema under production conditions;
> and
> (d) shown that performance is improved in the denormalized case to
> arrive at that conclusion. I'm interested to see the results of
> your comparisons.
>
> Regardless, it sounds like you've already made up your mind. Why ask
> for comments?

You've assumed wrong. I haven't arrived at any conclusion; I'm merely
exploring my options to find the best path to tread. I'm asking the
list because I'm new to PG, and after reading all those articles on
high scalability and the like, the majority of them use some kind of
denormalised tables.

Right now there are 8 million rows of data in this one table, and it's
growing at a rapid rate of ~2 million rows/week. I believe I can
reduce that to roughly 200K rows by denormalising it, and shrink the
table size accordingly.

I would appreciate your guidance on this before I go knock my head on
the wall. :-)
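For concreteness, here's the kind of reshaping I have in mind: a
minimal sketch only, since I haven't posted the real schema. All
table, column, and metric names below are made up for illustration.
The row-count reduction comes from pivoting many narrow rows per key
into one wide row per key:

    -- Normalized form (hypothetical): one row per (item, metric),
    -- which is where the ~8M rows come from
    CREATE TABLE measurements (
        serial_number  text,
        metric_name    text,
        metric_value   numeric,
        PRIMARY KEY (serial_number, metric_name)
    );

    -- Denormalized form (hypothetical): one wide row per item,
    -- which is roughly the ~200K-row version
    CREATE TABLE measurements_wide (
        serial_number  text PRIMARY KEY,
        voltage        numeric,
        temperature    numeric,
        current_draw   numeric
        -- ... one column per metric
    );

    -- Pivoting the narrow form into the wide form; CASE without
    -- ELSE yields NULL, so max() picks the one matching value
    INSERT INTO measurements_wide
        (serial_number, voltage, temperature, current_draw)
    SELECT serial_number,
           max(CASE WHEN metric_name = 'voltage'
                    THEN metric_value END),
           max(CASE WHEN metric_name = 'temperature'
                    THEN metric_value END),
           max(CASE WHEN metric_name = 'current_draw'
                    THEN metric_value END)
    FROM measurements
    GROUP BY serial_number;

The trade-off, as I understand it, is fewer, wider rows at the cost of
a fixed column set, which is what I was hoping to get comments on.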