Table performance with millions of rows

Question on large tables…


When should one consider table partitioning vs. just stuffing 10 million rows into one table?

I currently have CDRs being inserted into a table at a rate of over 100,000 per day, which works out to roughly 36 million rows a year.


At some point I’ll want to prune these records out, so being able to drop or truncate a whole child table in one shot makes partitioning attractive; something like the sketch below is what I have in mind.
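
A rough sketch, assuming declarative partitioning (PostgreSQL 10+); the table and column names are made up:

  -- Parent table, range-partitioned on the call timestamp.
  CREATE TABLE cdr (
      call_start  timestamptz NOT NULL,
      caller      text,
      callee      text,
      duration    interval
  ) PARTITION BY RANGE (call_start);

  -- One child per month, created ahead of time.
  CREATE TABLE cdr_2024_01 PARTITION OF cdr
      FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
  CREATE TABLE cdr_2024_02 PARTITION OF cdr
      FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

  -- Pruning a whole month is then just a metadata operation:
  DROP TABLE cdr_2024_01;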


From a pure data warehousing standpoint, what are the do's and don'ts of keeping such large tables?

Other notes…
- This table is never updated, only appended to (CDRs)
- Right now a daily SQL job deletes records older than X days (costly, purging ~100k records at a time; see the comparison below)
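
For comparison, what the purge looks like today versus the partitioned equivalent (the 30-day window below is illustrative; it's "X days" in practice):

  -- Current approach: deletes ~100k rows each run, leaving
  -- dead tuples for autovacuum to clean up afterwards.
  DELETE FROM cdr
   WHERE call_start < now() - interval '30 days';

  -- With partitions, the same purge detaches and drops one child:
  ALTER TABLE cdr DETACH PARTITION cdr_2024_01;
  DROP TABLE cdr_2024_01;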



--
inoc.net!rblayzor
XMPP: rblayzor.AT.inoc.net
PGP:  https://inoc.net/~rblayzor/