Databases are designed to handle very large tables, but how effective
they are at that scale is always in question. A full table scan on a
partitioned table is always preferable to a full table scan on one
enormous unpartitioned table. The nature of the query will of course
dictate performance, but you run into definite limitations with very
large tables.

On Fri, Oct 30, 2009 at 1:01 PM, Greg Stark <gsstark@xxxxxxx> wrote:
> On Fri, Oct 30, 2009 at 12:53 PM, Anj Adu <fotographs@xxxxxxxxx> wrote:
>> Any relational database worth its salt has partitioning for a reason.
>>
>> 1. Maintenance. You will need to delete data at some point
>> (cleanup). Partitions are the only way to do it effectively.
>
> This is true, and it's unavoidably a manual process. The database will
> not know which segments of the data you intend to load and unload en
> masse.
>
>> 2. Performance. Partitioning offers a way to query smaller slices of
>> data automatically (i.e. the query optimizer will choose the
>> partition for you). Very large tables are a no-no in any relational
>> database; sheer size has limitations.
>
> This I dispute. Databases are designed to be scalable, and very large
> tables should perform just as well as smaller tables.
>
> Where partitions win for performance is when you know something about
> how your data is accessed and you can optimize the access by
> partitioning along the same keys: for example, if you're doing a
> sequential scan of just one partition, or doing a merge join of two
> equivalently partitioned tables whose partitions can be sorted in
> memory.
>
> However, in these cases it is possible the database will become more
> intelligent and be able to achieve the same performance gains
> automatically. Bitmap index scans, for example, should perform
> comparably to a sequential scan of individual partitions.
>
> --
> greg
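
To make the two quoted points concrete, here is a minimal sketch of the
inheritance-based partitioning scheme typically used on 8.x-era
PostgreSQL. The table and partition names (measurement, logdate, etc.)
are invented for illustration, in the style of the docs example:

    -- Parent table; children hold the actual rows.
    CREATE TABLE measurement (
        logdate   date NOT NULL,
        peaktemp  int
    );

    -- One child per month. The CHECK constraint is what lets the
    -- planner skip non-matching partitions when constraint
    -- exclusion is enabled.
    CREATE TABLE measurement_2009_10 (
        CHECK (logdate >= DATE '2009-10-01' AND logdate < DATE '2009-11-01')
    ) INHERITS (measurement);

    CREATE TABLE measurement_2009_11 (
        CHECK (logdate >= DATE '2009-11-01' AND logdate < DATE '2009-12-01')
    ) INHERITS (measurement);

    -- Point 1 (maintenance): unloading a month is a near-instant
    -- DROP, not a massive DELETE plus VACUUM on one giant table.
    DROP TABLE measurement_2009_10;

    -- Point 2 (performance): with constraint exclusion on, a query
    -- filtering on the partition key only scans matching children.
    SET constraint_exclusion = on;
    EXPLAIN SELECT * FROM measurement WHERE logdate = DATE '2009-11-05';

Note that this setup still needs a trigger or rule on the parent to
route INSERTs into the right child; that routing is part of the manual
work Greg refers to.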