Scott Marlowe <smarlowe@xxxxxxxxxxxxxxxxx> writes:
> Yes, it is a way. It's just a less necessary one than it once was, with
> hardware now able to provide the same performance increase with little
> or no work on the user's part. We've got to weigh the increased
> complexity it would take to implement it in Postgresql and maintain it
> versus the gain, and I say the gain is smaller every day.

Now I think you're contradicting the argument you made in the other
subthread. It's certainly *much* more complex to have to implement
partitioning yourself for each table than to have it as a native Postgres
feature. There I took you to be saying that partitioning in any form,
whether native or home-brew, is better because of its simplicity.

But if that's the argument, then you're wrong about high-end controllers
making this less urgent. High-end hardware controllers only make it easier
to accumulate the kind of data that requires partitioning in one form or
another to remain manageable.

In any case, partitioning offers algorithmic improvements in performance.
No matter how fast your controller is, it's not going to delete 100G of
data and match the speed of simply dropping a partition with a single DDL
statement.

Partitioning is something DBAs are doing more and more often as data sets
grow. And it's something Postgres DBAs are doing more and more often as
Postgres moves into problem domains that were previously the territory of
Oracle and DB2 DBAs. The only choice is whether they do it by kludging
together a failure-prone and suboptimal system of their own, or whether
it's built into the database in a reliable, convenient, and well-designed
form.

-- 
greg

---------------------------(end of broadcast)---------------------------
TIP 8: explain analyze is your friend
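
P.S. To make the algorithmic point above concrete, here is a minimal sketch
of the home-brew, inheritance-based scheme people build today. All table
names and date ranges are made up for illustration:

```sql
-- Hypothetical log table "partitioned" by quarter via table inheritance,
-- the home-brew approach available in Postgres today.
CREATE TABLE logs (logtime timestamp, msg text);
CREATE TABLE logs_2005_q1 (
    CHECK (logtime >= '2005-01-01' AND logtime < '2005-04-01')
) INHERITS (logs);

-- Removing a quarter the slow way: every matching row must be visited,
-- indexes maintained, and the dead tuples cleaned up by VACUUM later.
-- Time grows with the amount of data deleted.
DELETE FROM logs WHERE logtime < '2005-04-01';

-- Removing a quarter the fast way: a catalog operation that takes
-- roughly constant time no matter how big the partition is.
DROP TABLE logs_2005_q1;
```

The DELETE does O(n) work in the size of the doomed data; the DROP is
effectively O(1), which is exactly the gap no controller can close.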