david@xxxxxxx wrote:
> various people (not database experts) are pushing to install Oracle cluster so that they can move all of these to one table with a customerID column.
They're blowing smoke if they think Oracle can do this. One of my applications had this exact problem: table-per-customer versus one-big-table-for-everyone [1]. Oracle fell over dead, even with the best indexing possible, tuned by the experts, and using partitions keyed to the customerID. We ended up breaking it back into table-per-customer, because Oracle couldn't survive a big update on a single customer's entire dataset, and every other operation was slowed by the additional customerID index, especially complex joins.

With a table-for-everyone, you're forced into tricky partitioning or clustering and clever indexes, and even with all that, big updates are problematic. Worse, once you do this you become heavily tied to one RDBMS and your applications are no longer portable, because clustering, indexing, partitioning and other DB tuning tricks are very specific to each RDBMS.

When we moved to Postgres, we never revisited this issue, because both Oracle and Postgres handle thousands of tables well. As I wrote in a previous message on a different topic, the design of your application is often more important than raw performance. In our case, table-per-customer makes the applications simpler, and security is MUCH easier [2].

Oracle is simply not better than Postgres in this regard. As far as I know, there is only one specific situation (discussed frequently here) where Oracle is faster: the count(), min() and max() functions [3], and I know significant progress has been made on those since I started using Postgres. I have not found any other query where Oracle is significantly better, and I've found several where Postgres is the clear winner.

It's telling that Oracle's license contract prohibits you from publishing comparisons and benchmarks. You have to wonder why.

Craig
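
[1] A rough sketch of the two designs under discussion, in modern PostgreSQL syntax; all table and column names are invented for illustration:

    -- Table-per-customer: each customer gets its own identically
    -- shaped table.
    CREATE TABLE orders_acme (
        order_id int PRIMARY KEY,
        item     text,
        qty      int
    );
    CREATE TABLE orders_globex (LIKE orders_acme INCLUDING ALL);

    -- One-big-table: every row carries the customer key, and every
    -- index, constraint and query has to account for it.
    CREATE TABLE orders (
        customer_id int NOT NULL,
        order_id    int NOT NULL,
        item        text,
        qty         int,
        PRIMARY KEY (customer_id, order_id)
    );

A "big update on a customer's entire dataset" is an ordinary full-table operation in the first design; in the second it has to pick one customer's rows out of everyone else's, which is where the partitioning and clustering games begin.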
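
[2] A minimal sketch of why per-customer tables make security easier (role and setting names invented): each customer's application role is granted rights only on its own table, so the server enforces the isolation with no WHERE clause to forget:

    CREATE ROLE acme_app LOGIN;
    GRANT SELECT, INSERT, UPDATE, DELETE ON orders_acme TO acme_app;
    -- acme_app has no access to orders_globex at all.

    -- With one big table you need per-row filtering instead, e.g.
    -- row-level security (PostgreSQL 9.5+) keyed to a session setting:
    ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
    CREATE POLICY per_customer ON orders
        USING (customer_id = current_setting('app.customer_id')::int);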
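
[3] The count()/min()/max() gap comes from PostgreSQL's MVCC design, which forces visibility checks on the rows an aggregate touches. Releases before 8.1 could not use an index for min() or max() at all; the standard workaround was to rewrite the aggregate as an ORDER BY/LIMIT query that can walk the index (using the invented schema from [1]):

    -- Equivalent to SELECT max(order_id) FROM orders, but written so
    -- the planner can read one entry off the index instead of
    -- scanning the table:
    SELECT order_id FROM orders ORDER BY order_id DESC LIMIT 1;

PostgreSQL 8.1 taught the planner to do this rewrite itself, and index-only scans (9.2) later reduced the cost of count(*), which is presumably the kind of progress referred to above.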