I just ran into an article about Oracle setting a world record in some kind of test: http://www.oracle.com/corporate/press/2007_feb/TPC-H_300GB_Benchmark_wHP.html?rssid=rss_ocom_pr

That made me think: postgresql aims at the same (or very similar) clients and use cases as Oracle, DB2 and MSSQL. So, from an advocacy standpoint, why doesn't postgresql hold a world record of some sort (other than price/performance)? Is it because the benchmarks are too expensive to run (time, expertise, hardware)? Are the other RDBMSes simply faster? Something else?

I'd like to know, because it would be a hell of an argument to use when advocating the use of pgsql on a project: "Well, we *could* go with MSSQL, but it's going to tie us up when using multiple CPUs (licences), when deploying a failover solution (licences), when we want to work with spatial information, and so on. pgsql, on the other hand, doesn't have that kind of licensing volatility; it gives you everything it's got and achieves world record performance doing so."

That's the kind of leverage I'd like to have when talking about using pgsql with my colleagues. Anyone care to comment?

Cheers,
Tomislav