Quick note: please stick to plain text formatted email for the mailing list; it's the preferred format.

On Tue, Feb 9, 2010 at 9:09 PM, Jayadevan M <Jayadevan.Maymala@xxxxxxxxxx> wrote:
>
> Hello all,
> Apologies for the long mail.
> I work for a company that provides solutions mostly on a Java/Oracle platform. Recently we moved one of our products to PostgreSQL. The main reason was PostgreSQL's GIS capabilities and the inability of government departments (especially road/traffic) to spend a lot of money on such projects. This product is used to record details about accidents and related analysis (type of road, when/why, etc.) with maps. Fortunately, even in India, an accident reporting application does not have to handle many tps :). So I can't say PostgreSQL's performance was really tested in this case.
> Later, I tested one screen of one of our products - load testing with JMeter. We tried it with Oracle, DB2, PostgreSQL and Ingres, and PostgreSQL easily out-performed the rest. We tried a transaction mix with 20+ SELECTs, updates, deletes and a few inserts.

Please note that benchmarking Oracle (and a few other commercial dbs) and then publishing those results without Oracle's permission is considered to be in breach of their contract. Yeah, another wonderful aspect of using Oracle. That said, and as someone who is not an Oracle licensee in any way, this mimics my experience that PostgreSQL is a match for Oracle, DB2, and most other databases in the simple, single-db-on-commodity-hardware scenario.

> After a really good experience with the database, I subscribed to all the PostgreSQL groups (my previous experience is all-Oracle), and reading these mails I realized that many organizations are using plain, 'not customized' PostgreSQL for databases that handle critical applications. Since there is no company trying to 'sell' PostgreSQL, many of us are not aware of such cases.

Actually there are several companies that sell pgsql service, and some that sell customized versions: Red Hat, Command Prompt, EnterpriseDB, and so on.

> Could some of you please share some info on such scenarios - where you are supporting/designing/developing databases that run into at least a few hundred GBs of data (I know, that is small by today's standards)?

There are other instances of folks on the list sharing this kind of info that you can find by searching the archives. I've used pgsql for about 10 years, for anywhere from a few megabytes to hundreds of gigabytes of data, and for all kinds of applications.

Where I currently work we have a main data store for a web app that is about 180GB and growing, running on three servers with Slony replication. We handle somewhere in the range of 10k to 20k queries per minute (a mix of roughly 90% reads to 10% writes). Peak load can be 30k or more requests per minute. The two big servers that handle this load are dual quad-core Opteron 2.1GHz machines with 32GB RAM and 16 15krpm SAS drives, configured as 2 in RAID-1 for the OS and pg_xlog, 2 hot spares, and 12 in RAID-10 for the main data. The HW RAID controller is an Areca 1680, which is mostly stable except for the occasional (once a year or so) hang problem which has been described before, and which Areca has assured me they are working on. Our total downtime due to database outages in the last year or so has been 10 to 20 minutes, and that was due to a RAID card driver bug that hits us about once every 300 to 400 days. The majority of that downtime was spent waiting for our hosting provider to hit the big red switch and restart the main server.
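In case it helps anyone reproducing that layout: putting pg_xlog on its own RAID-1 pair is just the standard symlink approach. A rough sketch, assuming an 8.x install with the data directory at /var/lib/pgsql/data and the RAID-1 volume mounted at /raid1 (both paths are only examples, adjust for your setup):

    # stop the server before touching the WAL directory
    pg_ctl -D /var/lib/pgsql/data stop
    # move pg_xlog onto the RAID-1 volume and symlink it back into the data dir
    mv /var/lib/pgsql/data/pg_xlog /raid1/pg_xlog
    ln -s /raid1/pg_xlog /var/lib/pgsql/data/pg_xlog
    pg_ctl -D /var/lib/pgsql/data start

The point of the split is that WAL writes are sequential and fsync-heavy, so keeping them off the RAID-10 array means they don't compete with the random I/O against the main data files.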
Our other pgsql servers provide the search facility, with a db size of around 300GB, and statistics at around 1TB.

> I am sure PostgreSQL has matured a lot more from the days when these case studies were posted. I went through the case studies at EnterpriseDB and similar vendors too. But those are customized PostgreSQL servers.

Not necessarily. They sell support more than anything, and the majority of the customization is not for stability but for additional features, such as MPP queries, replication, etc.

The real issue you run into is that many people don't want to tip their hand that they are using pgsql, because it is a competitive advantage. It's inexpensive, capable, and relatively easy to use. If your competitor is convinced that Oracle or MS SQL Server with $240k in licensing each year is the best choice, and you're whipping them with pgsql, the last thing you want is for them to figure that out and switch.

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance