
Re: Justifying a PG over MySQL approach to a project

Actually, the DB I'm working on is rather small, but it has a somewhat complex system of constraints and triggers that maintain the data.  Queries will outnumber writes by at least 20x.  The DB also has to be mirrored at a sister site a couple thousand miles away, so I'm looking for a robust DB replication system for that.

These are the key points they will be worried about...
- DB uptime (most important), including recovery time after disasters (e.g. power outages)
- Data integrity.  I'm addressing this with constraints, plus triggers that populate columns with derived data (see the sketch after this list).
- Data quality.  NO CORRUPT TABLES / INDEXES.
- Retrofitting existing apps to work with PG.  For Perl/DBI it's a subtle change to the DBD driver string (e.g. "dbi:mysql:..." becomes "dbi:Pg:...").  Some Tcl-MySQL code is tougher.  I'm proposing that everything go through ODBC as a standard, now and for the future.
- Cost of maintenance.  Do I have to babysit this DB four hours every day, or does it run by itself?  Is this like Oracle, where we'd have to hire professional 24x7 DBAs, or is it hands-off?  That kind of question.
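
To make the derived-data point concrete, here's a minimal sketch of the kind of trigger I mean.  The parts table and its columns are invented for illustration, and it assumes the plpgsql language is installed in the database:

    -- Hypothetical table; total_price is derived, never written by clients.
    CREATE TABLE parts (
        part_no     text    PRIMARY KEY,
        unit_price  numeric NOT NULL CHECK (unit_price >= 0),
        qty         integer NOT NULL CHECK (qty >= 0),
        total_price numeric
    );

    -- Trigger function recomputes the derived column on every write.
    CREATE OR REPLACE FUNCTION set_total_price() RETURNS trigger AS $$
    BEGIN
        NEW.total_price := NEW.unit_price * NEW.qty;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER parts_total_price
        BEFORE INSERT OR UPDATE ON parts
        FOR EACH ROW EXECUTE PROCEDURE set_total_price();

That way the constraints reject bad input outright, and the trigger guarantees the derived column can never drift out of sync with its inputs.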

I have a DB up and working.  Runs great, no problems, but very lightly loaded and/or used at this time.  Having worked with PG in the past, I'm not worried about this piece.

I am more concerned with getting a robust DB replication system up and running.  Bucardo looks pretty good, but I've just started looking at the options.  Any suggestions?
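
For completeness, the fallback I'm considering if trigger-based replication proves too fiddly is the built-in WAL shipping to a warm standby.  Something like this in postgresql.conf on the master (the hostname and path are placeholders, and it assumes passwordless rsync/ssh to the standby, which then replays the segments, e.g. with pg_standby):

    # Ship each completed WAL segment to the remote site.
    archive_mode    = on
    archive_command = 'rsync -a %p standby.example.com:/var/lib/pgsql/wal_archive/%f'

The catch is that a warm standby can't serve queries, so for a readable mirror at the sister site a trigger-based system like Bucardo or Slony-I seems to be the usual answer.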

Thanks!


-----Original Message-----
From: Erik Jones [mailto:ejones@xxxxxxxxxxxxxx] 
Sent: Thursday, December 17, 2009 4:42 AM
To: Craig Ringer
Cc: Gauthier, Dave; pgsql-general@xxxxxxxxxxxxxx
Subject: Re:  Justifying a PG over MySQL approach to a project


On Dec 16, 2009, at 10:30 PM, Craig Ringer wrote:

> - If you don't care about your data, MySQL used with MyISAM is *crazy* fast for lots of small simple queries.

This one causes me no end of grief, as too often it's simply touted as "MyISAM is fast(er)" while leaving off the bit about "for lots of small, simple queries."  Developers then pick MySQL with MyISAM storage and scratch their heads, saying "But!  I heard it was faster...," when I tell them the reason their app is crawling is that even moderately complex reads or writes are starving out the rest of the app, thanks to the table-level locks MyISAM requires.  As you mentioned, for the type of active workloads that MyISAM is good for, you might as well just use memcached over something more reliable and/or concurrent, or even a simple key-value or document store if you really don't need transactions.

Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k







