Brad Nicholson wrote:
> On 8/10/2010 2:38 PM, Karl Denninger wrote:
>> Scott Marlowe wrote:
>>> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger <karl@xxxxxxxxxxxxx> wrote:
>>>> ANY disk that says "write is complete" when it really is not is
>>>> entirely unsuitable for ANY real database use. It is simply a matter
>>>> of time
>>> What about read-only slaves where there's a master with 100+ spinning
>>> hard drives "getting it right" and you need a half dozen or so read
>>> slaves? I can imagine that being ok, as long as you don't restart a
>>> server after a crash without checking on it.
>> A read-only slave isn't read-only, is it?
> A valid case is a Slony replica if used for query offloading (not for
> DR). It's considered a read-only subscriber from the perspective of
> Slony, as only Slony can modify the data (although you are technically
> correct, it is not read-only - "controlled write" may be more accurate).

CAREFUL with that model and beliefs.

Specifically, the following will hose you without warning:

1. SLONY gets a change on the master.
2. SLONY commits it to the (read-only) slave.
3. Confirmation comes back to the master that the change was propagated.
4. The slave CRASHES without actually committing the changed data to stable storage.

When the slave restarts it will not know that the transaction was lost. Neither will the master, since it was told that the change was committed. SLONY will happily go on its way and replicate forward, with no indication of a problem - except that on the slave, one or more transactions are **missing**.

Some time later you issue an update that goes to the slave, but the previously lost change causes the slave's commit to violate referential integrity. SLONY will fail to propagate that change and everything behind it - replication effectively locks up at that point in time.

You can recover from this by dropping the slave from replication and re-subscribing it, but that forces a full-table copy of everything in the replication set. The bad news is that queries against the slave in question may have been returning erroneous data for some unknown period of time prior to the lockup in replication (which hopefully you detect reasonably quickly - you ARE watching SLONY queue depth with some automated process, right?)

I can both cause this in the lab and have had it happen in the field. It's a nasty little problem that bit me on a series of disks that claimed to have write caching off but in fact did not. I was very happy that the data on the master was good at that point, because if I had needed to fail over to the slave (thinking it was a "good" copy) I would have been in SERIOUS trouble.

-- Karl
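To make the failure mode concrete, here is a rough sketch of what the referential-integrity blowup looks like in plain PostgreSQL. The tables are hypothetical (made up for illustration, not from any real schema); the point is only that once a "committed" parent row has evaporated from the slave, a later replicated change that references it can no longer be applied:

    -- Hypothetical tables, for illustration only.
    CREATE TABLE customers (
        id   integer PRIMARY KEY,
        name text NOT NULL
    );

    CREATE TABLE orders (
        id          integer PRIMARY KEY,
        customer_id integer NOT NULL REFERENCES customers (id)
    );

    -- Steps 1-3: the master propagates
    --   INSERT INTO customers VALUES (42, 'Acme');
    -- and the slave acknowledges it, but the drive lied about the write,
    -- so after the crash (step 4) the slave restarts without row 42 and
    -- without any record that something went wrong.

    -- Later, a replicated change references the vanished row.  Applying
    -- it on the slave fails, and replication is stuck from here forward:
    INSERT INTO orders (id, customer_id) VALUES (1001, 42);
    -- ERROR:  insert or update on table "orders" violates foreign key
    --         constraint "orders_customer_id_fkey"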
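And on the "you ARE watching SLONY queue depth" point: one minimal way to automate that, assuming a Slony-I cluster named "replication" (so its schema is _replication - adjust to your cluster name), is to poll the sl_status view from cron and alert whenever a subscriber falls behind:

    -- Minimal monitoring sketch; the thresholds are placeholders.
    -- Run it periodically against the origin node: any rows returned
    -- mean a subscriber is further behind than you want to tolerate.
    SELECT st_origin,
           st_received,
           st_lag_num_events,   -- events not yet confirmed by the subscriber
           st_lag_time          -- how far behind in wall-clock time
      FROM _replication.sl_status
     WHERE st_lag_num_events > 100
        OR st_lag_time > interval '5 minutes';

A backlog that only ever grows is exactly what the lockup described above looks like from the outside.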