On Mon, Jul 29, 2013 at 3:02 AM, Ben Chobot <bench@xxxxxxxxxxxxxxx> wrote:
> Anybody?
>
> On Jul 3, 2013, at 3:23 PM, Ben Chobot wrote:
>
> We have an async streaming setup using 9.1.9 and 3 nodes - let's call them
> A, B, and C. A is the master; B and C are slaves. Today, A crashed, so we
> made B the master and told C to follow along with the switch by changing
> the primary_conninfo in its recovery.conf, making sure the history file had
> made it to the WAL archive, and then restarting it. That's worked very well
> for us in the past, but not so much today. When C came back online, it
> started complaining about missing WALs:
>
> [...]
> LOG:  streaming replication successfully connected to primary
> 2013-07-03T21:23:31.123647+00:00 pgdb41-vpc postgres[29754]: [3-1] db=,user=
> FATAL:  could not receive data from WAL stream: FATAL:  requested WAL
> segment 000000100000146A00000001 has already been removed
>
> At this point, my understanding of postgres must be wrong, because it
> appears to me that the slave is looking for WAL 146A/01 because that's
> where it reached a consistent state. However, that was in the previous
> timeline - we didn't get to timeline 0x10 until 146A/0C:
>
> # cat 00000010.history
> 15    0000000F0000146A0000000C    no recovery target specified
>
> Shouldn't postgres know to be looking for "0000000F0000146A00000001", not
> "000000100000146A00000001"? I'm trying to see which part of our process
> went wrong for us to end up in this state, but I'm missing it.
>
> For what it's worth, the new master (node B) certainly seems to have all
> the WAL files you might expect. Here are some snippets of an ls -l; all
> the files in between the snippets are present.
>
> [...]
> -rw------- 1 postgres postgres 16777216 Jul  3 21:15 0000000F0000146A00000000
> -rw------- 1 postgres postgres 16777216 Jul  3 21:15 0000000F0000146A00000001  <- the consistent state seems to be found here
> -rw------- 1 postgres postgres 16777216 Jul  3 21:15 0000000F0000146A00000002
> [...]
> -rw------- 1 postgres postgres 16777216 Jul  3 21:15 000000100000146A0000000C  <- timeline switches here
> -rw------- 1 postgres postgres 16777216 Jul  3 21:25 000000100000146A0000000D
> -rw------- 1 postgres postgres 16777216 Jul  3 21:27 000000100000146A0000000E
> -rw------- 1 postgres postgres 16777216 Jul  3 21:28 000000100000146A0000000F
> -rw------- 1 postgres postgres 16777216 Jul  3 21:30 000000100000146A00000010
> -rw------- 1 postgres postgres 16777216 Jul  3 21:32 000000100000146A00000011
> -rw------- 1 postgres postgres 16777216 Jul  3 21:34 000000100000146A00000012

I think WAL recycling on the standby names the recycled segments with the
latest timeline ID (0x10 in this case), which creates WAL files that
shouldn't exist, such as 000000100000146A00000001 instead of
0000000F0000146A00000001. A patch applied recently, after 9.1.9 (but not in
any stable release so far), solves this problem as far as I can see. Try
applying it and see if that helps:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=424cc31a3785bd01108e6f4b20941c6442d3d2d0
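To make the file names easier to compare: a WAL segment name is 24 hex
digits, split into three 8-digit fields - timeline ID, log, and segment.
Here's a minimal sketch in Python (the helper is only for illustration, not
anything from PostgreSQL itself):

    # Split a 24-hex-digit WAL segment file name into its three
    # 8-digit fields: timeline ID, log, and segment. Illustrative
    # helper, not a PostgreSQL API.
    def parse_wal_name(name):
        assert len(name) == 24
        return int(name[:8], 16), int(name[8:16], 16), int(name[16:24], 16)

    # The segment the standby asked for vs. the one the new master has:
    for n in ("000000100000146A00000001", "0000000F0000146A00000001"):
        tli, log, seg = parse_wal_name(n)
        print("%s -> timeline %#x, log %#x, seg %#x" % (n, tli, log, seg))
    # 000000100000146A00000001 -> timeline 0x10, log 0x146a, seg 0x1
    # 0000000F0000146A00000001 -> timeline 0xf, log 0x146a, seg 0x1

The two names differ only in the timeline field, which is why the request
for the segment under timeline 0x10 fails even though the same log/segment
does exist on the new master under timeline 0xF.

--
Amit Langote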