On 03/11/2014 05:59 PM, Pawel Veselov wrote:
> On Tue, Mar 11, 2014 at 9:15 AM, Joao Eduardo Luis <joao.luis@xxxxxxxxxxx> wrote:
>> On 03/10/2014 10:30 PM, Pawel Veselov wrote:
>>> Now, I'm getting this. Maybe any idea what can be done to straighten this up?
>>
>> This is weird. Can you please share the steps taken until this was triggered, as well as the rest of the log?
>
> At this point, no, sorry. This whole thing started with migrating from 0.56.7 to 0.72.2. First, we started seeing failed assertions of (version == pg_map.version) in PGMonitor.cc:273, but only on one monitor (d). I attempted to resync the failing monitor (d) from (c) with --force-sync. (d) started to work, but then (c) started to fail with the (version == pg_map.version) assertion. So I tried re-syncing (c) from (d) with --force-sync. That's when (c) started to fail with this particular (ret == 0) assertion. I don't think the resync actually worked at all at that point.
Considering you were upgrading from bobtail, any issues you found after the upgrade may have had something to do with an improper store conversion -- usually due to the monitor somehow being killed (explicitly or inadvertently) during conversion. Or it may not have, but we will never know without logs from back then.
Based on this, my guess is that you managed to bork the mon stores of both 'c' and 'd'. See, when you force a sync you're basically telling the monitor to delete its store's contents and sync from somebody else. If 'c' had a broken store after the conversion, that would have been propagated to 'd'. Once you forced the sync of 'c', then the problem would have been propagated from 'd' to 'c'.
> I didn't find a way to fix this quickly enough, so I restored the mon directories from back-up and started again. The (version == pg_map.version) assertion came back: my back-up was taken before I tried the force sync, but not before the migration started (it was stupid of me not to back up before migrating). (That's the point when I tried all kindsa crazy stuff for a while.) After some poking around, what I ended up doing was plain removing the 'store.db' directory from the monitor fs and starting the monitors. That just re-initiated the migration, and this time it was done in the absence of client requests, and one monitor at a time.
And in a case like this, I would think this was a smart choice, allowing the monitors to reconvert the store from the old plain, file-based format to the new store.db format. Given it worked, my guess is that the source of all your issues was an improperly converted monitor store -- but, once again, without the logs we can't ever be sure. :(
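For anyone following along, the recovery step described above can be sketched as a small script. This is a hedged sketch, not a verbatim command from the thread: the function name is mine, the paths assume the default layout (/var/lib/ceph/mon/ceph-<id>), and the mon daemon must be stopped before running it. The key points, per the discussion, are to back up the whole mon directory first and to remove only store.db, leaving the old plain file-based store in place so the monitor re-runs the conversion on next start.

```shell
#!/bin/sh
# Sketch: back up a monitor's data dir, then remove store.db so the
# bobtail->emperor store conversion re-runs on next start.
# Assumes the default path layout and that the mon daemon is stopped.

backup_and_reset_store() {
    mon_data="$1"                     # e.g. /var/lib/ceph/mon/ceph-c
    backup="${mon_data}.$(date +%s).tar.gz"

    # Archive the entire mon directory, store.db included.
    tar czf "$backup" -C "$(dirname "$mon_data")" "$(basename "$mon_data")" || return 1

    # Remove only store.db; the old file-based store stays behind, so the
    # monitor will redo the conversion when it starts up.
    rm -rf "${mon_data}/store.db"
}

# Usage (one monitor at a time, daemon stopped):
#   backup_and_reset_store /var/lib/ceph/mon/ceph-c
```

Doing this one monitor at a time, with no client load, matches what reportedly made the reconversion succeed.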
-Joao
     0> 2014-03-10 22:26:23.757166 7fc0397e5700 -1 mon/AuthMonitor.cc: In function 'virtual void AuthMonitor::create_initial()' thread 7fc0397e5700 time 2014-03-10 22:26:23.755442
 mon/AuthMonitor.cc: 101: FAILED assert(ret == 0)

 ceph version 0.72.2 (a913ded2ff138aefb8cb84d347d72164099cfd60)
 1: (AuthMonitor::create_initial()+0x4d8) [0x637bb8]
 2: (PaxosService::_active()+0x51b) [0x594fcb]
 3: (Context::complete(int)+0x9) [0x565499]
 4: (finish_contexts(CephContext*, std::list<Context*, std::allocator<Context*> >&, int)+0x95) [0x5698b5]
 5: (Paxos::handle_accept(MMonPaxos*)+0x885) [0x589595]
 6: (Paxos::dispatch(PaxosServiceMessage*)+0x28b) [0x58d66b]
 7: (Monitor::dispatch(MonSession*, Message*, bool)+0x4f0) [0x563620]
 8: (Monitor::_ms_dispatch(Message*)+0x1fb) [0x5639fb]
 9: (Monitor::ms_dispatch(Message*)+0x32) [0x57f212]
--
Joao Eduardo Luis
Software Engineer | http://inktank.com | http://ceph.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com