On Tue, 23 Jul 2013, Sage Weil wrote:
> On Tue, 23 Jul 2013, Stefan Priebe - Profihost AG wrote:
> > I had the same reported some days ago.
>
> Yeah, it's in the tracker as bug #5704 and we're working on it right now.

Thanks! Joao just identified the bug. There is a workaround in
wip-cuttlefish-osdmap that you can use to get your mons up immediately. A
fix for the original bug is coming shortly.

    http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/wip-cuttlefish-osdmap/

(Packages for other distros are also available at gitbuilder.ceph.com.)

sage

>
> sage
>
> >
> > Stefan
> >
> > On 23.07.2013 14:11, Piotr Lorek wrote:
> > > Hi,
> > >
> > > I have the same problem as Peter. I updated ceph from 0.61.4-1raring
> > > to 0.61.5-1raring and all monitors started with no problems, but after
> > > a reboot they all fail.
> > >
> > > Some logs:
> > >
> > > 2013-07-23 12:30:19.242684 7fd78e3927c0  0 ceph version 0.61.5 (8ee10dc4bb73bdd918873f29c70eedc3c7ef1979), process ceph-mon, pid 9983
> > > 2013-07-23 12:30:19.340399 7fd78e3927c0  1 mon.vm2@-1(probing) e1 preinit fsid c2a5b2b7-368b-457c-8e30-de5bad9a8f2b
> > > 2013-07-23 12:30:19.355952 7fd78e3927c0 -1 mon/OSDMonitor.cc: In function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread 7fd78e3927c0 time 2013-07-23 12:30:19.355114
> > > mon/OSDMonitor.cc: 132: FAILED assert(latest_bl.length() != 0)
> > >
> > >  ceph version 0.61.5 (8ee10dc4bb73bdd918873f29c70eedc3c7ef1979)
> > >  1: (OSDMonitor::update_from_paxos(bool*)+0x2525) [0x514d65]
> > >  2: (PaxosService::refresh(bool*)+0xf8) [0x501388]
> > >  3: (Monitor::refresh_from_paxos(bool*)+0x6f) [0x4a5fdf]
> > >  4: (Monitor::init_paxos()+0x95) [0x4a6155]
> > >  5: (Monitor::preinit()+0x6cd) [0x4c88cd]
> > >  6: (main()+0x193d) [0x498f7d]
> > >  7: (__libc_start_main()+0xf5) [0x7fd78c56bea5]
> > >  8: /usr/bin/ceph-mon() [0x49b5a9]
> > > NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
> > >
> > > --- begin dump of recent events ---
> > >    -25> 2013-07-23 12:30:19.241487 7fd78e3927c0  5 asok(0x32120e0) register_command perfcounters_dump hook 0x320c010
> > >    -24> 2013-07-23 12:30:19.241519 7fd78e3927c0  5 asok(0x32120e0) register_command 1 hook 0x320c010
> > >    -23> 2013-07-23 12:30:19.241524 7fd78e3927c0  5 asok(0x32120e0) register_command perf dump hook 0x320c010
> > >    -22> 2013-07-23 12:30:19.241530 7fd78e3927c0  5 asok(0x32120e0) register_command perfcounters_schema hook 0x320c010
> > >    -21> 2013-07-23 12:30:19.241532 7fd78e3927c0  5 asok(0x32120e0) register_command 2 hook 0x320c010
> > >    -20> 2013-07-23 12:30:19.241533 7fd78e3927c0  5 asok(0x32120e0) register_command perf schema hook 0x320c010
> > >    -19> 2013-07-23 12:30:19.241538 7fd78e3927c0  5 asok(0x32120e0) register_command config show hook 0x320c010
> > >    -18> 2013-07-23 12:30:19.241542 7fd78e3927c0  5 asok(0x32120e0) register_command config set hook 0x320c010
> > >    -17> 2013-07-23 12:30:19.241545 7fd78e3927c0  5 asok(0x32120e0) register_command log flush hook 0x320c010
> > >    -16> 2013-07-23 12:30:19.241548 7fd78e3927c0  5 asok(0x32120e0) register_command log dump hook 0x320c010
> > >    -15> 2013-07-23 12:30:19.241550 7fd78e3927c0  5 asok(0x32120e0) register_command log reopen hook 0x320c010
> > >    -14> 2013-07-23 12:30:19.242684 7fd78e3927c0  0 ceph version 0.61.5 (8ee10dc4bb73bdd918873f29c70eedc3c7ef1979), process ceph-mon, pid 9983
> > >    -13> 2013-07-23 12:30:19.244525 7fd78e3927c0  5 asok(0x32120e0) init /var/run/ceph/ceph-mon.vm2.asok
> > >    -12> 2013-07-23 12:30:19.244540 7fd78e3927c0  5 asok(0x32120e0) bind_and_listen /var/run/ceph/ceph-mon.vm2.asok
> > >    -11> 2013-07-23 12:30:19.244568 7fd78e3927c0  5 asok(0x32120e0) register_command 0 hook 0x320a0b8
> > >    -10> 2013-07-23 12:30:19.244577 7fd78e3927c0  5 asok(0x32120e0) register_command version hook 0x320a0b8
> > >     -9> 2013-07-23 12:30:19.244583 7fd78e3927c0  5 asok(0x32120e0) register_command git_version hook 0x320a0b8
> > >     -8> 2013-07-23 12:30:19.244590 7fd78e3927c0  5 asok(0x32120e0) register_command help hook 0x320c0d0
> > >     -7> 2013-07-23 12:30:19.244676 7fd78a18b700  5 asok(0x32120e0) entry start
> > >     -6> 2013-07-23 12:30:19.340328 7fd78e3927c0  1 -- 10.110.128.202:6789/0 learned my addr 10.110.128.202:6789/0
> > >     -5> 2013-07-23 12:30:19.340342 7fd78e3927c0  1 accepter.accepter.bind my_inst.addr is 10.110.128.202:6789/0 need_addr=0
> > >     -4> 2013-07-23 12:30:19.340362 7fd78e3927c0  5 adding auth protocol: cephx
> > >     -3> 2013-07-23 12:30:19.340365 7fd78e3927c0  5 adding auth protocol: cephx
> > >     -2> 2013-07-23 12:30:19.340399 7fd78e3927c0  1 mon.vm2@-1(probing) e1 preinit fsid c2a5b2b7-368b-457c-8e30-de5bad9a8f2b
> > >     -1> 2013-07-23 12:30:19.354957 7fd78e3927c0  4 mon.vm2@-1(probing).mds e7956 new map
> > >      0> 2013-07-23 12:30:19.355952 7fd78e3927c0 -1 mon/OSDMonitor.cc: In function 'virtual void OSDMonitor::update_from_paxos(bool*)' thread 7fd78e3927c0 time 2013-07-23 12:30:19.355114
> > > mon/OSDMonitor.cc: 132: FAILED assert(latest_bl.length() != 0)
> > >
> > >  ceph version 0.61.5 (8ee10dc4bb73bdd918873f29c70eedc3c7ef1979)
> > >  1: (OSDMonitor::update_from_paxos(bool*)+0x2525) [0x514d65]
> > >  2: (PaxosService::refresh(bool*)+0xf8) [0x501388]
> > >  3: (Monitor::refresh_from_paxos(bool*)+0x6f) [0x4a5fdf]
> > >  4: (Monitor::init_paxos()+0x95) [0x4a6155]
> > >  5: (Monitor::preinit()+0x6cd) [0x4c88cd]
> > >  6: (main()+0x193d) [0x498f7d]
> > >  7: (__libc_start_main()+0xf5) [0x7fd78c56bea5]
> > >  8: /usr/bin/ceph-mon() [0x49b5a9]
> > > NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
> > >
> > > Regards,
> > >
> > > Peter L

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
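
For readers who land on this thread with the same crash: the assert at
mon/OSDMonitor.cc:132 fires while ceph-mon is rebuilding its in-memory
OSDMap state from its local store during startup, before the daemon ever
joins quorum, which is why every restart dies the same way. Below is a
minimal, self-contained C++ sketch of the shape of that check, using
hypothetical names (the bufferlist stand-in, MonStore, get_full,
latest_full_epoch are illustrative, not the actual MonitorDBStore or
OSDMonitor code): the monitor looks up the newest full-map blob its store
claims to hold, and aborts if the lookup comes back empty.

#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>

// Stand-in for ceph::bufferlist; only length() matters for this sketch.
struct bufferlist {
  std::string data;
  std::size_t length() const { return data.size(); }
};

// Hypothetical view of the monitor's key/value store: a pointer to the
// newest full OSDMap epoch plus the encoded full maps themselves.
struct MonStore {
  std::uint64_t latest_full_epoch = 0;            // 0 = no full map recorded
  std::map<std::uint64_t, bufferlist> full_maps;  // epoch -> encoded full map

  bufferlist get_full(std::uint64_t epoch) const {
    auto it = full_maps.find(epoch);
    return it == full_maps.end() ? bufferlist{} : it->second;
  }
};

// The startup path trusts the "latest full" pointer: if the pointer is
// set but the blob it names is missing or empty (apparently the state
// behind bug #5704), the daemon asserts out before it can join quorum.
void update_from_paxos(const MonStore& store) {
  if (store.latest_full_epoch == 0)
    return;  // nothing recorded yet; would replay incremental maps instead

  bufferlist latest_bl = store.get_full(store.latest_full_epoch);
  assert(latest_bl.length() != 0);  // analogue of mon/OSDMonitor.cc:132
  // ...decode latest_bl into the in-memory OSDMap and continue startup...
}

int main() {
  MonStore store;
  store.latest_full_epoch = 123;  // pointer set, but no matching blob stored
  update_from_paxos(store);       // aborts, mirroring the crash in the logs
}

The wip-cuttlefish-osdmap packages Sage links above presumably get mons
past this state by repairing or tolerating the missing full map; see
tracker bug #5704 for the actual fix.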