> On 12 July 2016 at 23:10, Chandrasekhar Reddy <chandrasekhar.r@xxxxxxxxxx> wrote:
>
> Hi Wido,
>
> Thank you for helping out. It worked like a charm. I followed these steps:
>
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/#removing-monitors
>
> Can you help by sharing any good docs that deal with backups?

Backups for Ceph really depend on the use case; there is no general recommendation for backups.

With Jewel, for example, you can use RBD mirroring to back up RBD data, or with CephFS you can use old-fashioned rsync.

Wido
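For anyone following up on the RBD mirroring suggestion, a minimal sketch of the Jewel-style setup is below. The pool name (rbd), image name (vm-disk-1), cluster names (site-a/site-b) and the use of client.admin are placeholders rather than anything from this thread, so check the rbd-mirror documentation for your release before copying it:

    # Enable pool-mode mirroring on the pool in both clusters
    # ('rbd', 'site-a' and 'site-b' are placeholder names).
    rbd --cluster site-a mirror pool enable rbd pool
    rbd --cluster site-b mirror pool enable rbd pool

    # Mirrored images need the exclusive-lock and journaling features.
    rbd --cluster site-a feature enable rbd/vm-disk-1 exclusive-lock
    rbd --cluster site-a feature enable rbd/vm-disk-1 journaling

    # Register each cluster as a peer of the other.
    rbd --cluster site-a mirror pool peer add rbd client.admin@site-b
    rbd --cluster site-b mirror pool peer add rbd client.admin@site-a

    # An rbd-mirror daemon on the backup cluster then replays the journals,
    # e.g. via the packaged service: systemctl start ceph-rbd-mirror@admin

Pool mode mirrors every image in the pool that has journaling enabled; image mode (rbd mirror image enable) lets you pick images individually.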
> Thanks,
> Chandra.
>
> On Tue, Jul 12, 2016 at 10:37 PM, Chandrasekhar Reddy <chandrasekhar.r@xxxxxxxxxx> wrote:
>
> > Thanks, Wido. I will give it a try.
> >
> > Thanks,
> > Chandra
> >
> > On Tue, Jul 12, 2016 at 10:35 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> >
> > > On 12 July 2016 at 19:00, Chandrasekhar Reddy <chandrasekhar.r@xxxxxxxxxx> wrote:
> > >
> > > Thanks for the quick reply..
> > >
> > > Do I need to remove cephx on the OSD nodes as well?
> >
> > Disable cephx on all nodes in the ceph.conf.
> >
> > See: http://docs.ceph.com/docs/master/rados/configuration/auth-config-ref/
> >
> > Add this to the [global] section:
> >
> > auth_cluster_required = none
> > auth_service_required = none
> > auth_client_required = none
> >
> > You still have the problem that your monitor map contains 3 monitors. You removed them from the ceph.conf, but that is not sufficient. You will need to inject a monmap with just the one monitor into the remaining monitor.
> >
> > BEFORE YOU DO, CREATE A BACKUP OF THE MON'S DATA STORE.
> >
> > I don't know the commands off the top of my head, but 'monmaptool' is something you will need/want.
> >
> > Wido
> >
> > > Thanks,
> > > Chandra
> > >
> > > On Tue, Jul 12, 2016 at 10:22 PM, Oliver Dzombic <info@xxxxxxxxxxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > First aid: remove cephx authentication.
> > >
> > > --
> > > Mit freundlichen Gruessen / Best regards
> > >
> > > Oliver Dzombic
> > > IP-Interactive
> > >
> > > mailto:info@xxxxxxxxxxxxxxxxx
> > >
> > > Anschrift:
> > >
> > > IP Interactive UG ( haftungsbeschraenkt )
> > > Zum Sonnenberg 1-3
> > > 63571 Gelnhausen
> > >
> > > HRB 93402 beim Amtsgericht Hanau
> > > Geschäftsführung: Oliver Dzombic
> > >
> > > Steuer Nr.: 35 236 3622 1
> > > UST ID: DE274086107
> > >
> > > On 12.07.2016 at 18:45, Chandrasekhar Reddy wrote:
> > > >
> > > > Hi Guys,
> > > >
> > > > Need help. I had 3 monitor nodes and 2 went down (disks got corrupted). After some time even the 3rd monitor went unresponsive, so I rebooted the 3rd node. It came up, but Ceph is not working.
> > > >
> > > > So I tried to remove the 2 failed monitors from the ceph.conf file and restarted the mon and OSDs, but Ceph is still not up.
> > > >
> > > > Please find the log files below.
> > > >
> > > > 1. Log file of ceph-mon.openstack01-vm001.log (monitor node)
> > > >    http://paste.openstack.org/show/530944/
> > > >
> > > > 2. ceph.conf
> > > >    http://paste.openstack.org/show/530945/
> > > >
> > > > 3. ceph -w output
> > > >    http://paste.openstack.org/show/530947/
> > > >
> > > > 4. ceph mon dump
> > > >    http://paste.openstack.org/show/530950/
> > > >
> > > > The errors I see are:
> > > >
> > > > monclient(hunting): authenticate timed out after 300
> > > >
> > > > librados: client.admin authentication error (110) Connection timed out
> > > >
> > > > Any suggestions? Please help...
> > > >
> > > > Thanks
> > > > Chandra
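To round out the monmap discussion above, here is a rough sketch of the procedure Wido outlines (extract, edit, inject), assuming the surviving monitor's ID is openstack01-vm001 (taken from the log file name) and the default mon data path; mon2 and mon3 stand in for the two dead monitors. Verify the steps against the add-or-rm-mons documentation before running them on a real cluster:

    # Run as root on the surviving monitor host; the service name
    # depends on your distro and init system.
    systemctl stop ceph-mon@openstack01-vm001

    # FIRST: back up the mon's data store (default path shown; adjust if yours differs).
    cp -a /var/lib/ceph/mon/ceph-openstack01-vm001 /root/mon-store-backup-$(date +%F)

    # Extract the current monmap from the surviving monitor.
    ceph-mon -i openstack01-vm001 --extract-monmap /tmp/monmap

    # Inspect it, then remove the two dead monitors ('mon2'/'mon3' are placeholders).
    monmaptool --print /tmp/monmap
    monmaptool /tmp/monmap --rm mon2
    monmaptool /tmp/monmap --rm mon3

    # Inject the edited monmap and start the monitor again.
    ceph-mon -i openstack01-vm001 --inject-monmap /tmp/monmap
    systemctl start ceph-mon@openstack01-vm001

The backup copy matters because --inject-monmap rewrites the monitor's local store; if the edit goes wrong, that copy is the only way back.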