Re: Recovery question

Thanks Robert -

Where would that monitor data (database) be found?

--
Peter Hinman

On 7/29/2015 3:39 PM, Robert LeBlanc wrote:

If you built new monitors, this will not work. You would have to
recover the monitor data (database) from at least one of the original
monitors and rebuild that monitor. The new monitors would not have any
information about the pools, OSDs, PGs, etc., that an OSD needs in
order to rejoin.
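
For reference, a Hammer (0.94.x) monitor keeps its database as a
leveldb store under /var/lib/ceph/mon/<cluster>-<id>/. A minimal
sketch of pulling it off a surviving disk, assuming the default
cluster name "ceph" and a hypothetical monitor id "mon1" and target
host "newhost":

  # Stop the mon (if it still runs) so the store is quiescent
  service ceph stop mon.mon1
  # The leveldb database itself
  ls /var/lib/ceph/mon/ceph-mon1/store.db
  # Copy the whole mon data dir (store.db, keyring, ...) to the rebuilt host
  rsync -a /var/lib/ceph/mon/ceph-mon1/ root@newhost:/var/lib/ceph/mon/ceph-mon1/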
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Wed, Jul 29, 2015 at 2:46 PM, Peter Hinman wrote:
Hi Greg -

So at the moment, I seem to be trying to resolve a permission error.

  === osd.3 ===
  Mounting xfs on stor-2:/var/lib/ceph/osd/ceph-3
  2015-07-29 13:35:08.809536 7f0a0262e700  0 librados: osd.3 authentication error (1) Operation not permitted
  Error connecting to cluster: PermissionError
  failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.3 --keyring=/var/lib/ceph/osd/ceph-3/keyring osd crush create-or-move -- 3 3.64 host=stor-2 root=default'
  ceph-disk: Error: ceph osd start failed: Command '['/usr/sbin/service', 'ceph', '--cluster', 'ceph', 'start', 'osd.3']' returned non-zero exit status 1
  ceph-disk: Error: One or more partitions failed to activate


Is there a way to identify the cause of this PermissionError?  I've copied
the client.bootstrap-osd key from the output of ceph auth list and pasted
it into /var/lib/ceph/bootstrap-osd/ceph.keyring, but that has not resolved
the error.
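
For what it's worth, the usual first check for an osd authentication
failure is whether the key the monitors hold matches the one on disk.
A minimal sketch (the caps shown are the stock OSD caps; adjust if
yours differ):

  # What the monitors think osd.3's key is
  ceph auth get osd.3
  # What the OSD actually has on disk
  cat /var/lib/ceph/osd/ceph-3/keyring
  # If they differ (or osd.3 is unknown), register the on-disk key
  ceph auth add osd.3 mon 'allow profile osd' osd 'allow *' \
      -i /var/lib/ceph/osd/ceph-3/keyring

Note that against brand-new monitors, ceph auth get osd.3 will simply
fail, which is consistent with Robert's point above.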

But it sounds like you are saying that even once I get this resolved, I have
no hope of recovering the data?

--
Peter Hinman

On 7/29/2015 1:57 PM, Gregory Farnum wrote:

This sounds like you're trying to reconstruct a cluster after destroying the
monitors. That is...not going to work well. The monitors define the cluster,
and you can't move OSDs into different clusters. We have ideas for how to
reconstruct monitors, and it can be done manually with a lot of hassle, but
the process isn't written down and there aren't really tools to help with it.
:/
-Greg
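
For readers finding this thread later: releases well after Hammer
added a ceph-objectstore-tool operation that rebuilds a monitor store
from the map copies the OSDs hold. It does not exist in 0.94.x; the
sketch below is only the general shape of that later procedure:

  # Run once per OSD data dir, accumulating into a single rebuilt store
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
      --op update-mon-db --mon-store-path /tmp/mon-store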

On Wed, Jul 29, 2015 at 5:48 PM, Peter Hinman wrote:
I've got a situation that seems on the surface like it should be
recoverable, but I'm struggling to understand how to do it.

I had a cluster of 3 monitors, 3 osd disks, and 3 journal ssds. After
multiple hardware failures, I pulled the 3 osd disks and 3 journal ssds
and am attempting to bring them back up again on new hardware in a new
cluster.  I see plenty of documentation on how to zap and initialize and
add "new" osds, but I don't see anything on rebuilding with existing osd
disks.

Could somebody provide guidance on how to do this?  I'm running 0.94.2 on
all machines.
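
For context on the existing-disk path: ceph-disk can activate an
already-prepared data partition without zapping it, but only when the
cluster fsid and auth stamped on the disk still match the running
cluster, which is exactly what new monitors break. A hedged sketch,
assuming the old data partition is /dev/sdb1:

  # Inspect which cluster the disk thinks it belongs to
  mount /dev/sdb1 /mnt
  cat /mnt/ceph_fsid /mnt/whoami
  umount /mnt
  # Then let ceph-disk bring it up against the running cluster
  ceph-disk activate /dev/sdb1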

Thanks,

--
Peter Hinman





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


