An update: the Ceph node is back.
--snip--
ceph -s
  cluster 23d53990-4458-4faf-a598-9c60036a51f3
   health HEALTH_OK
   monmap e1: 3 mons at {mon01=172.16.101.5:6789/0,mon02=172.16.101.6:6789/0,mon03=172.16.101.7:6789/0}, election epoch 30, quorum 0,1,2 mon01,mon02,mon03
   osdmap e6653: 8 osds: 6 up, 6 in
    pgmap v11876: 2752 pgs, 7 pools, 5864 MB data, 748 objects
          18025 MB used, 11155 GB / 11172 GB avail
              2752 active+clean
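(For a quicker view of what is missing, these give the same picture in compact form -- the two OSDs from the crashed node are out and everything is active+clean on the remaining six:)

ceph health detail   # lists any problem pgs/OSDs; should be clean here
ceph osd stat        # compact osdmap summary: 8 osds: 6 up, 6 in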
ceph osd tree
# id  weight  type name            up/down  reweight
-1    14.56   root default
-6    14.56     datacenter dc1
-7    14.56       row row1
-9    14.56         rack rack2
-2    3.64            host ceph01
0     1.82              osd.0        down     0
1     1.82              osd.1        down     0
-3    3.64            host ceph02
2     1.82              osd.2        up       1
3     1.82              osd.3        up       1
-4    3.64            host ceph03
4     1.82              osd.4        up       1
5     1.82              osd.5        up       1
-5    3.64            host ceph04
6     1.82              osd.6        up       1
7     1.82              osd.7        up       1
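As far as I can tell the cluster still has osd.0 and osd.1 registered (down, weight 0), which is what should make it possible to keep the ids. To double-check that the entries and their cephx keys are still there, something like:

ceph osd dump | grep '^osd\.[01] '   # both should still be listed as down/out
ceph auth get osd.0                  # the OSD's cephx key should still exist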
ceph-disk list
WARNING:ceph-disk:Old blkid does not support ID_PART_ENTRY_* fields, trying sgdisk; may not correctly identify ceph volumes with dmcrypt
/dev/sda :
 /dev/sda1 other, LVM2_member
/dev/sdb :
 /dev/sdb1 ceph data, prepared, cluster ceph, osd.0
/dev/sdc :
 /dev/sdc1 ceph data, prepared, cluster ceph, osd.1
/dev/sdd :
 /dev/sdd1 other
 /dev/sdd2 other
 /dev/sdd3 other
 /dev/sdd4 other
/dev/sde :
 /dev/sde1 other, xfs, mounted on /boot
 /dev/sde2 other, LVM2_member
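If I read the ceph-disk output right, /dev/sdb1 and /dev/sdc1 still hold the prepared data partitions for osd.0 and osd.1 (as far as I understand, whoami and the OSD keyring live on the data partition), so my first thought was to put ceph and /etc/ceph/ceph.conf back on the reinstalled node and simply re-activate them, roughly:

# on the reinstalled node, after restoring /etc/ceph/ceph.conf
# (and the bootstrap-osd keyring under /var/lib/ceph/bootstrap-osd/)
ceph-disk activate /dev/sdb1    # should mount the partition and start osd.0
ceph-disk activate /dev/sdc1    # same for osd.1
ceph osd tree                   # check that both come back up under host ceph01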
I would like to preserve the OSD numbers.
Is it enough if I recreate the mount points, or is it better to drop them and start over?
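If re-activating in place is not enough, my fallback idea would be to drop the stale entries and re-prepare the disks; as far as I know the lowest free id is handed out when an OSD is created, so osd.0 and osd.1 should get their old numbers back as long as no other OSD is added in between:

# fallback only -- this discards the old OSD entries on the cluster side
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0
# (repeat for osd.1), then wipe and re-prepare the data disks
ceph-disk zap /dev/sdb
ceph-disk prepare /dev/sdb
ceph-disk activate /dev/sdb1    # registers a new OSD; the lowest free id (0) should be reused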
On Fri, Apr 10, 2015 at 9:52 AM, 10 minus <t10tennn@xxxxxxxxx> wrote:
Hi,
We have a four-node Firefly test setup with 2 OSDs per node, and one of the nodes just crashed (root disk went bust). I'm in the process of reinstalling the node.
Question: What's the best way to bring it back into the cluster?
Thanks in advance