I have a test cluster with 3 nodes:
1 - osd.0 mon.a mds.a
2 - osd.1
3 - empty
I created osd.2 like this:
node1# ceph osd create
node3# mkdir /var/lib/ceph/osd/ceph-2
node3# mkfs.xfs /dev/sdb
node3# mount /dev/sdb /var/lib/ceph/osd/ceph-2
node3# ceph-osd -i 2 --mkfs --mkkey
Then I copied the keyring from node 3 to node 1 (as root/keyring) and registered it:
node1# ceph auth add osd.2 osd 'allow *' mon 'allow rwx' -i keyring
node1# ceph osd crush set 2 1 root=default rack=unknownrack host=s3
node3# service ceph start
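One thing worth checking here (an assumption on my part, since the ceph.conf isn't shown): with the sysvinit script, "service ceph start" only starts daemons that have a section in /etc/ceph/ceph.conf on that host. So node 3 would need an entry along these lines (host name taken from the crush set command above):

[osd.2]
    host = s3

Without such a section, the init script starts nothing and the new OSD never comes up, even though ceph-osd --mkfs and ceph auth add succeeded.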
node1# ceph -s
health HEALTH_OK
monmap e1: 1 mons at {a=x.x.x.x:6789/0}, election epoch 1, quorum 0 a
osdmap e135: 3 osds: 2 up, 2 in
pgmap v6454: 576 pgs: 576 active+clean; 179 MB data, 2568 MB used, 137 GB / 139 GB avail
mdsmap e4: 1/1/1 up {0=a=up:active}
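Note the osdmap line: 3 osds, but only 2 up and 2 in, i.e. osd.2 never actually joined the cluster. A few things I would check (assuming the default log location; adjust paths if your setup differs):

node1# ceph osd tree
node3# ceph-osd -i 2 -d
node3# tail /var/log/ceph/osd.2.log

"ceph osd tree" shows whether osd.2 exists in the crush map and is marked down; running ceph-osd in the foreground with -d prints debug output directly, which usually reveals whether it is a keyring/auth problem or a monitor connectivity problem.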
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com