Hi,
here is what I do to reproduce the failure:
root@cephadmin:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.26340 root default
-2 0.08780     host cephosd01
 0 0.04390         osd.0        up     1.00000  1.00000
 1 0.04390         osd.1        up     1.00000  1.00000
-3 0.08780     host cephosd02
 2 0.04390         osd.2        up     1.00000  1.00000
 3 0.04390         osd.3        up     1.00000  1.00000
-4 0.08780     host cephosd03
 4 0.04390         osd.4        up     1.00000  1.00000
 5 0.04390         osd.5        up     1.00000  1.00000
root@cephadmin:~# ssh cephosd01 shutdown -r
Shutdown scheduled for Mon 2016-07-11 14:44:17 CEST, use 'shutdown -c' to cancel.
root@cephadmin:~# ssh cephosd01 uptime
14:44:45 up 0 min, 0 users, load average: 0.44, 0.10, 0.03
root@cephadmin:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.26340 root default
-2 0.08780     host cephosd01
 0 0.04390         osd.0        down   1.00000  1.00000
 1 0.04390         osd.1        down   1.00000  1.00000
-3 0.08780     host cephosd02
 2 0.04390         osd.2        up     1.00000  1.00000
 3 0.04390         osd.3        up     1.00000  1.00000
-4 0.08780     host cephosd03
 4 0.04390         osd.4        up     1.00000  1.00000
 5 0.04390         osd.5        up     1.00000  1.00000
Here are some logs from osd.0:
root@cephosd01:~# tail /var/log/ceph/ceph-osd.0.log
2016-07-11 14:44:24.588509 7f8228d72800  1 -- :/2152 shutdown complete.
2016-07-11 14:44:39.243944 7efe11c0e800  0 set uid:gid to 64045:64045 (ceph:ceph)
2016-07-11 14:44:39.258622 7efe11c0e800  0 ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374), process ceph-osd, pid 1018
2016-07-11 14:44:39.268743 7efe11c0e800 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2016-07-11 14:44:40.578479 7f05a6b42800  0 set uid:gid to 64045:64045 (ceph:ceph)
2016-07-11 14:44:40.578591 7f05a6b42800  0 ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374), process ceph-osd, pid 1177
2016-07-11 14:44:40.578771 7f05a6b42800 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2016-07-11 14:44:41.794321 7fc756fc8800  0 set uid:gid to 64045:64045 (ceph:ceph)
2016-07-11 14:44:41.794423 7fc756fc8800  0 ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374), process ceph-osd, pid 1378
2016-07-11 14:44:41.794601 7fc756fc8800 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
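
The "(2) No such file or directory" errors right after boot look as if ceph-osd is started before the data partition is mounted. A quick way to confirm that (just a sketch, nothing assumed beyond the paths already shown in the log and the mount command below) would be:

# is the OSD data partition mounted under the expected path?
mount | grep /var/lib/ceph/osd/ceph-0
# an empty directory here (no current/, fsid, superblock) would explain the error above
ls -l /var/lib/ceph/osd/ceph-0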
But I can start it manually:
root@cephosd01:~# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
root@cephosd01:~# ceph-osd -i 0
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
root@cephadmin:~# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.26340 root default
-2 0.08780     host cephosd01
 0 0.04390         osd.0        up     1.00000  1.00000
 1 0.04390         osd.1        down   1.00000  1.00000
-3 0.08780     host cephosd02
 2 0.04390         osd.2        up     1.00000  1.00000
 3 0.04390         osd.3        up     1.00000  1.00000
-4 0.08780     host cephosd03
 4 0.04390         osd.4        up     1.00000  1.00000
 5 0.04390         osd.5        up     1.00000  1.00000
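
As a side note, instead of starting ceph-osd by hand, on Jewel the disk is normally mounted and the daemon started via ceph-disk/udev and systemd; a sketch of what should also bring the OSD back (assuming the data partition really is /dev/sdb1 and the units follow the usual ceph-osd@<id> naming):

# mount the partition and start the daemon the way the boot process is supposed to
ceph-disk activate /dev/sdb1
# or, with the data dir already mounted, start/check the systemd unit
systemctl start ceph-osd@0
systemctl status ceph-osd@0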
The OSDs have been created with:
ceph-deploy osd prepare cephosd01:sdb cephosd01:sdc
I'm not sure where to search.
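
Since automatic mounting at boot relies on the GPT partition type set by ceph-deploy/ceph-disk, one thing I would probably look at (just my assumption that this is where it goes wrong) is whether the partitions still carry the Ceph OSD partition type GUID that udev matches on, and whether the ceph user can access the devices:

# show the partition type GUID for the data partition
sgdisk --info=1 /dev/sdb
# Jewel runs ceph-osd as ceph:ceph, so the device nodes need to be accessible to it
ls -l /dev/sdb1 /dev/sdc1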
Dirk
On 11.07.2016 at 14:35, Oliver Dzombic wrote:
Hi Dirk,
without any information, it's impossible to tell you anything.
Please provide some detailed information about what is going wrong,
including error messages and so on.
As an admin you should be familiar enough with your system to give us
more information than just "it's not working". As you know, that alone
does not help.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com