OSDs down after upgrade from Hammer to Jewel

Hi all,

I'm upgrading our Ceph cluster from Hammer 0.94.9 to Jewel 10.2.6.

The cluster has 3 servers (one mon and one mds each) and another 6 servers
with 12 OSDs each.
The mons and mds have been successfully upgraded to the latest Jewel release;
however, after upgrading the first OSD server (12 OSDs), ceph is not aware of
its OSDs and they are marked as down (ceph -s output below).
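
For reference, this is roughly what I ran on that host (a sketch; the
package name assumes a Debian-style install, and the unit name assumes the
systemd units shipped with Jewel):

# pull the Jewel packages onto the OSD host
apt-get update && apt-get install -y ceph
# restart the OSD daemons so they run the Jewel binaries
systemctl restart ceph-osd.target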

ceph -s

 cluster 4a158d27-f750-41d5-9e7f-26ce4c9d2d45
     health HEALTH_WARN
[...]
            12/72 in osds are down
            noout flag(s) set
     osdmap e14010: 72 osds: 60 up, 72 in; 14641 remapped pgs
            flags noout
[...]

ceph osd tree

 3   3.64000         osd.3          down  1.00000          1.00000
 8   3.64000         osd.8          down  1.00000          1.00000
14   3.64000         osd.14         down  1.00000          1.00000
18   3.64000         osd.18         down  1.00000          1.00000
21   3.64000         osd.21         down  1.00000          1.00000
28   3.64000         osd.28         down  1.00000          1.00000
31   3.64000         osd.31         down  1.00000          1.00000
37   3.64000         osd.37         down  1.00000          1.00000
42   3.64000         osd.42         down  1.00000          1.00000
47   3.64000         osd.47         down  1.00000          1.00000
51   3.64000         osd.51         down  1.00000          1.00000
56   3.64000         osd.56         down  1.00000          1.00000

If I run this on one of the down OSDs:

ceph osd in 14
osd.14 is already in.

ceph reports it is already in, but never marks it up, and the cluster health
remains degraded.
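
As I understand it, "in" only controls data placement in CRUSH, while "up"
requires the ceph-osd daemon to actually be running and heartbeating with
the mons, so marking the OSD in can't help if the daemon never starts. I
guess the next thing to check is whether the daemons are running at all
(unit and log names assume the standard Jewel systemd install):

# is the Jewel ceph-osd daemon actually running?
systemctl status ceph-osd@14
# look for a startup failure in the logs
journalctl -u ceph-osd@14 -n 50
tail -n 50 /var/log/ceph/ceph-osd.14.log

One thing the Jewel release notes call out is that the daemons now run as
the ceph user, so the OSD data directories have to be owned by ceph:ceph
(or the daemons kept running as root via the "setuser match path" option),
e.g.:

chown -R ceph:ceph /var/lib/ceph/osd/ceph-14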

Do I have to upgrade all the OSDs to Jewel first?
Any help would be appreciated, as I'm running out of ideas.
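
For what it's worth, the daemons that are up do report their version with:

ceph tell osd.* version

so I can at least confirm which OSDs are still on Hammer.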

Thanks
Jaime

--

Jaime Ibar
High Performance & Research Computing, IS Services
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/ | jaime@xxxxxxxxxxxx
Tel: +353-1-896-3725

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


