Re: cannot reboot one of 3 nodes without locking a cluster OSDs stay in...

They are stopped gracefully. I did a reboot 2 days ago, but now it doesn't work.
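One way to verify that (a minimal sketch; `3` here is a placeholder OSD id, substitute one from the node):

    # Confirm the OSD unit is enabled in systemd
    systemctl is-enabled ceph-osd@3

    # Inspect the previous boot's journal for the unit to see the shutdown sequence
    journalctl -u ceph-osd@3 -b -1 --no-pager | tail -n 20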



2018-02-27 14:24 GMT+01:00 David Turner <drakonstein@xxxxxxxxx>:
`systemctl list-dependencies ceph.target`

I'm guessing that you might need to enable your OSDs to be managed by systemd so that they can be stopped cleanly when the server goes down.

`systemctl enable ceph-osd@{osd number}`
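For example, to enable every OSD hosted on the node in one go (a sketch, assuming the default `/var/lib/ceph/osd/ceph-<id>` data directory layout):

    # Enable each OSD unit found on this host so systemd stops it cleanly at shutdown
    for dir in /var/lib/ceph/osd/ceph-*; do
        id="${dir##*-}"              # strip everything up to the last '-' to get the OSD id
        systemctl enable "ceph-osd@${id}"
    done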

On Tue, Feb 27, 2018, 4:13 AM Philip Schroth <philip.schroth@xxxxxxxxxx> wrote:
I have a 3-node production cluster. All works fine, but I have one failing node. I replaced one disk on Sunday and everything went fine. Last night another disk broke, and Ceph nicely marks it as down. But when I wanted to reboot this node now, all remaining OSDs are kept in and not marked as down, and the whole cluster locks during the reboot of this node. Once the first failing node is back, rebooting one of the other two nodes works like a charm. Only this node I can no longer reboot without locking, which I still could on Sunday...

--
Met vriendelijke groet / With kind regards

Philip Schroth




--
Met vriendelijke groet / With kind regards

Philip Schroth
T: +31630973268

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
