Comments inline.
On 28.08.2017 18:31, hjcho616 wrote:
> I'll see what I can do on that... Looks like I may have to add
> another OSD host, as I utilized all of the SATA ports on those
> boards. =P
>
> Ronny,
>
> I am running with size=2 min_size=1. I created everything with
> ceph-deploy and didn't touch much of the pool settings... I hope
> not, but it sounds like I may have lost some files! I do want some
> of those OSDs to come back online somehow... to get that confidence
> level up. =P
This is a bad idea, as you have found out. Once your cluster is
healthy, you should look at improving this.
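
For example, once things are stable you could raise the replication
on your pools. A rough sketch, using a hypothetical pool name "rbd";
take your actual pool names from "ceph osd lspools":

    # keep 3 copies of each object, and require 2 before serving I/O
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2

Note that size=3 needs enough hosts to place three copies, so this
may have to wait until that extra OSD host is in place.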
> The dead osd.3 message is probably from me trying to stop and start
> the OSD. There were some cases where stop didn't kill the ceph-osd
> process. I just started or restarted the OSD to see if that
> worked... After that, there were some reboots, and I am not seeing
> those messages anymore...
When providing logs, try to move the old one away, do a single
startup, and post that. It is easier to read when there is a single
run in the file.
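
Something like this, assuming a systemd-based install, the default
log path, and osd.3 (adjust the id to match):

    # keep the old log around, but start the next run in a clean file
    mv /var/log/ceph/ceph-osd.3.log /var/log/ceph/ceph-osd.3.log.old

    # one single start attempt, then post the fresh log
    systemctl start ceph-osd@3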
> This is something I am running at home. I am the only user. In a
> way it is a production environment, but just driven by me. =)
>
> Do you have any suggestions to get any of those osd.3, osd.4,
> osd.5, and osd.8 to come back up without removing them? I have a
> feeling I can get some data back with some of them intact.
Just in case: even if you are not able to make them run again, that
does not automatically mean the data is lost. I have successfully
recovered lost objects using these instructions:
http://ceph.com/geen-categorie/incomplete-pgs-oh-my/
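
The short version of that post, only as a sketch: you export the
incomplete placement groups from the dead OSD's disk with
ceph-objectstore-tool and import them into a stopped, healthy OSD.
Assuming filestore OSDs, default paths, a placeholder pg id of 0.1f
(take the real ids from "ceph health detail"), and osd.12 standing
in as the healthy OSD:

    # on the dead OSD's host, with the ceph-osd process stopped
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-3 \
        --journal-path /var/lib/ceph/osd/ceph-3/journal \
        --op export --pgid 0.1f --file /tmp/0.1f.export

    # on the healthy OSD's host, also with the osd stopped
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 \
        --journal-path /var/lib/ceph/osd/ceph-12/journal \
        --op import --file /tmp/0.1f.export

Read the whole post before trying it; the order of operations
matters, and you want to keep the export files somewhere safe.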
I would start by renaming the OSD's log file, doing a single attempt
at starting the OSD, and posting that log. Have you done anything to
the OSDs that could make them not run?
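
If a normal start dies without leaving much in the log, you can also
try running the OSD in the foreground with verbose debugging. A
sketch, assuming osd.3 and an install that runs as the ceph user:

    # run in the foreground with extra osd/filestore/messenger logging
    ceph-osd -i 3 -f --setuser ceph --setgroup ceph \
        --debug_osd 20 --debug_filestore 20 --debug_ms 1

That usually shows where in startup it falls over.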
kind regards
Ronny Aasen