Re: emperor -> firefly 0.80.7 upgrade problem

Hi Sam,

> Sounds like you needed osd 20.  You can mark osd 20 lost.
> -Sam

Does not work:

# ceph osd lost 20 --yes-i-really-mean-it                                                       
osd.20 is not down or doesn't exist
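That error means `ceph osd lost` can only act on an OSD that still exists in the OSD map and is marked down. A minimal sketch of what I'd check first (assuming osd.20 is still supposed to be in the map; if it was already removed, that alone explains the message):

```shell
# "ceph osd lost" requires the OSD to exist and be marked down, so check
# its state first, mark it down if needed, then retry marking it lost.
ceph osd tree | grep osd.20        # verify osd.20 exists and see its up/down state
ceph osd down 20                   # force-mark it down if it is wrongly shown as up
ceph osd lost 20 --yes-i-really-mean-it
```

If `ceph osd tree` shows no osd.20 at all, the OSD has already been removed from the map and there is nothing left to mark lost.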


Also, here is an interesting post from October which I plan to follow:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-October/044059.html

"
Hello, all. I got some advice from the IRC channel (thanks bloodice!) that I 
temporarily reduce the min_size of my cluster (size = 2) from 2 down to 1. 
That immediately caused all of my incomplete PGs to start recovering and 
everything seemed to come back OK. I was serving an RBD from here and 
xfs_repair reported no problems. So... happy ending?

What started this all was that I was altering my CRUSH map causing significant 
rebalancing on my cluster which had size = 2. During this process I lost an 
OSD (osd.10) and eventually ended up with incomplete PGs. Knowing that I only 
lost 1 osd, I was pretty sure that I hadn't lost any data; I just couldn't get 
the PGs to recover without changing the min_size.
"

It is good that this worked for him, but it also seems like a bug that it 
worked!  (I.e. ceph should have been able to recover on its own without weird 
workarounds.)

I'll let you know if this works for me!

Thanks,
Chad.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



