Re: Remove - down_osds_we_would_probe

And finally it works.
Thanks.
Now I need to look at the other errors. My cluster is very problematic.

On Sat, 19 Nov 2016 at 19:12, Bruno Silva <bemanuel.pe@xxxxxxxxx> wrote:
I tried that and it didn't work; in the end I put an OSD with ID 5 into production.


On Sat, 19 Nov 2016 at 17:46, Paweł Sadowski <ceph@xxxxxxxxx> wrote:
Hi,

Make a temporary OSD with the same ID and weight 0 to avoid putting data
on it. The cluster should then contact this OSD and move forward. If not,
you can also use 'ceph osd lost ID', but an OSD with that ID must exist in
the crushmap (and that is probably not the case here).
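Something along these lines should do it (just a sketch; 'host=somehost' is a
placeholder bucket, and 'ceph osd create' reuses the lowest free ID, so verify
it actually hands back ID 5):

    # allocate an OSD id; the lowest free one is reused, so if 5 is free
    # this should give back osd.5 -- check the ID it prints
    ceph osd create

    # add the entry to the crushmap with weight 0 so no data is mapped to
    # it ('host=somehost' is just an example bucket from your map)
    ceph osd crush add osd.5 0 host=somehost

    # or, once osd.5 exists in the crushmap, declare it permanently lost
    ceph osd lost 5 --yes-i-really-mean-it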

On 19.11.2016 13:46, Bruno Silva wrote:
> Version: Hammer
> On my cluster a PG is reporting:
>      "down_osds_we_would_probe": [
>                 5
>             ],
>
> But this OSD was removed. How can I solve this?
> Reading the ceph-users mailing list, it seems this could be the
> reason my cluster is stopped.
> How can I solve this?
>

--
PS

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
