Hello everybody,
I had a hardware failure and had to take an OSD out; however, I am now
left with PGs stuck in stale+active+clean.
I am okay with getting zeros back in place of the lost blocks; I just
want the filesystem of the virtual machine that uses the pool to
recover, if possible.
I cannot find what to do in the documentation. Can someone help me out?
https://docs.ceph.com/en/quincy/rados/troubleshooting/troubleshooting-pg/
These are the steps I ran to remove the failed OSD:

systemctl stop ceph-osd@11
ceph osd out 11
ceph osd lost 11 --yes-i-really-mean-it
ceph osd crush remove osd.11
ceph auth del osd.11
ceph osd rm 11
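
For reference, after those steps osd.11 no longer appears in the map;
the usual status commands are enough to confirm the removal (nothing
non-standard assumed here):

ceph osd tree
ceph -s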
root@ceph03:~# ceph pg dump_stuck stale
ok
PG_STAT  STATE               UP    UP_PRIMARY  ACTING  ACTING_PRIMARY
2.91     stale+active+clean  [11]  11          [11]    11
2.ca     stale+active+clean  [11]  11          [11]    11
2.3e     stale+active+clean  [11]  11          [11]    11
2.e0     stale+active+clean  [11]  11          [11]    11
2.57     stale+active+clean  [11]  11          [11]    11
2.59     stale+active+clean  [11]  11          [11]    11
2.89     stale+active+clean  [11]  11          [11]    11
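
If I understand the states right, stale means that no OSD holding a
current copy of these PGs has reported in to the monitors, which fits.
The stock health output should show the same thing:

ceph health detail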
# ceph osd pool get libvirt-pool-backup size
size: 1
(With size 1 there were no replicas, so the only copy of those PGs
lived on the failed osd.11.)
root@ceph03:~# ceph pg map 2.91
osdmap e105091 pg 2.91 (2.91) -> up [17] acting [17]
root@ceph03:~# ceph pg map 2.ca
osdmap e105091 pg 2.ca (2.ca) -> up [8] acting [8]
root@ceph03:~# ceph pg map 2.3e
osdmap e105091 pg 2.3e (2.3e) -> up [14] acting [14]
root@ceph03:~# ceph pg map 2.e0
osdmap e105091 pg 2.e0 (2.e0) -> up [14] acting [14]
root@ceph03:~# ceph pg map 2.57
osdmap e105091 pg 2.57 (2.57) -> up [17] acting [17]
root@ceph03:~# ceph pg map 2.59
osdmap e105091 pg 2.59 (2.59) -> up [8] acting [8]
root@ceph03:~# ceph pg map 2.89
osdmap e105091 pg 2.89 (2.89) -> up [2] acting [2]
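
(The same check in a loop, for anyone repeating this; the PG list is
just the seven stale ones from the dump above:)

for pg in 2.91 2.ca 2.3e 2.e0 2.57 2.59 2.89; do
    ceph pg map $pg
done

So the maps now point at live OSDs, but presumably because none of
them ever held these PGs' data, the PGs stay stale.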
root@ceph03:~# ceph pg 2.91 query
Error ENOENT: i don't have pgid 2.91
root@ceph03:~# ceph pg force_create_pg 2.91
Error ENOTSUP: this command is obsolete
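
From the troubleshooting page linked above, it looks like the
replacement for the obsolete command is ceph osd force-create-pg,
which recreates a PG as empty. Since I am fine with zeros for the lost
blocks, I am guessing something like this is the intended path
(untested on my side; the PG list is the seven stale ones from above):

for pg in 2.91 2.ca 2.3e 2.e0 2.57 2.59 2.89; do
    # recreate each lost PG as an empty PG on its new acting set
    ceph osd force-create-pg $pg --yes-i-really-mean-it
done

If I understand correctly, reads of the RBD objects that lived in
those PGs would then return zeros (missing objects read back as
holes), and I would still need to fsck the guest filesystem
afterwards. Is that the right approach, or is there something safer?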
Kind regards,
Jelle