Re: osd out, can't bring it back online

On 2020-12-01 13:19, Oliver Weinmann wrote:

> 
> podman ps -a didn't show that container. So I googled and stumbled over
> this post:
> 
> https://github.com/containers/podman/issues/2553
> 
> I was able to fix it by running:
> 
> podman rm --storage
> e43f8533d6418267d7e6f3a408a566b4221df4fb51b13d71c634ee697914bad6
> 
> After that I reset the service's failed state and started it again.
> 
> systemctl reset-failed
> ceph-d0920c36-2368-11eb-a5de-005056b703af@osd.0.service
> systemctl start ceph-d0920c36-2368-11eb-a5de-005056b703af@osd.0.service
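
(Side note: those leftover storage-only containers don't show up in a
plain "podman ps -a". Depending on the Podman version, something like

podman ps --all --external

(or "podman ps --all --storage" on older releases) should list them, so
you can grab the ID to feed to "podman rm --storage". That's from
memory and untested here, so check "podman ps --help" on your version.)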

Ah, I had this issue once with my first venture into Ceph and Docker.
There might be a "kill container" option in cephadm to make sure it
cleans up the container side of things as well. I haven't touched
cephadm myself yet, but IIRC there is such an option.
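If memory serves, something along these lines should do the trick
(untested on my end, so double-check the flags against your cephadm
version):

cephadm ls    # lists the daemons on this host and their container state
cephadm unit --fsid d0920c36-2368-11eb-a5de-005056b703af --name osd.0 stop
cephadm unit --fsid d0920c36-2368-11eb-a5de-005056b703af --name osd.0 start

or, from a node with the admin keyring, simply:

ceph orch daemon restart osd.0

which should restart the systemd unit and recreate the daemon's
container for you.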
> 
> Now ceph is doing its magic :)

Great!
> 
> [root@gedasvl02 ~]# ceph -s
> INFO:cephadm:Inferring fsid d0920c36-2368-11eb-a5de-005056b703af
> INFO:cephadm:Inferring config
> /var/lib/ceph/d0920c36-2368-11eb-a5de-005056b703af/mon.gedasvl02/config
> INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
>   cluster:
>     id:     d0920c36-2368-11eb-a5de-005056b703af
>     health: HEALTH_WARN
>             Degraded data redundancy: 1941/39432 objects degraded
> (4.922%), 19 pgs degraded, 19 pgs undersized
>             8 pgs not deep-scrubbed in time
> 
>   services:
>     mon: 1 daemons, quorum gedasvl02 (age 2w)
>     mgr: gedasvl02.vqswxg(active, since 2w), standbys: gedaopl02.yrwzqh
>     mds: cephfs:1 {0=cephfs.gedaopl01.zjuhem=up:active} 1 up:standby
>     osd: 3 osds: 3 up (since 9m), 3 in (since 9m); 18 remapped pgs
> 
>   task status:
>     scrub status:
>         mds.cephfs.gedaopl01.zjuhem: idle
> 
>   data:
>     pools:   7 pools, 225 pgs
>     objects: 13.14k objects, 77 GiB
>     usage:   214 GiB used, 457 GiB / 671 GiB avail
>     pgs:     1941/39432 objects degraded (4.922%)
>              206 active+clean
>              18  active+undersized+degraded+remapped+backfill_wait
>              1   active+undersized+degraded+remapped+backfilling
> 
>   io:
>     recovery: 105 MiB/s, 25 objects/s
> 
> Many thanks for your help. This was an excellent "Recovery training" :)

Yes, certainly the best way (and moment) to break Ceph and gain
experience ;-).
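
By the way, the "8 pgs not deep-scrubbed in time" warning should clear
on its own once the backfill finishes and the regular scrub schedule
catches up again. If it lingers, you can nudge individual PGs with
something like

ceph pg deep-scrub <pgid>

using the pgids listed by "ceph health detail".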

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



