Re: Remapped PGs

Still haven't figured this out. We went ahead and upgraded the entire
cluster to Podman 2.0.4, and in the process did OS/kernel upgrades and
rebooted every node, one at a time. We still have 5 PGs stuck in the
'remapped' state according to 'ceph -s', but zero show up as remapped in
the 'ceph pg dump' output. Does anybody have any suggestions on what to
do about this?
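In case it's useful context, here's what we've been checking. This is a
sketch, not a known fix: the commands are standard, but the idea that a
stale mgr could explain the 'ceph -s' vs. 'pg dump' mismatch is only a
guess on our part, since the counts in 'ceph -s' are reported by the mgr.

###################
# List PGs the cluster reports as remapped (should agree with 'ceph -s')
ceph pg ls remapped

# Look for leftover temporary or upmap entries in the osdmap
ceph osd dump | egrep 'pg_temp|pg_upmap'

# Guess: fail over the active mgr (ceph03.ytkuyr per the status below)
# and see whether the stray 'remapped' count clears
ceph mgr fail ceph03.ytkuyr
ceph -s
###################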

On Wed, Aug 5, 2020 at 10:54 AM David Orman <ormandj@xxxxxxxxxxxx> wrote:

> Hi,
>
> We see that we have 5 'remapped' PGs, but are unclear why, or what to do
> about it. We shifted some target ratios for the autoscaler, and that
> resulted in this state. While adjusting the ratios, we noticed two OSDs go
> down, but we just restarted the containers for those OSDs with podman, and
> they came back up. Here's the status output:
>
> ###################
> root@ceph01:~# ceph status
> INFO:cephadm:Inferring fsid x
> INFO:cephadm:Inferring config x
> INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
>   cluster:
>     id:     41bb9256-c3bf-11ea-85b9-9e07b0435492
>     health: HEALTH_OK
>
>   services:
>     mon: 5 daemons, quorum ceph01,ceph04,ceph02,ceph03,ceph05 (age 2w)
>     mgr: ceph03.ytkuyr(active, since 2w), standbys: ceph01.aqkgbl,
> ceph02.gcglcg, ceph04.smbdew, ceph05.yropto
>     osd: 168 osds: 168 up (since 2d), 168 in (since 2d); 5 remapped pgs
>
>   data:
>     pools:   3 pools, 1057 pgs
>     objects: 18.00M objects, 69 TiB
>     usage:   119 TiB used, 2.0 PiB / 2.1 PiB avail
>     pgs:     1056 active+clean
>              1    active+clean+scrubbing+deep
>
>   io:
>     client:   859 KiB/s rd, 212 MiB/s wr, 644 op/s rd, 391 op/s wr
>
> root@ceph01:~#
>
> ###################
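>
> For context, the ratio change was along these lines. This is a sketch:
> the pool name and value below are placeholders, and we're assuming the
> standard autoscaler knob ('target_size_ratio') is what's meant by "target
> ratios" above; the podman container name is likewise illustrative.
>
> ###################
> # Point the autoscaler at a target fraction of total cluster capacity
> ceph osd pool set <pool> target_size_ratio 0.2
>
> # Review what the autoscaler now thinks each pool should get
> ceph osd pool autoscale-status
>
> # The downed OSD containers were bounced roughly like this
> # (cephadm names containers ceph-<fsid>-osd.<id>)
> podman restart ceph-<fsid>-osd.<id>
> ###################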
>
> When I look at 'ceph pg dump', I don't see any PGs marked as remapped:
>
> ###################
> root@ceph01:~# ceph pg dump |grep remapped
> INFO:cephadm:Inferring fsid x
> INFO:cephadm:Inferring config x
> INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15
> dumped all
> root@ceph01:~#
> ###################
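>
> A possibly cleaner cross-check than grepping the full dump is to list
> PGs by state directly:
>
> ###################
> ceph pg ls remapped
> ###################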
>
> Any idea what might be going on, or how to recover? All OSDs are up.
> Health is 'OK'. This is Ceph 15.2.4, deployed using cephadm in containers
> on Podman 2.0.3.
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


