Re: 6 pgs not deep-scrubbed in time

We had the same problem. It turned out that one disk was slowly dying. It
was easy to identify with the following commands (in your case):

ceph pg dump | grep -F 6.78
ceph pg dump | grep -F 6.60
…

These commands show the OSDs of each PG in square brackets. If the same OSD
number always appears, then you've found the OSD that is causing the slow scrubs.
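For example, here is a minimal sketch of how you could tally the acting OSDs of all
flagged PGs in one go (assuming a bash shell on a node with the ceph CLI and an admin
keyring; the PG IDs are taken from your health detail output):

for pg in 6.78 6.60 6.5c 4.12 10.d 5.f; do
    # "ceph pg map" prints the up and acting OSD sets of a PG in square brackets
    echo -n "$pg -> "
    ceph pg map $pg | grep -oE 'acting \[[0-9,]+\]'
done

If one OSD ID shows up in every acting set, that OSD (and its disk) would be the
first thing to check.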

On Fri, 26 Jan 2024 at 07:45, Michel Niyoyita <
micou12@xxxxxxxxx> wrote:

> Hello team,
>
> I have a production cluster composed of 3 OSD servers with 20 disks each,
> deployed with ceph-ansible on Ubuntu, and the version is Pacific. These days
> it is in WARN state, caused by pgs which are not deep-scrubbed in time. I
> tried to deep-scrub some pgs manually, but it seems the cluster can be slow.
> I would like your assistance so that my cluster can return to HEALTH_OK state
> as before, without any interruption of service. The cluster is used as
> OpenStack backend storage.
>
> Best Regards
>
> Michel
>
>
>  ceph -s
>   cluster:
>     id:     cb0caedc-eb5b-42d1-a34f-96facfda8c27
>     health: HEALTH_WARN
>             6 pgs not deep-scrubbed in time
>
>   services:
>     mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 11M)
>     mgr: ceph-mon2(active, since 11M), standbys: ceph-mon3, ceph-mon1
>     osd: 48 osds: 48 up (since 11M), 48 in (since 11M)
>     rgw: 6 daemons active (6 hosts, 1 zones)
>
>   data:
>     pools:   10 pools, 385 pgs
>     objects: 5.97M objects, 23 TiB
>     usage:   151 TiB used, 282 TiB / 433 TiB avail
>     pgs:     381 active+clean
>              4   active+clean+scrubbing+deep
>
>   io:
>     client:   59 MiB/s rd, 860 MiB/s wr, 155 op/s rd, 665 op/s wr
>
> root@ceph-osd3:~# ceph health detail
> HEALTH_WARN 6 pgs not deep-scrubbed in time
> [WRN] PG_NOT_DEEP_SCRUBBED: 6 pgs not deep-scrubbed in time
>     pg 6.78 not deep-scrubbed since 2024-01-11T16:07:54.875746+0200
>     pg 6.60 not deep-scrubbed since 2024-01-13T19:44:26.922000+0200
>     pg 6.5c not deep-scrubbed since 2024-01-13T09:07:24.780936+0200
>     pg 4.12 not deep-scrubbed since 2024-01-13T09:09:22.176240+0200
>     pg 10.d not deep-scrubbed since 2024-01-12T08:04:02.078062+0200
>     pg 5.f not deep-scrubbed since 2024-01-12T06:06:00.970665+0200
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



