Re: degraded objects increasing

Hi Angelo,

From my experience, I believe objects written to a degraded PG are immediately counted as degraded themselves. Since the total number of objects in the cluster is still increasing, a corresponding rise in the degraded count is normal while those PGs recover.

Weiwen Hu
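For what it's worth, the percentage in those health warnings is just degraded objects divided by total object instances (replicas included), which you can verify against the numbers in the log below:

```python
# Figures taken from the first PG_DEGRADED warning in the quoted log.
degraded = 809_035
total = 219_783_315

pct = degraded / total * 100
print(f"{pct:.3f}%")  # prints "0.368%", matching the health check output
```

So as long as clients keep writing into the 6 undersized PGs, both numerator and total grow and the ratio stays roughly constant, which is exactly what the log shows.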

> On 15 Jun 2023, at 23:40, Angelo Höngens <angelo@xxxxxxxxxx> wrote:
> 
> Hey guys,
> 
> I'm trying to understand what is happening in my cluster: I see the
> number of degraded objects increasing, while all OSDs are still up
> and running.
> 
> Can someone explain what's happening? I would expect the number of
> misplaced objects to increase when Ceph's balancing algorithm decides
> blocks should be on different OSDs, but I would only expect the
> degraded objects to increase when OSDs die?
> 
> 6/15/23 11:31:25 AM[WRN]Health check update: Degraded data redundancy:
> 809035/219783315 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 6/15/23 11:31:20 AM[WRN]Health check update: Degraded data redundancy:
> 809044/219788544 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 6/15/23 11:31:15 AM[WRN]Health check update: Degraded data redundancy:
> 809044/219788616 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 6/15/23 11:31:10 AM[WRN]Health check update: Degraded data redundancy:
> 808944/219777540 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 6/15/23 11:31:05 AM[WRN]Health check update: Degraded data redundancy:
> 808944/219776271 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 6/15/23 11:31:00 AM[WRN]Health check update: Degraded data redundancy:
> 808821/219683475 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 6/15/23 11:30:55 AM[WRN]Health check update: Degraded data redundancy:
> 808740/219672240 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 6/15/23 11:30:50 AM[WRN]Health check update: Degraded data redundancy:
> 808667/219645417 objects degraded (0.368%), 6 pgs degraded, 6 pgs
> undersized (PG_DEGRADED)
> 
>  health: HEALTH_WARN
>            Degraded data redundancy: 810779/220602543 objects
> degraded (0.368%), 6 pgs degraded, 6 pgs undersized
> 
>  services:
>    mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03 (age 12h)
>    mgr: ceph-mon01.vzbglj(active, since 12h), standbys: ceph-mon02.qtuntk
>    mds: 1/1 daemons up, 2 standby
>    osd: 118 osds: 118 up (since 12h), 118 in (since 23h); 510 remapped pgs
> 
>  data:
>    volumes: 1/1 healthy
>    pools:   5 pools, 1249 pgs
>    objects: 44.55M objects, 60 TiB
>    usage:   118 TiB used, 2.0 PiB / 2.1 PiB avail
>    pgs:     810779/220602543 objects degraded (0.368%)
>             39586388/220602543 objects misplaced (17.945%)
>             739 active+clean
>             406 active+remapped+backfill_wait
>             98  active+remapped+backfilling
>             6   active+undersized+degraded+remapped+backfilling
> 
>  io:
>    client:   585 KiB/s rd, 22 MiB/s wr, 357 op/s rd, 1.41k op/s wr
>    recovery: 150 MiB/s, 82 objects/s
> 
> Kind regards,
> 
> Angelo Hongens
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx