Yes, as I said, that bug is marked resolved. It's also marked as only
affecting jewel and luminous.
I'm pointing out that it's still an issue today in Mimic 13.2.4.
Simon
On 06/03/2019 16:04, Darius Kasparavičius wrote:
For some reason I didn't notice that number. That's the misplaced
objects, no problem there. Degraded objects are at 153.818%.
Simon
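As a quick sanity check on the figure quoted below, the degraded percentage is simply degraded object instances divided by total objects, which is how it can exceed 100% when multiple copies of the same object are degraded (a sketch of the arithmetic only; the exact accounting is internal to Ceph):

```shell
# 90094 degraded object instances out of 58572 total objects
awk 'BEGIN { printf "%.3f%%\n", 90094 * 100 / 58572 }'   # → 153.818%
```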
On 06/03/2019 15:26, Darius Kasparavičius wrote:
> Hi,
>
> There it's 1.2%, not 1200%.
>
> On Wed, Mar 6, 2019 at 4:36 PM Simon Ironside <sironside@xxxxxxxxxxxxx> wrote:
>> Hi,
>>
>> I'm still seeing this issue during failure testing of a new Mimic
>> 13.2.4 cluster. To reproduce:
>>
>> - Working Mimic 13.2.4 cluster
>> - Pull a disk
>> - Wait for recovery to complete (i.e. back to HEALTH_OK)
>> - Remove the OSD with `ceph osd crush remove`
>> - See greater than 100% degraded objects while it recovers as below
>>
>> It doesn't seem to do any harm; once recovery completes, the cluster
>> returns to HEALTH_OK.
>> Bug 21803 is the only issue I can find on the tracker that seems to
>> cover this behaviour, and it is marked as resolved.
>>
>> Simon
>>
>>    cluster:
>>      id:     MY ID
>>      health: HEALTH_WARN
>>              709/58572 objects misplaced (1.210%)
>>              Degraded data redundancy: 90094/58572 objects degraded (153.818%), 49 pgs degraded, 51 pgs undersized
>>
>>    services:
>>      mon: 3 daemons, quorum san2-mon1,san2-mon2,san2-mon3
>>      mgr: san2-mon1(active), standbys: san2-mon2, san2-mon3
>>      osd: 52 osds: 52 up, 52 in; 84 remapped pgs
>>
>>    data:
>>      pools:   16 pools, 2016 pgs
>>      objects: 19.52 k objects, 72 GiB
>>      usage:   7.8 TiB used, 473 TiB / 481 TiB avail
>>      pgs:     90094/58572 objects degraded (153.818%)
>>               709/58572 objects misplaced (1.210%)
>>               1932 active+clean
>>               47   active+recovery_wait+undersized+degraded+remapped
>>               33   active+remapped+backfill_wait
>>               2    active+recovering+undersized+remapped
>>               1    active+recovery_wait+undersized+degraded
>>               1    active+recovering+undersized+degraded+remapped
>>
>>    io:
>>      client:   24 KiB/s wr, 0 op/s rd, 3 op/s wr
>>      recovery: 0 B/s, 126 objects/s
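The reproduction steps quoted above boil down to the following command sequence (a hedged sketch: `osd.12` is a placeholder OSD id, and the disk pull itself is a physical step, not a command):

```shell
# 1. Start from a healthy cluster
ceph status                    # expect HEALTH_OK
# 2. Physically pull the disk backing one OSD (assumed here to be osd.12)
# 3. Watch until recovery completes and the cluster is back to HEALTH_OK
ceph -w
# 4. Remove the now-dead OSD from the CRUSH map
ceph osd crush remove osd.12
# 5. Observe >100% degraded objects in the status while recovery runs
ceph status
```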
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com