I am looking at one problematic PG in my disaster scenario; please see below:
root@monitor~# ceph pg ls-by-pool cinder_sata | grep 5.5b7
5.5b7 26911 29 53851 107644 29 112248188928 53258 53258 active+recovering+undersized+degraded+remapped 2019-03-11 14:05:29.857657 95096'33589806 95169:37258027 [96,47,38] 96 [154] 154 65986'27640790 2019-01-21 19:36:06.645070 65986'27640790 2019-01-21 19:36:06.645070
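To double-check the mapping I also ran ceph pg map (output paraphrased from the pg ls line above, so the osdmap epoch on your side may differ):

root@monitor~# ceph pg map 5.5b7
osdmap e95169 pg 5.5b7 (5.5b7) -> up [96,47,38] acting [154]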
My problematic PG has 3 OSDs in its up set, but a single, different OSD as acting and acting primary:
up          up_primary  acting  acting_primary
[96,47,38]  96          [154]   154
If I compare with a healthy PG, it looks like this:
up_primary  acting        acting_primary
85          [85,102,143]  85
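To dig a bit deeper into what the problematic PG is doing, a pg query shows the same up/acting sets plus the recovery state (sketch only; the real JSON output is much longer than what I show here):

root@monitor~# ceph pg 5.5b7 query
{
    "state": "active+recovering+undersized+degraded+remapped",
    "up": [96, 47, 38],
    "acting": [154],
    ...
}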
Is this problematic PG scenario a normal thing?
Regards,
Fabio Abreu
On Mon, Mar 11, 2019 at 9:01 AM David Turner <drakonstein@xxxxxxxxx> wrote:
Ceph has been getting better and better about prioritizing this sort of recovery, but few of those optimizations are in Jewel, which has been out of the support cycle for about a year. You should look into upgrading to Mimic, where you should see a pretty good improvement in this sort of prioritization.

On Sat, Mar 9, 2019, 3:10 PM Fabio Abreu <fabioabreureis@xxxxxxxxx> wrote:

Hi everybody,

I have a question about degraded objects in Jewel 10.2.7: can I prioritize degraded objects over misplaced ones?

I am asking because I am trying to simulate a disaster recovery scenario.

Thanks and best regards,
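In the meantime, on Jewel you can at least throttle backfill of misplaced objects so recovery of degraded objects gets relatively more room (a rough sketch, not tuned for your cluster; adjust the value to your hardware):

ceph tell osd.* injectargs '--osd-max-backfills 1'

From Luminous onward there is also ceph pg force-recovery <pgid> to push individual PGs to the front of the recovery queue, but that command does not exist in Jewel.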
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com