ceph firefly PGs in active+clean+scrubbing state

I've upgraded to 0.80.1 on a testing instance: the cluster cycles
through active+clean+deep-scrubbing for a little while and then
returns to active+clean. I'm not worried about this, I think it's
normal, but I didn't see this behaviour on Emperor 0.72.2.
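
In case anyone wants to watch the same cycling, I've just been leaving
the status streaming while the scrubs run, with something like:

ceph -w                          # live cluster status and events
ceph pg dump | grep scrubbing    # list only the PGs currently scrubbing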

Cheers,
Fabrizio

On 13 May 2014 06:08, Alexandre DERUMIER <aderumier at odiso.com> wrote:
> The 0.80.1 update has fixed the problem.
>
> Thanks to the Ceph team!
>
> ----- Original Message -----
>
> From: "Simon Ironside" <sironside at caffetine.org>
> To: ceph-users at lists.ceph.com
> Sent: Monday, 12 May 2014 18:13:32
> Subject: Re: ceph firefly PGs in active+clean+scrubbing state
>
> Hi,
>
> I'm sure I saw on the IRC channel yesterday that this is a known
> problem with Firefly, which is due to be fixed with the release
> (possibly today?) of 0.80.1.
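>
> Once you're on it, you can double-check what each daemon is actually
> running with something like:
>
> ceph tell osd.* version   # ask every OSD for its running version
> ceph -v                   # version of the local ceph binaries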
>
> Simon
>
> On 12/05/14 14:53, Alexandre DERUMIER wrote:
>> Hi, I observe the same behaviour on a test Ceph cluster (upgraded from Emperor to Firefly):
>>
>>
>> cluster 819ea8af-c5e2-4e92-81f5-4348e23ae9e8
>>  health HEALTH_OK
>>  monmap e3: 3 mons at ..., election epoch 12, quorum 0,1,2 0,1,2
>>  osdmap e94: 12 osds: 12 up, 12 in
>>   pgmap v19001: 592 pgs, 4 pools, 30160 MB data, 7682 objects
>>         89912 MB used, 22191 GB / 22279 GB avail
>>              588 active+clean
>>                4 active+clean+scrubbing
>>
>> ----- Original Message -----
>>
>> From: "Fabrizio G. Ventola" <fabrizio.ventola at uniba.it>
>> To: ceph-users at lists.ceph.com
>> Sent: Monday, 12 May 2014 15:42:03
>> Subject: ceph firefly PGs in active+clean+scrubbing state
>>
>> Hello, last week I upgraded both of my clusters from 0.72.2 to the
>> latest stable Firefly, 0.80, following the suggested procedure
>> (upgrading monitors first, then OSDs, then MDSs, then clients).
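>>
>> In case it helps anyone, per node that boiled down to roughly this
>> (Debian-style sysvinit here, adjust for your distro/init system):
>>
>> apt-get update && apt-get install ceph   # pull the Firefly packages
>> /etc/init.d/ceph restart mon             # restart mons first...
>> /etc/init.d/ceph restart osd             # ...then OSDs
>> /etc/init.d/ceph restart mds             # ...then MDSs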
>>
>> Everything is OK and both clusters report HEALTH_OK; the only weird
>> thing is that a few PGs remain in active+clean+scrubbing. I've tried
>> querying the PGs and restarting the involved OSD daemons and hosts,
>> but the issue is still present, and the set of PGs in +scrubbing
>> state keeps changing.
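>>
>> For reference, this is roughly what I tried (the pg ID and OSD
>> number here are just placeholders):
>>
>> ceph pg <pgid> query             # inspect one of the affected PGs
>> /etc/init.d/ceph restart osd.N   # restart its OSD (adjust for your init system)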
>>
>> I've also tried setting noscrub on the OSDs with "ceph osd set
>> noscrub", but nothing changed.
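>>
>> For completeness, the flag does show up in the osdmap while it's set:
>>
>> ceph osd set noscrub
>> ceph osd set nodeep-scrub
>> ceph osd dump | grep flags    # should list noscrub,nodeep-scrub
>> ceph osd unset noscrub        # revert afterwards
>> ceph osd unset nodeep-scrub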
>>
>> What can I do? I attach the cluster statuses and their CRUSH trees:
>>
>> FIRST CLUSTER:
>>
>>  health HEALTH_OK
>>  mdsmap e510: 1/1/1 up {0=ceph-mds1=up:active}, 1 up:standby
>>  osdmap e4604: 5 osds: 5 up, 5 in
>>   pgmap v138288: 1332 pgs, 4 pools, 117 GB data, 30178 objects
>>         353 GB used, 371 GB / 724 GB avail
>>             1331 active+clean
>>                1 active+clean+scrubbing
>>
>> # id  weight  type name                 up/down  reweight
>> -1    0.84    root default
>> -7    0.28        rack rack1
>> -2    0.14            host cephosd1-dev
>> 0     0.14                osd.0         up       1
>> -3    0.14            host cephosd2-dev
>> 1     0.14                osd.1         up       1
>> -8    0.28        rack rack2
>> -4    0.14            host cephosd3-dev
>> 2     0.14                osd.2         up       1
>> -5    0.14            host cephosd4-dev
>> 3     0.14                osd.3         up       1
>> -9    0.28        rack rack3
>> -6    0.28            host cephosd5-dev
>> 4     0.28                osd.4         up       1
>>
>> SECOND CLUSTER:
>>
>>  health HEALTH_OK
>>  osdmap e158: 10 osds: 10 up, 10 in
>>   pgmap v9724: 2001 pgs, 6 pools, 395 MB data, 139 objects
>>         1192 MB used, 18569 GB / 18571 GB avail
>>             1998 active+clean
>>                3 active+clean+scrubbing
>>
>> # id  weight  type name                    up/down  reweight
>> -1    18.1    root default
>> -2    9.05        host wn-recas-uniba-30
>> 0     1.81            osd.0                up       1
>> 1     1.81            osd.1                up       1
>> 2     1.81            osd.2                up       1
>> 3     1.81            osd.3                up       1
>> 4     1.81            osd.4                up       1
>> -3    9.05        host wn-recas-uniba-32
>> 5     1.81            osd.5                up       1
>> 6     1.81            osd.6                up       1
>> 7     1.81            osd.7                up       1
>> 8     1.81            osd.8                up       1
>> 9     1.81            osd.9                up       1
>>

