Re: when does recovery start


Oh, you also need to turn off "mon_osd_adjust_down_out_interval"
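For example (a rough sketch; "mon.a" below is just a placeholder for one of
your own monitors), you can set it in ceph.conf under [mon] and restart the
monitors, or inject it at runtime:

  # ceph.conf, [mon] section (restart the monitors afterwards)
  mon osd adjust down out interval = false

  # or change it on a running monitor (reverts on the next restart)
  ceph tell mon.a injectargs '--mon-osd-adjust-down-out-interval=false'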

On Tue, Apr 7, 2015 at 8:57 PM, lijian <blacker1981@xxxxxxx> wrote:
>
> Haomai Wang,
>
> the mon_osd_down_out_interval is 300, please refer to my settings. I use
> the cli 'service ceph stop osd.X' to stop an OSD, and the PG status changes
> to remapped/backfilling/recovering ... immediately.
> So is there something wrong with my settings or operation?
>
> Thanks,
>
> Jian Ji
>
>
>
>
>
> At 2015-04-07 20:38:29, "Haomai Wang" <haomaiwang@xxxxxxxxx> wrote:
>>Whatever version you tested, Ceph won't start recovering data the moment
>>you manually stop an OSD. It only marks the down OSD out, and triggers
>>recovery, once "mon_osd_down_out_interval" seconds have passed.
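
To double-check what a given cluster is actually using, and to see the moment
the stopped OSD goes from down/in to down/out (which is when recovery begins),
something like this works, assuming the default admin-socket path and a
monitor named "a":

  # effective value on a running monitor
  ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok config show | grep mon_osd_down_out_interval

  # watch the cluster; the stopped OSD flips from "in" to "out" after the interval
  ceph osd dump | grep '^osd\.'
  ceph -w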
>>
>>On Tue, Apr 7, 2015 at 8:33 PM, lijian <blacker1981@xxxxxxx> wrote:
>>> Hi,
>>> Recovery starts with a 300 s delay after I stop an OSD, when the OSD status
>>> changes from in to out; the test ENV is Ceph 0.80.7.
>>>
>>> But when I test on Ceph 0.87.1, recovery starts immediately after I stop an
>>> OSD, with all settings at their default values. The following are the
>>> mon_osd* settings in my test ENV:
>>>   "mon_osd_laggy_halflife": "3600",
>>>   "mon_osd_laggy_weight": "0.3",
>>>   "mon_osd_adjust_heartbeat_grace": "true",
>>>   "mon_osd_adjust_down_out_interval": "true",
>>>   "mon_osd_auto_mark_in": "false",
>>>   "mon_osd_auto_mark_auto_out_in": "true",
>>>   "mon_osd_auto_mark_new_in": "true",
>>>   "mon_osd_down_out_interval": "300",
>>>   "mon_osd_down_out_subtree_limit": "rack",
>>>   "mon_osd_min_up_ratio": "0.3",
>>>   "mon_osd_min_in_ratio": "0.3",
>>>   "mon_osd_max_op_age": "32",
>>>   "mon_osd_max_split_count": "32",
>>>   "mon_osd_allow_primary_temp": "false",
>>>   "mon_osd_allow_primary_affinity": "false",
>>>   "mon_osd_full_ratio": "0.95",
>>>   "mon_osd_nearfull_ratio": "0.85",
>>>   "mon_osd_report_timeout": "45000",
>>>   "mon_osd_min_down_reporters": "50",
>>>   "mon_osd_min_down_reports": "150",
>>>   "mon_osd_force_trim_to": "0",
>>>
>>> So when does recovery start? Why do the two Ceph versions behave
>>> differently, or is something wrong with my settings?
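
As an aside: if the goal is to stop an OSD for maintenance without any
recovery starting at all, the usual approach is to set the noout flag first,
so the monitors never mark the stopped OSD out (osd.X below is the same
placeholder used above):

  ceph osd set noout
  service ceph stop osd.X    # do the maintenance, then start it again
  service ceph start osd.X
  ceph osd unset noout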
>>>
>>> Thanks!
>>> Jian Li
>>>
>>>
>>>
>>>
>>
>>
>>
>>--
>>Best Regards,
>>
>>Wheat
>
>
>



-- 
Best Regards,

Wheat
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



