Re: CEPH OSD Restarts taking too long v10.2.9

On Fri, Mar 29, 2019 at 1:44 PM, Nikhil R <nikh.ravindra@xxxxxxxxx> wrote:
>
> If I comment out filestore_split_multiple = 72 and filestore_merge_threshold = 480 in ceph.conf, won't Ceph fall back to the default values of 2 and 10, leaving us with even more splits and crashes?
>
Yes, the point of that is to make clear what actually causes the long
start time: the leveldb compaction or the filestore split?
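For example (a rough sketch, with osd.0 as a placeholder id), you can ask a
running OSD which values it actually picked up after editing ceph.conf; the
compiled-in defaults are 2 and 10:

    ceph daemon osd.0 config show | grep filestore_split_multiple
    ceph daemon osd.0 config show | grep filestore_merge_threshold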
> in.linkedin.com/in/nikhilravindra
>
>
>
> On Fri, Mar 29, 2019 at 6:55 AM huang jun <hjwsm1989@xxxxxxxxx> wrote:
>>
>> It seems the split settings are causing the problem.
>> What about commenting those settings out and then checking whether the
>> restart still takes that long?
>> From a quick search of the code, these two options
>> filestore_split_multiple = 72
>> filestore_merge_threshold = 480
>> do not support online change.
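>> For reference (just a sketch), a runtime change would normally be
>> attempted with something like
>>     ceph tell osd.* injectargs '--filestore_split_multiple 72 --filestore_merge_threshold 480'
>> but because these two options are not observed at runtime, an OSD
>> restart is still needed for new values to take effect.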
>>
>> On Thu, Mar 28, 2019 at 6:33 PM, Nikhil R <nikh.ravindra@xxxxxxxxx> wrote:
>> >
>> > Thanks, Huang, for the reply.
>> > It is the disk compaction that is taking most of the time;
>> > the disk I/O utilization hits 100%.
>> > It looks like neither osd_compact_leveldb_on_mount = false nor leveldb_compact_on_mount = false is working as expected on Ceph v10.2.9.
>> > Is there a way to turn off compaction?
>> >
>> > Also, we are restarting the OSDs because of the splitting: we increased the split multiple and merge threshold.
>> > Is there a way to inject these values instead? Or are OSD restarts the only solution?
>> >
>> > Thanks In Advance
>> >
>> > in.linkedin.com/in/nikhilravindra
>> >
>> >
>> >
>> > On Thu, Mar 28, 2019 at 3:58 PM huang jun <hjwsm1989@xxxxxxxxx> wrote:
>> >>
>> >> Is the time really spent on the db compact operation?
>> >> You can turn on debug_osd=20 to see what happens.
>> >> What is the disk utilization during the start?
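>> >> For example (a sketch only; osd.0 and the log path assume the default
>> >> layout), you could add the following to ceph.conf before the restart
>> >>     [osd]
>> >>     debug_osd = 20
>> >>     debug_filestore = 20
>> >>     debug_leveldb = 20
>> >> watch the disk with
>> >>     iostat -x 1
>> >> while the OSD comes up, and then look through
>> >> /var/log/ceph/ceph-osd.0.log to see which phase the time is spent in.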
>> >>
>> >> On Thu, Mar 28, 2019 at 4:36 PM, Nikhil R <nikh.ravindra@xxxxxxxxx> wrote:
>> >> >
>> >> > Ceph OSD restarts are taking too long.
>> >> > Below is my ceph.conf:
>> >> > [osd]
>> >> > osd_compact_leveldb_on_mount = false
>> >> > leveldb_compact_on_mount = false
>> >> > leveldb_cache_size=1073741824
>> >> > leveldb_compression = false
>> >> > osd_mount_options_xfs = "rw,noatime,inode64,logbsize=256k"
>> >> > osd_max_backfills = 1
>> >> > osd_recovery_max_active = 1
>> >> > osd_recovery_op_priority = 1
>> >> > filestore_split_multiple = 72
>> >> > filestore_merge_threshold = 480
>> >> > osd_max_scrubs = 1
>> >> > osd_scrub_begin_hour = 22
>> >> > osd_scrub_end_hour = 3
>> >> > osd_deep_scrub_interval = 2419200
>> >> > osd_scrub_sleep = 0.1
>> >> >
>> >> > It looks like neither osd_compact_leveldb_on_mount = false nor leveldb_compact_on_mount = false is working as expected on Ceph v10.2.9.
>> >> >
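>> >> > (A quick way to check what a running OSD is actually holding for these
>> >> > options, sketched here with osd.0 as a placeholder:
>> >> >     ceph daemon osd.0 config show | grep compact_on_mount
>> >> > an option the daemon does not recognize simply won't appear in the output.)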
>> >> > Any ideas on a fix would be appreciated ASAP.
>> >> > in.linkedin.com/in/nikhilravindra
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >> Thank you!
>> >> HuangJun
>>
>>
>>
>> --
>> Thank you!
>> HuangJun



-- 
Thank you!
HuangJun
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



