Was the time really spent on the DB compact operation? You could also turn on debug_osd=20 to see what is happening. What does the disk utilization look like during startup?

Nikhil R <nikh.ravindra@xxxxxxxxx> wrote on Thu, Mar 28, 2019 at 4:36 PM:
>
> Ceph OSD restarts are taking too long. Below is my ceph.conf:
> [osd]
> osd_compact_leveldb_on_mount = false
> leveldb_compact_on_mount = false
> leveldb_cache_size = 1073741824
> leveldb_compression = false
> osd_mount_options_xfs = "rw,noatime,inode64,logbsize=256k"
> osd_max_backfills = 1
> osd_recovery_max_active = 1
> osd_recovery_op_priority = 1
> filestore_split_multiple = 72
> filestore_merge_threshold = 480
> osd_max_scrubs = 1
> osd_scrub_begin_hour = 22
> osd_scrub_end_hour = 3
> osd_deep_scrub_interval = 2419200
> osd_scrub_sleep = 0.1
>
> It looks like neither osd_compact_leveldb_on_mount = false nor leveldb_compact_on_mount = false is working as expected on Ceph v10.2.9.
>
> Any ideas on a fix would be appreciated ASAP.
> in.linkedin.com/in/nikhilravindra
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Thank you!
HuangJun
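
For reference, the debugging steps suggested above might look like the following sketch. The OSD id (osd.0) and block device (sdb) are placeholders; substitute the slow OSD's id and its backing device:

```shell
# Raise OSD debug logging to 20 on a running daemon (placeholder osd.0):
ceph tell osd.0 injectargs '--debug-osd 20'

# Or, since the problem occurs at startup, set it in ceph.conf under
# [osd] before restarting, so the mount/compact phase is logged:
#   debug osd = 20

# Watch the OSD log while it starts to see where the time goes
# (default log path; adjust for your cluster name/id):
tail -f /var/log/ceph/ceph-osd.0.log

# Check disk utilization during the restart (placeholder device sdb);
# a %util near 100 with high await points at the disk, not the compact:
iostat -x 1 /dev/sdb
```

These commands require a live Ceph cluster and root access on the OSD host; they only gather evidence, and do not change the compact-on-mount behaviour itself.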