Re: osd needs more than one hour to start with heavy reads

After activating the full debug level:

From the moment it starts until it is up, the OSD reads at about 200 MB/s the whole time.
A restart directly after this one only needed a few seconds.
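For reference, "full debug level" above was raised via the usual debug_* options; a rough sketch of the commands (assuming osd.13 and the standard subsystem names, the exact values used may have differed):

ceph config set osd.13 debug_osd 20
ceph config set osd.13 debug_bluestore 20
ceph config set osd.13 debug_bluefs 20
ceph config set osd.13 debug_rocksdb 5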

An excerpt from the logs:

...........
2022-04-19T12:37:23.510+0200 7f1b0bbd9f00  2 osd.13 0 init /var/lib/ceph/osd/ceph-13 (looks like ssd)
2022-04-19T12:37:23.510+0200 7f1b0bbd9f00  2 osd.13 0 journal /var/lib/ceph/osd/ceph-13/journal
2022-04-19T12:37:23.510+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _mount path /var/lib/ceph/osd/ceph-13
2022-04-19T12:37:23.510+0200 7f1b0bbd9f00  0 bluestore(/var/lib/ceph/osd/ceph-13) _open_db_and_around read-only:0 repair:0
2022-04-19T12:37:23.510+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _set_cache_sizes cache_size 3221225472 meta 0.45 kv 0.45 data 0.06
2022-04-19T12:37:24.090+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _prepare_db_environment set db_paths to db,1824359920435 db.slow,1824359920435
2022-04-19T12:49:16.708+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _open_db opened rocksdb path db options compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB
2022-04-19T12:49:16.708+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _open_super_meta old nid_max 547486
2022-04-19T12:49:16.724+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _open_super_meta old blobid_max 30720
2022-04-19T12:49:16.724+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _open_super_meta freelist_type bitmap
2022-04-19T12:49:16.724+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _open_super_meta ondisk_format 4 compat_ondisk_format 3
2022-04-19T12:49:16.724+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _open_super_meta min_alloc_size 0x1000
2022-04-19T12:49:16.744+0200 7f1b0bbd9f00  1 freelist init
2022-04-19T12:49:16.744+0200 7f1b0bbd9f00  1 freelist _read_cfg
2022-04-19T12:49:16.744+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _init_alloc opening allocation metadata
2022-04-19T12:49:19.464+0200 7f1b0bbd9f00  1 HybridAllocator _spillover_range constructing fallback allocator
2022-04-19T12:49:26.788+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _init_alloc loaded 967 GiB in 2226205 extents, allocator type hybrid, capacity 0x1bf1f800000, block size 0x1000, free 0xf1c9a0e000, fragmentation 0.0202329
2022-04-19T12:49:29.120+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _prepare_db_environment set db_paths to db,1824359920435 db.slow,1824359920435
2022-04-19T13:01:29.347+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _open_db opened rocksdb path db options compression=kNoCompression,max_write_buffer_number=32,min_write_buffer_number_to_merge=2,recycle_log_file_num=32,compaction_style=kCompactionStyleLevel,write_buffer_size=67108864,target_file_size_base=67108864,max_background_compactions=31,level0_file_num_compaction_trigger=8,level0_slowdown_writes_trigger=32,level0_stop_writes_trigger=64,max_bytes_for_level_base=536870912,compaction_threads=32,max_bytes_for_level_multiplier=8,flusher_threads=8,compaction_readahead_size=2MB
2022-04-19T13:01:29.347+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _upgrade_super from 4, latest 4
2022-04-19T13:01:29.347+0200 7f1b0bbd9f00  1 bluestore(/var/lib/ceph/osd/ceph-13) _upgrade_super done
2022-04-19T13:01:29.379+0200 7f1b0bbd9f00  2 osd.13 0 journal looks like ssd
2022-04-19T13:01:29.379+0200 7f1b0bbd9f00  2 osd.13 0 boot
2022-04-19T13:01:29.423+0200 7f1aec8b8700  5 bluestore.MempoolThread(0x562144886b90) _resize_shards cache_size: 2838365951 kv_alloc: 1140850688 kv_used: 10753600 kv_onode_alloc: 167772160 kv_onode_used: 6257024 meta_alloc: 1140850688 meta_used: 13030 data_alloc: 218103808 data_used: 0
2022-04-19T13:01:29.431+0200 7f1b0bbd9f00  0 _get_class not permitted to load kvs
2022-04-19T13:01:29.447+0200 7f1b0bbd9f00  0 <cls> ./src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
2022-04-19T13:01:29.451+0200 7f1b0bbd9f00  0 _get_class not permitted to load sdk
2022-04-19T13:01:29.451+0200 7f1b0bbd9f00  0 _get_class not permitted to load lua
2022-04-19T13:01:29.463+0200 7f1b0bbd9f00  0 <cls> ./src/cls/hello/cls_hello.cc:316: loading cls_hello
2022-04-19T13:01:29.463+0200 7f1b0bbd9f00  0 osd.13 16079 crush map has features 432629239337189376, adjusting msgr requires for clients
2022-04-19T13:01:29.463+0200 7f1b0bbd9f00  0 osd.13 16079 crush map has features 432629239337189376 was 8705, adjusting msgr requires for mons
2022-04-19T13:01:29.463+0200 7f1b0bbd9f00  0 osd.13 16079 crush map has features 3314933000854323200, adjusting msgr requires for osds
2022-04-19T13:01:29.463+0200 7f1b0bbd9f00  1 osd.13 16079 check_osdmap_features require_osd_release unknown -> pacific
2022-04-19T13:01:31.091+0200 7f1b0bbd9f00  0 osd.13 16079 load_pgs ...........
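Nearly all of the startup time above sits in the two _open_db phases (12:37 -> 12:49 and 12:49 -> 13:01), i.e. opening RocksDB. To spot such gaps quickly one can diff consecutive log timestamps; a rough awk sketch (assuming the default log path and the standard ISO timestamp at the start of each line, reporting gaps over 60 s):

awk '{ split($1, t, /[T:+]/); s = t[2]*3600 + t[3]*60 + t[4];
       if (prev != "" && s - prev > 60) printf "%.0f s gap before: %s\n", s - prev, $0;
       prev = s }' /var/log/ceph/ceph-osd.13.log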




-----Original Message-----
From: VELARTIS GmbH | Philipp Dürhammer <p.duerhammer@xxxxxxxxxxx>
Sent: Monday, April 11, 2022 13:52
To: Igor Fedotov <igor.fedotov@xxxxxxxx>; ceph-users@xxxxxxx
Subject: Re: osd needs more than one hour to start with heavy reads

Hi Igor,

This time it was maybe only ~30 minutes, but it can take longer.
After running for one day, the restart now took 1:30 minutes:

2022-04-11T13:43:59.832+0200 7f02feb14f00  0 bluestore(/var/lib/ceph/osd/ceph-24) _open_db_and_around read-only:0 repair:0
2022-04-11T13:45:34.688+0200 7f02feb14f00  0 _get_class not permitted to load kvs

Restarting directly again took only a few seconds:

2022-04-11T13:47:23.136+0200 7f4ad07f9f00  0 bluestore(/var/lib/ceph/osd/ceph-24) _open_db_and_around read-only:0 repair:0
2022-04-11T13:47:32.036+0200 7f4ad07f9f00  0 _get_class not permitted to load kvs

It should be the standard config (I deployed from Proxmox):

ceph config dump
WHO    MASK  LEVEL     OPTION                                 VALUE           RO
  mon        advanced  auth_allow_insecure_global_id_reclaim  false
  mgr        advanced  mgr/pg_autoscaler/autoscale_profile    scale-down
  mgr        advanced  mgr/zabbix/identifier                  ceph1-cluster   *
  mgr        advanced  mgr/zabbix/zabbix_host                 192.168.11.101  *

Thanks,
Philipp



-----Original Message-----
From: Igor Fedotov <igor.fedotov@xxxxxxxx>
Sent: Monday, April 11, 2022 12:13
To: VELARTIS GmbH | Philipp Dürhammer <p.duerhammer@xxxxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re: Re: osd needs more than one hour to start with heavy reads

Hi Philipp,

does the effect persist if you perform an OSD restart once again shortly after the first one?

Do you have any custom rocksdb/bluestore settings? "ceph config dump"
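Note that "ceph config dump" only covers the monitors' central config database; anything coming from a local ceph.conf or a per-daemon override would need something like (using osd.12 from your log as the example):

ceph config show osd.12 | grep -Ei 'rocksdb|bluestore'

or, on the OSD's host via the admin socket, "ceph daemon osd.12 config show".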

From the log you attached I can see a 30 min gap during the load:

2022-04-10T11:42:51.607+0200 7fe6e8ba2f00  0
bluestore(/var/lib/ceph/osd/ceph-12) _open_db_and_around read-only:0
repair:0
2022-04-10T12:12:36.924+0200 7fe6e8ba2f00  0 _get_class not permitted to load kvs


But you mentioned that it took more than an hour for the OSD to load. Is there any other long-running stuff on startup? Or did you just overestimate the loading time? ;)


Thanks,

Igor


On 4/10/2022 10:18 PM, VELARTIS GmbH | Philipp Dürhammer wrote:
> Here is the full log after the OSD was up successfully (but it spent a
> looong time reading at 100 MB/sec)
>
> 2022-04-10T11:42:44.219+0200 7f68961f8700 -1 received signal: 
> Terminated from /sbin/init noibrs noibpb nopti nospectre_v2 
> nospec_store_bypass_disable no_stf_barrier  (PID: 1) UID: 0
> 2022-04-10T11:42:44.219+0200 7f68961f8700 -1 osd.12 15550 *** Got 
> signal Terminated ***
> 2022-04-10T11:42:44.219+0200 7f68961f8700 -1 osd.12 15550 *** 
> Immediate shutdown (osd_fast_shutdown=true) ***
> 2022-04-10T11:42:47.007+0200 7fe6e8ba2f00  0 set uid:gid to
> 64045:64045 (ceph:ceph)
> 2022-04-10T11:42:47.007+0200 7fe6e8ba2f00  0 ceph version 16.2.7
> (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable), process 
> ceph-osd, pid 1347448
> 2022-04-10T11:42:47.007+0200 7fe6e8ba2f00  0 pidfile_write: ignore 
> empty --pid-file
> 2022-04-10T11:42:47.575+0200 7fe6e8ba2f00  0 starting osd.12 osd_data
> /var/lib/ceph/osd/ceph-12 /var/lib/ceph/osd/ceph-12/journal
> 2022-04-10T11:42:47.603+0200 7fe6e8ba2f00  0 load: jerasure load: lrc
> load: isa
> 2022-04-10T11:42:48.247+0200 7fe6e8ba2f00  0 osd.12:0.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:48.555+0200 7fe6e8ba2f00  0 osd.12:1.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:48.863+0200 7fe6e8ba2f00  0 osd.12:2.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:49.167+0200 7fe6e8ba2f00  0 osd.12:3.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:49.463+0200 7fe6e8ba2f00  0 osd.12:4.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:50.191+0200 7fe6e8ba2f00  0 osd.12:5.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:50.931+0200 7fe6e8ba2f00  0 osd.12:6.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:51.607+0200 7fe6e8ba2f00  0 osd.12:7.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:51.607+0200 7fe6e8ba2f00  0
> bluestore(/var/lib/ceph/osd/ceph-12) _open_db_and_around read-only:0
> repair:0
> 2022-04-10T12:12:36.924+0200 7fe6e8ba2f00  0 _get_class not permitted 
> to load kvs
> 2022-04-10T12:12:36.924+0200 7fe6e8ba2f00  0 <cls>
> ./src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
> 2022-04-10T12:12:36.924+0200 7fe6e8ba2f00  0 _get_class not permitted 
> to load sdk
> 2022-04-10T12:12:36.928+0200 7fe6e8ba2f00  0 _get_class not permitted 
> to load lua
> 2022-04-10T12:12:36.940+0200 7fe6e8ba2f00  0 <cls>
> ./src/cls/hello/cls_hello.cc:316: loading cls_hello
> 2022-04-10T12:12:36.940+0200 7fe6e8ba2f00  0 osd.12 15550 crush map 
> has features 432629239337189376, adjusting msgr requires for clients
> 2022-04-10T12:12:36.940+0200 7fe6e8ba2f00  0 osd.12 15550 crush map 
> has features 432629239337189376 was 8705, adjusting msgr requires for 
> mons
> 2022-04-10T12:12:36.940+0200 7fe6e8ba2f00  0 osd.12 15550 crush map 
> has features 3314933000854323200, adjusting msgr requires for osds
>
> -----Original Message-----
> From: VELARTIS GmbH | Philipp Dürhammer <p.duerhammer@xxxxxxxxxxx>
> Sent: Sunday, April 10, 2022 11:49
> To: ceph-users@xxxxxxx
> Subject: osd needs more than one hour to start with heavy reads
>
> Hi,
>
> Always when I try to start / restart an OSD it takes more than one hour to start up. Meanwhile I see more than 100 MB/s of reads on the OSD the whole time (it's an enterprise SSD). The log shows:
> 2022-04-10T11:42:44.219+0200 7f68961f8700 -1 osd.12 15550 *** 
> Immediate shutdown (osd_fast_shutdown=true) ***
> 2022-04-10T11:42:47.007+0200 7fe6e8ba2f00  0 set uid:gid to
> 64045:64045 (ceph:ceph)
> 2022-04-10T11:42:47.007+0200 7fe6e8ba2f00  0 ceph version 16.2.7
> (f9aa029788115b5df5eeee328f584156565ee5b7) pacific (stable), process 
> ceph-osd, pid 1347448
> 2022-04-10T11:42:47.007+0200 7fe6e8ba2f00  0 pidfile_write: ignore 
> empty --pid-file
> 2022-04-10T11:42:47.575+0200 7fe6e8ba2f00  0 starting osd.12 osd_data
> /var/lib/ceph/osd/ceph-12 /var/lib/ceph/osd/ceph-12/journal
> 2022-04-10T11:42:47.603+0200 7fe6e8ba2f00  0 load: jerasure load: lrc
> load: isa
> 2022-04-10T11:42:48.247+0200 7fe6e8ba2f00  0 osd.12:0.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:48.555+0200 7fe6e8ba2f00  0 osd.12:1.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:48.863+0200 7fe6e8ba2f00  0 osd.12:2.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:49.167+0200 7fe6e8ba2f00  0 osd.12:3.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:49.463+0200 7fe6e8ba2f00  0 osd.12:4.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:50.191+0200 7fe6e8ba2f00  0 osd.12:5.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:50.931+0200 7fe6e8ba2f00  0 osd.12:6.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:51.607+0200 7fe6e8ba2f00  0 osd.12:7.OSDShard using 
> op scheduler ClassedOpQueueScheduler(queue=WeightedPriorityQueue,
> cutoff=196)
> 2022-04-10T11:42:51.607+0200 7fe6e8ba2f00  0
> bluestore(/var/lib/ceph/osd/ceph-12) _open_db_and_around read-only:0
> repair:0
> (END)

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492 Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



