Hey Bryan,

I suppose all nodes are using jumbo frames (MTU 9000), right?
I would also suggest checking the OSD->MON communication.

Can you send us the output of these commands?

* ceph -s
* ceph versions
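In case it helps, a quick way to verify that jumbo frames actually make it
through end to end is something like the following, run from each OSD node
against each mon (the interface name and mon host are placeholders -- adjust
them for your setup):

# ip link show eth0 | grep mtu
  (eth0 is just an example -- use the cluster-facing interface; it should report mtu 9000)
# ping -M do -s 8972 -c 3 <mon-host>
  (-M do forbids fragmentation; 8972 = 9000 minus 28 bytes of IP/ICMP headers)

If the large ping fails while a normal-sized one works, something in the path
is dropping jumbo frames, and large messages such as osdmaps can get stuck
while small ones still go through.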
[]'s
Arthur (aKa Guilherme Geronimo)

On 04/09/2019 14:18, Bryan Stillwell wrote:

Our test cluster is seeing a problem where peering is going incredibly slow
shortly after upgrading it to Nautilus (14.2.2) from Luminous (12.2.12).
From what I can tell it seems to be caused by "wait for new map" taking a
long time. When looking at dump_historic_slow_ops on pretty much any OSD I
see entries like this:

# ceph daemon osd.112 dump_historic_slow_ops
[...snip...]
{
    "description": "osd_pg_create(e180614 287.4b:177739 287.75:177739 287.1c3:177739 287.1cf:177739 287.1e1:177739 287.2dd:177739 287.2fc:177739 287.342:177739 287.382:177739)",
    "initiated_at": "2019-09-03 15:12:41.366514",
    "age": 4800.8847047119998,
    "duration": 4780.0579745630002,
    "type_data": {
        "flag_point": "started",
        "events": [
            { "time": "2019-09-03 15:12:41.366514", "event": "initiated" },
            { "time": "2019-09-03 15:12:41.366514", "event": "header_read" },
            { "time": "2019-09-03 15:12:41.366501", "event": "throttled" },
            { "time": "2019-09-03 15:12:41.366547", "event": "all_read" },
            { "time": "2019-09-03 15:39:03.379456", "event": "dispatched" },
            { "time": "2019-09-03 15:39:03.379477", "event": "wait for new map" },
            { "time": "2019-09-03 15:39:03.522376", "event": "wait for new map" },
            { "time": "2019-09-03 15:53:55.912499", "event": "wait for new map" },
            { "time": "2019-09-03 15:59:37.909063", "event": "wait for new map" },
            { "time": "2019-09-03 16:00:43.356023", "event": "wait for new map" },
            { "time": "2019-09-03 16:20:50.575498", "event": "wait for new map" },
            { "time": "2019-09-03 16:31:48.689415", "event": "started" },
            { "time": "2019-09-03 16:32:21.424489", "event": "done" }
        ]
    }

It always seems to be in osd_pg_create() with multiple "wait for new map"
events before it finally does something. What could be causing it to take so
long to get the OSD map? The mons don't appear to be overloaded in any way.

Thanks,
Bryan
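Regarding the "wait for new map" events above: it may also be worth checking
how far behind the OSDs' osdmaps are compared to the cluster. A rough check
(using osd.112 from the example, but any OSD would do; the daemon command has
to run on that OSD's host):

# ceph osd dump | head -1
  (the first line shows the current cluster osdmap epoch)
# ceph daemon osd.112 status
  (shows "oldest_map" and "newest_map" as that OSD sees them)

If "newest_map" lags far behind the cluster epoch, the OSDs are having
trouble fetching maps from the mons, which would match osd_pg_create ops
sitting in "wait for new map".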
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx