Re: log_latency slow operation observed for submit_transact, latency = 22.644258499s

Hi Torkil,

you are very likely facing a well-known issue: a RocksDB performance drop after bulk data removal. The latter can occur on source OSDs once PG migration completes.

You might want to run DB compaction (preferably offline, using ceph-kvstore-tool) to get an OSD out of this "degraded" state, or as a preventive measure. I'd recommend doing that for all the OSDs right now, and once again after rebalancing has completed. This should improve things, but unfortunately there is no 100% guarantee.
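
For reference, a rough sketch of how that could look on a cephadm-managed cluster (these are the standard Ceph/cephadm tools; osd.112 is just the example OSD from the log below, adjust the id and path to your deployment):

"
# online compaction, OSD stays up (may itself be slow while it runs)
ceph tell osd.112 compact

# offline compaction: stop the daemon, compact from inside the OSD's container, restart
ceph orch daemon stop osd.112
cephadm shell --name osd.112 -- ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-112 compact
ceph orch daemon start osd.112
"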

I'm also curious whether you have DB/WAL on fast (SSD or NVMe) drives? This might be crucial.
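
(One way to double-check that, assuming the usual BlueStore OSD metadata fields; exact field names may vary slightly between releases:)

"
# a dedicated, non-rotational DB device indicates DB/WAL on flash
ceph osd metadata 112 | grep -E 'bluefs_dedicated_db|bluefs_dedicated_wal|bluefs_db_rotational|bluefs_db_devices'
"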


Thanks,

Igor

On 3/22/2024 9:59 AM, Torkil Svensgaard wrote:
Good morning,

Cephadm Reef 18.2.1. We recently added 4 hosts and changed a failure domain from host to datacenter, which is the reason for the large misplaced percentage.

We were seeing some pretty crazy spikes in "OSD Read Latencies" and "OSD Write Latencies" on the dashboard. Most of the time everything is fine, but then for periods of 1-4 hours latencies will go to 10+ seconds for one or more OSDs. This also happens outside scrub hours, and it is not the same OSDs every time. The OSDs affected are HDDs with DB/WAL on NVMe.

Log snippet:

"
...
2024-03-22T06:48:22.859+0000 7fb184b52700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fb169898700' had timed out after 15.000000954s
2024-03-22T06:48:22.859+0000 7fb185b54700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fb169898700' had timed out after 15.000000954s
2024-03-22T06:48:22.864+0000 7fb169898700  1 heartbeat_map clear_timeout 'OSD::osd_op_tp thread 0x7fb169898700' had timed out after 15.000000954s
2024-03-22T06:48:22.864+0000 7fb169898700  0 bluestore(/var/lib/ceph/osd/ceph-112) log_latency slow operation observed for submit_transact, latency = 17.716707230s
2024-03-22T06:48:22.880+0000 7fb1748ae700  0 bluestore(/var/lib/ceph/osd/ceph-112) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.732601166s, txc = 0x55a5bcda0f00
2024-03-22T06:48:38.077+0000 7fb184b52700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fb169898700' had timed out after 15.000000954s
2024-03-22T06:48:38.077+0000 7fb184b52700  1 heartbeat_map is_healthy 'OSD::osd_op_tp thread 0x7fb169898700' had timed out after 15.000000954s
...
"

"
[root@dopey ~]# ceph -s
  cluster:
    id:     8ee2d228-ed21-4580-8bbf-0649f229e21d
    health: HEALTH_WARN
            1 failed cephadm daemon(s)
            Low space hindering backfill (add storage if this doesn't resolve itself): 1 pg backfill_toofull

  services:
    mon: 5 daemons, quorum lazy,jolly,happy,dopey,sleepy (age 3d)
    mgr: jolly.tpgixt(active, since 10d), standbys: dopey.lxajvk, lazy.xuhetq
    mds: 1/1 daemons up, 2 standby
    osd: 540 osds: 539 up (since 6m), 539 in (since 15h); 6250 remapped pgs

  data:
    volumes: 1/1 healthy
    pools:   15 pools, 10849 pgs
    objects: 546.35M objects, 1.1 PiB
    usage:   1.9 PiB used, 2.3 PiB / 4.2 PiB avail
    pgs:     1425479651/3163081036 objects misplaced (45.066%)
             6224 active+remapped+backfill_wait
             4516 active+clean
             67   active+clean+scrubbing
             25   active+remapped+backfilling
             16   active+clean+scrubbing+deep
             1    active+remapped+backfill_wait+backfill_toofull

  io:
    client:   117 MiB/s rd, 68 MiB/s wr, 274 op/s rd, 183 op/s wr
    recovery: 438 MiB/s, 192 objects/s
"

Anyone know what the issue might be? Given that it happens on and off, with long stretches of normal low latencies in between, I think it unlikely that it is just because the cluster is busy.

Also, how come there's only a small number of PGs doing backfill when we have such a large misplaced percentage? Could this just be a backfill reservation logjam?
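
(For what it's worth, a sketch of how one might poke at that; osd_max_backfills and the mClock override flag are standard Reef options, the value of 2 is only an example:)

"
# how many concurrent backfills each OSD will accept
ceph config get osd osd_max_backfills

# with the default mClock scheduler in Reef this override has to be enabled
# before a changed osd_max_backfills actually takes effect
ceph config set osd osd_mclock_override_recovery_settings true
ceph config set osd osd_max_backfills 2

# list PGs currently waiting on a backfill reservation
ceph pg ls backfill_wait | head
"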

Best regards,

Torkil

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx



