Re: _delete_some new onodes has appeared since PG removal started


 



We have also started seeing wrong down marks:

2021-04-22 00:02:11.245 7faccf703700  0 osd.22 pg_epoch: 271687 pg[17.739( v 270365'95284021 (270347'95280922,270365'95284021] lb MIN (bitwise) local-lis/les=269695/269696 n=3488450
 ec=51990/51987 lis/c 271642/271615 les/c/f 271643/271616/0 271644/271653/271653) [702,655,39] r=-1 lpr=271653 DELETING pi=[269695,271653)/3 crt=270365'95284021 lcod 270365'95284020
 unknown NOTIFY mbc={}] _delete_some additional unexpected onode list (new onodes has appeared since PG removal started[#17:9ce00000::::head#]
2021-04-22 00:02:16.846 7faccf703700  0 bluestore(/var/lib/ceph/osd/ceph-22) log_latency slow operation observed for submit_transact, latency = 5.59719s
2021-04-22 00:02:16.846 7face5be2700  0 bluestore(/var/lib/ceph/osd/ceph-22) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.59982s, txc = 0x259aa700
2021-04-22 00:02:16.962 7face13d9700  0 log_channel(cluster) log [WRN] : Monitor daemon marked osd.22 down, but it is still running


2021-04-22 00:02:16.962 7face13d9700  0 log_channel(cluster) log [DBG] : map e271689 wrongly marked me down at e271688
2021-04-21 22:36:07.299 7f6e0b153700  0 bluestore(/var/lib/ceph/osd/ceph-23) log_latency slow operation observed for submit_transact, latency = 58.6702s
2021-04-21 22:36:07.303 7f6e24638700  0 bluestore(/var/lib/ceph/osd/ceph-23) log_latency_fn slow operation observed for _txc_committed_kv, latency = 58.6741s, txc = 0xb99aea00
2021-04-21 22:36:07.483 7f6e1fe2f700  0 log_channel(cluster) log [WRN] : Monitor daemon marked osd.23 down, but it is still running
2021-04-21 22:36:07.483 7f6e1fe2f700  0 log_channel(cluster) log [DBG] : map e270669 wrongly marked me down at e270667

Recently I created [1], which seems to be something of a reproducer.
I have increased delete_sleep again.
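For reference, a sketch of how the delete sleep can be raised (the option name and values here are examples, not a recommendation; on recent releases the per-device variants osd_delete_sleep_hdd/ssd take precedence for the matching media):

```shell
# Persist a higher delete sleep for all HDD-backed OSDs (example value)
ceph config set osd osd_delete_sleep_hdd 2.0

# Or inject it at runtime into a single OSD without restarting it
ceph tell osd.22 injectargs '--osd_delete_sleep 2.0'

# Verify what the daemon is actually using
ceph config show osd.22 | grep osd_delete_sleep
```

This throttles _delete_some between transactions, trading slower PG removal for less latency pressure on the store.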


[1] https://tracker.ceph.com/issues/50297



k

> On 21 Apr 2021, at 18:26, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
> 
> hdd only. ~160k objects per PG.
> 
> The flapping is pretty rare -- we've moved hundreds of PGs today and
> only one flap. (this is with osd_heartbeat_grace =45. with the default
> 20s we had one flap per ~hour)

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


