Re: ceph osd crush move exception

Hi,

You're right, the choose_args are coming from the balancer, but I'm wondering why they would affect the crush move. Do you re-deploy the same hosts when resizing, or do you add new ones? Maybe that would explain how the balancer's choose_args could affect a move operation.
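
If you want to double-check where they came from, something like the following
should show it (just a sketch, assuming a Luminous or newer cluster; the
balancer's crush-compat mode stores its result as a weight-set, which appears
as a choose_args section in the crushmap):

   ceph balancer status                    # is the balancer on, and in which mode?
   ceph osd crush weight-set ls            # a "compat" entry here comes from crush-compat mode
   ceph osd getcrushmap -o crushmap.bin    # dump and decompile the raw map
   crushtool -d crushmap.bin -o crushmap.txt
   grep -A 2 choose_args crushmap.txt      # any choose_args section shows up here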


Quoting zhengyi deng <gooddzy@xxxxxxxxx>:

Hi Eugen Block,

the new node was added with "ceph osd crush add-bucket 192.168.1.47 host". Executing
"ceph osd crush move 192.168.1.47 root=default" then caused ceph-mon to crash and restart.

I solved the problem: there was a choose_args section in the crushmap, and
after I removed it the move worked.
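
For anyone hitting the same crash, the removal can be done roughly like this
(a sketch; adjust the file names to your environment, and if the choose_args
was created by the balancer's crush-compat mode, "ceph osd crush weight-set
rm-compat" should remove it as well):

   ceph osd getcrushmap -o crushmap.bin
   crushtool -d crushmap.bin -o crushmap.txt
   # edit crushmap.txt and delete the whole "choose_args" { ... } section
   crushtool -c crushmap.txt -o crushmap.new
   ceph osd setcrushmap -i crushmap.new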

But I don't remember ever configuring choose_args in the crushmap. Maybe it was
added by the "ceph balancer module", which I have used?

Thanks

On Thu, May 5, 2022 at 15:58, Eugen Block <eblock@xxxxxx> wrote:

Hi,

can you share your 'ceph osd tree' output so it's easier to understand
what might be going wrong? I didn't check the script in detail; what
exactly do you mean by extending? Do you create new hosts in a
different root of the osd tree? Do those new hosts get PGs assigned
although they're in a different root? Has this worked before and did it
fail just this once? Does the failing MON recover or does it stay in a
degraded state?
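
To answer most of that in one go, the output of roughly the following would
help (just a suggestion, adjust as needed):

   ceph -s
   ceph osd tree
   ceph osd df tree     # shows PGs per OSD, so you can see whether the new hosts get PGs
   ceph quorum_status   # shows whether the failing MON rejoins the quorum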


Quoting 邓政毅 <gooddzy@xxxxxxxxx>:

> Hi,
>
> I have an OpenStack (Pike) environment with a Ceph cluster (ceph version
> 12.2.5 luminous stable) deployed with kolla-ansible.
>
> While scaling the Ceph cluster I noticed a read/write exception on the
> OpenStack VMs.
>
> (kolla-ansible uses the following script when expanding Ceph:
> https://github.com/openstack/kolla/blob/pike-eol/docker/ceph/ceph-osd/extend_start.sh )
>
> I ran 'ceph mon status' while checking the storage and found that the
> ceph mons were constantly re-electing.
>
> At first I thought it was a kolla issue, so I ran the script manually.
> The problem was reproduced when I executed 'ceph osd crush move
> 192.168.1.47 root=default'.
>
> I don't know if the ceph-mon log below can help to locate the problem.
>
>    -82> 2022-04-21 23:30:35.329384 7f2f13b27700  5 mon.192.168.1.7@2(leader).paxos(paxos active c 83130286..83130896) is_readable = 1 - now=2022-04-21 23:30:35.329387 lease_expire=2022-04-21 23:30:39.793236 has v0 lc 83130896
>    -81> 2022-04-21 23:30:35.329438 7f2f13b27700  4 mon.192.168.1.7@2(leader).mgr e1702 beacon from 5534841203
>    -80> 2022-04-21 23:30:35.329464 7f2f13b27700  4 mon.192.168.1.7@2(leader).mgr e1702 beacon from 5534841203
>    -79> 2022-04-21 23:30:35.640375 7f2f10b21700  1 -- 192.168.1.7:6789/0 >> - conn(0x5560e28e3800 :6789 s=STATE_ACCEPTING pgs=0 cs=0 l=0)._process_connection sd=219 -
>    -78> 2022-04-21 23:30:35.640745 7f2f10b21700  2 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_ACCEPTING_WAIT_SEQ pgs=2 cs=1 l=1).handle_connect_msg accept write reply msg done
>    -77> 2022-04-21 23:30:35.640838 7f2f10b21700  2 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_ACCEPTING_WAIT_SEQ pgs=2 cs=1 l=1)._process_connection accept get newly_acked_seq 0
>    -76> 2022-04-21 23:30:35.641004 7f2f10b21700  5 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=2 cs=1 l=1). rx client.? seq 1 0x5560e3380000 auth(proto 0 31 bytes epoch 0) v1
>    -75> 2022-04-21 23:30:35.641116 7f2f13b27700  1 -- 192.168.1.7:6789/0 <== client.? 192.168.1.13:0/1283852369 1 ==== auth(proto 0 31 bytes epoch 0) v1 ==== 61+0+0 (535495083 0 0) 0x5560e3380000 con 0x5560e28e3800
>    -74> 2022-04-21 23:30:35.641207 7f2f13b27700  5 mon.192.168.1.7@2(leader).paxos(paxos active c 83130286..83130896) is_readable = 1 - now=2022-04-21 23:30:35.641211 lease_expire=2022-04-21 23:30:39.793236 has v0 lc 83130896
>    -73> 2022-04-21 23:30:35.641353 7f2f13b27700  1 -- 192.168.1.7:6789/0 --> 192.168.1.13:0/1283852369 -- mon_map magic: 0 v1 -- 0x5560e30da000 con 0
>    -72> 2022-04-21 23:30:35.641448 7f2f13b27700  2 mon.192.168.1.7@2(leader) e3 send_reply 0x5560e3bd9320 0x5560e2a27200 auth_reply(proto 2 0 (0) Success) v1
>    -71> 2022-04-21 23:30:35.641496 7f2f13b27700  1 -- 192.168.1.7:6789/0 --> 192.168.1.13:0/1283852369 -- auth_reply(proto 2 0 (0) Success) v1 -- 0x5560e2a27200 con 0
>    -70> 2022-04-21 23:30:35.642447 7f2f10b21700  5 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=2 cs=1 l=1). rx client.? seq 2 0x5560e2a27200 auth(proto 2 32 bytes epoch 0) v1
>    -69> 2022-04-21 23:30:35.642557 7f2f13b27700  1 -- 192.168.1.7:6789/0 <== client.? 192.168.1.13:0/1283852369 2 ==== auth(proto 2 32 bytes epoch 0) v1 ==== 62+0+0 (4221912666 0 0) 0x5560e2a27200 con 0x5560e28e3800
>    -68> 2022-04-21 23:30:35.642630 7f2f13b27700  5 mon.192.168.1.7@2(leader).paxos(paxos active c 83130286..83130896) is_readable = 1 - now=2022-04-21 23:30:35.642633 lease_expire=2022-04-21 23:30:39.793236 has v0 lc 83130896
>    -67> 2022-04-21 23:30:35.643069 7f2f13b27700  2 mon.192.168.1.7@2(leader) e3 send_reply 0x5560e3bd9320 0x5560e3380000 auth_reply(proto 2 0 (0) Success) v1
>    -66> 2022-04-21 23:30:35.643145 7f2f13b27700  1 -- 192.168.1.7:6789/0 --> 192.168.1.13:0/1283852369 -- auth_reply(proto 2 0 (0) Success) v1 -- 0x5560e3380000 con 0
>    -65> 2022-04-21 23:30:35.643677 7f2f10b21700  5 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=2 cs=1 l=1). rx client.? seq 3 0x5560e3380000 auth(proto 2 165 bytes epoch 0) v1
>    -64> 2022-04-21 23:30:35.643771 7f2f13b27700  1 -- 192.168.1.7:6789/0 <== client.? 192.168.1.13:0/1283852369 3 ==== auth(proto 2 165 bytes epoch 0) v1 ==== 195+0+0 (4252124022 0 0) 0x5560e3380000 con 0x5560e28e3800
>    -63> 2022-04-21 23:30:35.643829 7f2f13b27700  5 mon.192.168.1.7@2(leader).paxos(paxos active c 83130286..83130896) is_readable = 1 - now=2022-04-21 23:30:35.643832 lease_expire=2022-04-21 23:30:39.793236 has v0 lc 83130896
>    -62> 2022-04-21 23:30:35.644448 7f2f13b27700  2 mon.192.168.1.7@2(leader) e3 send_reply 0x5560e3bd9320 0x5560e2a27200 auth_reply(proto 2 0 (0) Success) v1
>    -61> 2022-04-21 23:30:35.644499 7f2f13b27700  1 -- 192.168.1.7:6789/0 --> 192.168.1.13:0/1283852369 -- auth_reply(proto 2 0 (0) Success) v1 -- 0x5560e2a27200 con 0
>    -60> 2022-04-21 23:30:35.645154 7f2f10b21700  1 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN pgs=2 cs=1 l=1).read_bulk peer close file descriptor 219
>    -59> 2022-04-21 23:30:35.645219 7f2f10b21700  1 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN pgs=2 cs=1 l=1).read_until read failed
>    -58> 2022-04-21 23:30:35.645242 7f2f10b21700  1 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN pgs=2 cs=1 l=1).process read tag failed
>    -57> 2022-04-21 23:30:35.645297 7f2f10b21700  1 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN pgs=2 cs=1 l=1).fault on lossy channel, failing
>    -56> 2022-04-21 23:30:35.645348 7f2f10b21700  2 -- 192.168.1.7:6789/0 >> 192.168.1.13:0/1283852369 conn(0x5560e28e3800 :6789 s=STATE_OPEN pgs=2 cs=1 l=1)._stop
>    -55> 2022-04-21 23:30:36.274516 7f2f10320700  5 -- 192.168.1.7:6789/0 >> 192.168.1.11:6789/0 conn(0x5560e1e08800 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=958975211 cs=1 l=0). rx mon.3 seq 38735529 0x5560e329b440 mon_health( service 1 op tell e 0 r 0 ) v1
>    -54> 2022-04-21 23:30:36.274662 7f2f13b27700  1 -- 192.168.1.7:6789/0 <== mon.3 192.168.1.11:6789/0 38735529 ==== mon_health( service 1 op tell e 0 r 0 ) v1 ==== 108+0+0 (1769890505 0 0) 0x5560e329b440 con 0x5560e1e08800
>    -53> 2022-04-21 23:30:36.307747 7f2f10b21700  5 -- 192.168.1.7:6789/0 >> 192.168.1.13:6789/0 conn(0x5560e2396000 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1023323427 cs=1 l=0). rx mon.4 seq 1911532605 0x5560e5a9cf00 forward(mgrbeacon mgr.a5-9c-stor-i620-1(4f7b3bf4-15fa-4ac0-97ab-bae34c68e12d,5535700725, -, 0) v6 caps allow profile mgr tid 29268 con_features 2305244844532236283) v3
>    -52> 2022-04-21 23:30:36.307882 7f2f13b27700  1 -- 192.168.1.7:6789/0 <== mon.4 192.168.1.13:6789/0 1911532605 ==== forward(mgrbeacon mgr.a5-9c-stor-i620-1(4f7b3bf4-15fa-4ac0-97ab-bae34c68e12d,5535700725, -, 0) v6 caps allow profile mgr tid 29268 con_features 2305244844532236283) v3 ==== 895+0+0 (3218645601 0 0) 0x5560e5a9cf00 con 0x5560e2396000
>    -51> 2022-04-21 23:30:36.308012 7f2f13b27700  5 mon.192.168.1.7@2(leader).paxos(paxos active c 83130286..83130896) is_readable = 1 - now=2022-04-21 23:30:36.308015 lease_expire=2022-04-21 23:30:39.793236 has v0 lc 83130896
>    -50> 2022-04-21 23:30:36.308072 7f2f13b27700  1 -- 192.168.1.7:6789/0 --> 192.168.1.13:6789/0 -- route(no-reply tid 29268) v3 -- 0x5560e329b440 con 0
>    -49> 2022-04-21 23:30:36.308135 7f2f13b27700  4 mon.192.168.1.7@2(leader).mgr e1702 beacon from 5535700725
>    -48> 2022-04-21 23:30:36.308221 7f2f13b27700  4 mon.192.168.1.7@2(leader).mgr e1702 beacon from 5535700725
>    -47> 2022-04-21 23:30:36.600706 7f2f10320700  5 -- 192.168.1.7:6789/0 >> 192.168.1.11:6789/0 conn(0x5560e1e08800 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=958975211 cs=1 l=0). rx mon.3 seq 38735530 0x5560e5834780 forward(mon_command({"prefix": "osd crush move", "args": ["root=default"], "name": "192.168.1.47"} v 0) v1 caps allow * tid 124914 con_features 2305244844532236283) v3
>    -46> 2022-04-21 23:30:36.600851 7f2f13b27700  1 -- 192.168.1.7:6789/0 <== mon.3 192.168.1.11:6789/0 38735530 ==== forward(mon_command({"prefix": "osd crush move", "args": ["root=default"], "name": "192.168.1.47"} v 0) v1 caps allow * tid 124914 con_features 2305244844532236283) v3 ==== 288+0+0 (1352957886 0 0) 0x5560e5834780 con 0x5560e1e08800
>    -45> 2022-04-21 23:30:36.600974 7f2f10320700  5 -- 192.168.1.7:6789/0 >> 192.168.1.11:6789/0 conn(0x5560e1e08800 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=958975211 cs=1 l=0). rx mon.3 seq 38735531 0x5560e5a91400 forward(log(1 entries from seq 9552 at 2022-04-21 23:30:36.599776) v1 caps allow * tid 124915 con_features 2305244844532236283) v3
>    -44> 2022-04-21 23:30:36.601144 7f2f13b27700  0 mon.192.168.1.7@2(leader) e3 handle_command mon_command({"prefix": "osd crush move", "args": ["root=default"], "name": "192.168.1.47"} v 0) v1
>    -43> 2022-04-21 23:30:36.601334 7f2f13b27700  0 log_channel(audit) log [INF] : from='client.5563480993 -' entity='client.admin' cmd=[{"prefix": "osd crush move", "args": ["root=default"], "name": "192.168.1.47"}]: dispatch
>    -42> 2022-04-21 23:30:36.601361 7f2f13b27700 10 log_client _send_to_monlog to self
>    -41> 2022-04-21 23:30:36.601378 7f2f13b27700 10 log_client  log_queue is 1 last_log 272 sent 271 num 1 unsent 1 sending 1
>    -40> 2022-04-21 23:30:36.601390 7f2f13b27700 10 log_client  will send 2022-04-21 23:30:36.601358 mon.192.168.1.7 mon.2 192.168.1.7:6789/0 272 : audit [INF] from='client.5563480993 -' entity='client.admin' cmd=[{"prefix": "osd crush move", "args": ["root=default"], "name": "192.168.1.47"}]: dispatch
>    -39> 2022-04-21 23:30:36.601477 7f2f13b27700  1 -- 192.168.1.7:6789/0 --> 192.168.1.7:6789/0 -- log(1 entries from seq 272 at 2022-04-21 23:30:36.601358) v1 -- 0x5560e26f0900 con 0
>    -38> 2022-04-21 23:30:36.601567 7f2f13b27700  5 mon.192.168.1.7@2(leader).paxos(paxos active c 83130286..83130896) is_readable = 1 - now=2022-04-21 23:30:36.601570 lease_expire=2022-04-21 23:30:39.793236 has v0 lc 83130896
>    -37> 2022-04-21 23:30:36.623953 7f2f13b27700  0 mon.192.168.1.7@2(leader).osd e47733 moving crush item name '192.168.1.47' to location {root=default}
>    -36> 2022-04-21 23:30:36.627511 7f2f13b27700  5 check_item_loc item -11 loc {root=default}
>    -35> 2022-04-21 23:30:36.627546 7f2f13b27700  2 warning: did not specify location for 'host' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -34> 2022-04-21 23:30:36.627573 7f2f13b27700  2 warning: did not specify location for 'chassis' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -33> 2022-04-21 23:30:36.627593 7f2f13b27700  2 warning: did not specify location for 'rack' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -32> 2022-04-21 23:30:36.627619 7f2f13b27700  2 warning: did not specify location for 'row' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -31> 2022-04-21 23:30:36.627642 7f2f13b27700  2 warning: did not specify location for 'pdu' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -30> 2022-04-21 23:30:36.627665 7f2f13b27700  2 warning: did not specify location for 'pod' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -29> 2022-04-21 23:30:36.627687 7f2f13b27700  2 warning: did not specify location for 'room' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -28> 2022-04-21 23:30:36.627709 7f2f13b27700  2 warning: did not specify location for 'datacenter' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -27> 2022-04-21 23:30:36.627736 7f2f13b27700  2 warning: did not specify location for 'region' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -26> 2022-04-21 23:30:36.627799 7f2f13b27700  5 adjust_item_weight -12 weight 0
>    -25> 2022-04-21 23:30:36.627816 7f2f13b27700  5 choose_args_adjust_item_weight -11 weight [0]
>    -24> 2022-04-21 23:30:36.627837 7f2f13b27700  5 check_item_loc item -11 loc {root=demo}
>    -23> 2022-04-21 23:30:36.627844 7f2f13b27700  2 warning: did not specify location for 'host' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -22> 2022-04-21 23:30:36.627864 7f2f13b27700  2 warning: did not specify location for 'chassis' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -21> 2022-04-21 23:30:36.627888 7f2f13b27700  2 warning: did not specify location for 'rack' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -20> 2022-04-21 23:30:36.627923 7f2f13b27700  2 warning: did not specify location for 'row' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -19> 2022-04-21 23:30:36.627958 7f2f13b27700  2 warning: did not specify location for 'pdu' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -18> 2022-04-21 23:30:36.627980 7f2f13b27700  2 warning: did not specify location for 'pod' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -17> 2022-04-21 23:30:36.628002 7f2f13b27700  2 warning: did not specify location for 'room' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -16> 2022-04-21 23:30:36.628024 7f2f13b27700  2 warning: did not specify location for 'datacenter' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -15> 2022-04-21 23:30:36.628046 7f2f13b27700  2 warning: did not specify location for 'region' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -14> 2022-04-21 23:30:36.628093 7f2f13b27700  5 insert_item item -11 weight 0 name 192.168.1.47 loc {root=default}
>    -13> 2022-04-21 23:30:36.628131 7f2f13b27700  2 warning: did not specify location for 'host' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -12> 2022-04-21 23:30:36.628155 7f2f13b27700  2 warning: did not specify location for 'chassis' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -11> 2022-04-21 23:30:36.628177 7f2f13b27700  2 warning: did not specify location for 'rack' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>    -10> 2022-04-21 23:30:36.628200 7f2f13b27700  2 warning: did not specify location for 'row' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>     -9> 2022-04-21 23:30:36.628222 7f2f13b27700  2 warning: did not specify location for 'pdu' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>     -8> 2022-04-21 23:30:36.628243 7f2f13b27700  2 warning: did not specify location for 'pod' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>     -7> 2022-04-21 23:30:36.628266 7f2f13b27700  2 warning: did not specify location for 'room' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>     -6> 2022-04-21 23:30:36.628289 7f2f13b27700  2 warning: did not specify location for 'datacenter' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>     -5> 2022-04-21 23:30:36.628319 7f2f13b27700  2 warning: did not specify location for 'region' level (levels are {0=osd,1=host,2=chassis,3=rack,4=row,5=pdu,6=pod,7=room,8=datacenter,9=region,10=root})
>     -4> 2022-04-21 23:30:36.628339 7f2f13b27700  5 insert_item adding -11 weight 0 to bucket -1
>     -3> 2022-04-21 23:30:36.628427 7f2f13b27700  5 adjust_item_weight_in_loc -11 weight 0 in {root=default}
>     -2> 2022-04-21 23:30:36.628443 7f2f13b27700  5 adjust_item_weight_in_loc -11 diff 0 in bucket -1
>     -1> 2022-04-21 23:30:36.628448 7f2f13b27700  5 adjust_item_weight -1 weight 4482660
>      0> 2022-04-21 23:30:36.647294 7f2f13b27700 -1 *** Caught signal (Segmentation fault) **
>  in thread 7f2f13b27700 thread_name:ms_dispatch
>
>  ceph version 12.2.5 (cad919881333ac92274171586c827e01f554a70a) luminous (stable)
>  1: (()+0x8f4d31) [0x5560d751ed31]
>  2: (()+0xf6d0) [0x7f2f1ce836d0]
>  3: (CrushWrapper::device_class_clone(int, int, std::map<int, std::map<int, int, std::less<int>, std::allocator<std::pair<int const, int> > >, std::less<int>, std::allocator<std::pair<int const, std::map<int, int, std::less<int>, std::allocator<std::pair<int const, int> > > > > > const&, std::set<int, std::less<int>, std::allocator<int> > const&, int*, std::map<int, std::map<int, std::vector<int, std::allocator<int> >, std::less<int>, std::allocator<std::pair<int const, std::vector<int, std::allocator<int> > > > >, std::less<int>, std::allocator<std::pair<int const, std::map<int, std::vector<int, std::allocator<int> >, std::less<int>, std::allocator<std::pair<int const, std::vector<int, std::allocator<int> > > > > > > >*)+0xa87) [0x5560d7496eb7]
>  4: (CrushWrapper::populate_classes(std::map<int, std::map<int, int, std::less<int>, std::allocator<std::pair<int const, int> > >, std::less<int>, std::allocator<std::pair<int const, std::map<int, int, std::less<int>, std::allocator<std::pair<int const, int> > > > > > const&)+0x1cf) [0x5560d74974bf]
>  5: (CrushWrapper::rebuild_roots_with_classes()+0xfe) [0x5560d749766e]
>  6: (CrushWrapper::insert_item(CephContext*, int, float, std::string, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const&)+0x78f) [0x5560d74993af]
>  7: (CrushWrapper::move_bucket(CephContext*, int, std::map<std::string, std::string, std::less<std::string>, std::allocator<std::pair<std::string const, std::string> > > const&)+0xc1) [0x5560d7499eb1]
>  8: (OSDMonitor::prepare_command_impl(boost::intrusive_ptr<MonOpRequest>, std::map<std::string, boost::variant<std::string, bool, long, double, std::vector<std::string, std::allocator<std::string> >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > >, std::less<std::string>, std::allocator<std::pair<std::string const, boost::variant<std::string, bool, long, double, std::vector<std::string, std::allocator<std::string> >, std::vector<long, std::allocator<long> >, std::vector<double, std::allocator<double> > > > > >&)+0x4dd2) [0x5560d7163f82]
>  9: (OSDMonitor::prepare_command(boost::intrusive_ptr<MonOpRequest>)+0x647) [0x5560d717ed57]
>  10: (OSDMonitor::prepare_update(boost::intrusive_ptr<MonOpRequest>)+0x39e) [0x5560d717f4be]
>  11: (PaxosService::dispatch(boost::intrusive_ptr<MonOpRequest>)+0xaf8) [0x5560d710b3e8]
>  12: (Monitor::handle_command(boost::intrusive_ptr<MonOpRequest>)+0x1d5b) [0x5560d6fe633b]
>  13: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0x919) [0x5560d6febee9]
>  14: (Monitor::_ms_dispatch(Message*)+0x7eb) [0x5560d6fed16b]
>  15: (Monitor::handle_forward(boost::intrusive_ptr<MonOpRequest>)+0xa8d) [0x5560d6feea7d]
>  16: (Monitor::dispatch_op(boost::intrusive_ptr<MonOpRequest>)+0xdbd) [0x5560d6fec38d]
>  17: (Monitor::_ms_dispatch(Message*)+0x7eb) [0x5560d6fed16b]
>  18: (Monitor::ms_dispatch(Message*)+0x23) [0x5560d70192d3]
>  19: (DispatchQueue::entry()+0x792) [0x5560d74ca0d2]
>  20: (DispatchQueue::DispatchThread::entry()+0xd) [0x5560d72c349d]
>  21: (()+0x7e25) [0x7f2f1ce7be25]
>  22: (clone()+0x6d) [0x7f2f1a286bad]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
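>
> If someone wants to resolve the anonymous frames above: a rough approach, assuming
> the matching ceph-debuginfo package is installed and the monitor binary lives at
> /usr/bin/ceph-mon inside the ceph-mon container (paths may differ in your deployment),
> would be:
>
>   objdump -rdS /usr/bin/ceph-mon > ceph-mon.asm      # full disassembly, as the NOTE suggests
>   addr2line -C -f -e /usr/bin/ceph-mon 0x8f4d31      # e.g. frame 1, offset ()+0x8f4d31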
>
>
> Thanks!







_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



