could not find secret_id

Hello ceph-users:

First, apologies for my English.


I have found a bug (reproducible on 16.2.10 and 18.2.1), but I do not understand the root cause.

Reproduction steps:

1. Configure the chronyd service on all hosts so their clocks are synchronized.
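As a sketch (the parsing logic is my own, not part of the report), the precondition in step 1 can be checked by parsing `chronyc tracking` output: a host that merely runs chronyd but is not actually synchronized would still cause clock-skew problems later.

```shell
# Minimal sketch: decide whether a host's clock is synchronized from the
# output of 'chronyc tracking'. "Leap status : Normal" means chrony is
# actually tracking a time source; anything else means not synced.
check_synced() {
  if grep -q '^Leap status.*Normal'; then
    echo synced
  else
    echo NOT-synced
  fi
}

# Example against a captured line; on a live host you would run:
#   chronyc tracking | check_synced
printf 'Leap status     : Normal\n' | check_synced
```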

2. cephadm --image quay.io/ceph/ceph:v16.2.10 bootstrap --dashboard-password-noupdate --mon-ip 10.40.10.200 --cluster-network=10.40.10.0/24 --skip-pull  --allow-overwrite --skip-monitoring-stack --allow-mismatched-release

3. Add the other hosts:

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2

    ceph orch host add node2 10.40.10.202 --labels=_admin

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@node-1

    ceph orch host add node-1 10.40.10.202 --labels=_admin

4. Add the OSDs:

    ceph orch daemon add osd node1:/dev/sda

    ceph orch daemon add osd node1:/dev/sdb

    ceph orch daemon add osd node1:/dev/sdc

    ceph orch daemon add osd node1:/dev/sdd

    ceph orch daemon add osd node2:/dev/sda

    ceph orch daemon add osd node2:/dev/sdb

    ceph orch daemon add osd node2:/dev/sdc

    ceph orch daemon add osd node2:/dev/sdd

5. ceph osd pool create test_pool 512 512 --size=3

6. Disconnect node1 (the monitor leader) from both the cluster and public networks:

    ifdown enp35s0f0

    ifdown enp35s0f1

7. Leave the cluster in this state overnight (more than three hours).

8. Reconnect node1 to the cluster and public networks:

    ifup enp35s0f0

    ifup enp35s0f1


Then the OSDs on node1 call get_auth_session_key against mon.node1, which is not the current leader (it is still probing after the reconnect), and receive tickets with a stale secret_id=4.

The other daemons then report "could not find secret_id=4" errors for more than 40 minutes.
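My understanding of the failure, as a toy sketch (not the real Ceph implementation): the verifying daemon keeps only a short window of rotating secrets (ids 17..19 in the dump_rotating output in the node2 OSD log), so a ticket minted from the stale keyring with secret_id=4 can never be matched.

```shell
# Toy model of the rotating-secret lookup that fails on node2 (not the
# real implementation): the verifier holds a small window of rotating
# key ids; a ticket carrying an id outside that window is rejected.
rotating_ids="17 18 19"   # ids currently held (as in 'auth: dump_rotating')
ticket_id=4               # secret_id inside the ticket from the stale mon

found=no
for id in $rotating_ids; do
  if [ "$id" = "$ticket_id" ]; then found=yes; fi
done
if [ "$found" = "no" ]; then
  echo "could not find secret_id=$ticket_id"
fi
```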




The ceph.log on node1:


2023-12-20T09:48:30.199224+0800 mon.node1 (mon.0) 1891 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)

2023-12-20T09:49:37.816260+0800 mon.node1 (mon.0) 1901 : cluster [INF] Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 1 daemon(s))

2023-12-20T09:49:44.222046+0800 mon.node1 (mon.0) 1902 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)

2023-12-20T09:50:00.000090+0800 mon.node1 (mon.0) 1905 : cluster [WRN] overall HEALTH_WARN Failed to place 1 daemon(s); 3 failed cephadm daemon(s); 1 pool(s) do not have an application enabled

2023-12-20T09:50:51.916464+0800 mon.node1 (mon.0) 1912 : cluster [INF] Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 1 daemon(s))

2023-12-20T09:51:07.127982+0800 mon.node1 (mon.0) 1916 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)

2023-12-21T01:13:55.241891+0800 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running

2023-12-21T01:14:05.739585+0800 mon.node1 (mon.0) 1969 : cluster [INF] mon.node1 calling monitor election

2023-12-21T01:14:05.852877+0800 mon.node2 (mon.1) 11595 : cluster [INF] mon.node2 calling monitor election

2023-12-21T01:14:05.853054+0800 mon.node-1 (mon.2) 36 : cluster [INF] mon.node-1 calling monitor election

2023-12-21T01:14:05.964608+0800 mon.node1 (mon.0) 1970 : cluster [INF] mon.node1 calling monitor election

2023-12-21T01:14:06.013934+0800 mon.node1 (mon.0) 1971 : cluster [INF] mon.node1 is new leader, mons node1,node2,node-1 in quorum (ranks 0,1,2)

2023-12-21T01:14:06.041158+0800 mon.node1 (mon.0) 1976 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum node2,node-1)


The ceph.log on node2:


2023-12-21T01:10:01.065051+0800 mon.node2 (mon.1) 11522 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)

2023-12-21T01:11:11.092640+0800 mon.node2 (mon.1) 11531 : cluster [INF] Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 1 daemon(s))

2023-12-21T01:11:19.197355+0800 mon.node2 (mon.1) 11532 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)

2023-12-21T01:12:34.469403+0800 mon.node2 (mon.1) 11547 : cluster [INF] Health check cleared: CEPHADM_DAEMON_PLACE_FAIL (was: Failed to place 1 daemon(s))

2023-12-21T01:12:42.476793+0800 mon.node2 (mon.1) 11549 : cluster [WRN] Health check failed: Failed to place 1 daemon(s) (CEPHADM_DAEMON_PLACE_FAIL)

2023-12-21T01:13:55.234163+0800 mon.node2 (mon.1) 11572 : cluster [INF] osd.5 marked itself dead as of e91

2023-12-21T01:13:55.236348+0800 mon.node2 (mon.1) 11573 : cluster [INF] osd.4 marked itself dead as of e91

2023-12-21T01:13:55.242209+0800 mon.node2 (mon.1) 11574 : cluster [INF] osd.0 marked itself dead as of e91

2023-12-21T01:13:55.233774+0800 osd.5 (osd.5) 3 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running

2023-12-21T01:13:55.236038+0800 osd.4 (osd.4) 3 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running

2023-12-21T01:14:00.109136+0800 mon.node2 (mon.1) 11589 : cluster [WRN] Health check update: Degraded data redundancy: 1/8 objects degraded (12.500%), 1 pg degraded, 188 pgs undersized (PG_DEGRADED)

2023-12-21T01:14:00.311644+0800 mon.node2 (mon.1) 11590 : cluster [INF] osd.3 marked itself dead as of e94

2023-12-21T01:14:00.311410+0800 osd.3 (osd.3) 3 : cluster [WRN] Monitor daemon marked osd.3 down, but it is still running

2023-12-21T01:14:01.922553+0800 mgr.node1.apderh (mgr.34343) 1 : cluster [ERR] Failed to load ceph-mgr modules: k8sevents

2023-12-21T01:13:55.241891+0800 osd.0 (osd.0) 3 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running

2023-12-21T01:14:05.739585+0800 mon.node1 (mon.0) 1969 : cluster [INF] mon.node1 calling monitor election

2023-12-21T01:14:05.852877+0800 mon.node2 (mon.1) 11595 : cluster [INF] mon.node2 calling monitor election

2023-12-21T01:14:05.853054+0800 mon.node-1 (mon.2) 36 : cluster [INF] mon.node-1 calling monitor election

2023-12-21T01:14:05.964608+0800 mon.node1 (mon.0) 1970 : cluster [INF] mon.node1 calling monitor election

2023-12-21T01:14:06.013934+0800 mon.node1 (mon.0) 1971 : cluster [INF] mon.node1 is new leader, mons node1,node2,node-1 in quorum (ranks 0,1,2)

2023-12-21T01:14:06.041158+0800 mon.node1 (mon.0) 1976 : cluster [INF] Health check cleared: MON_DOWN (was: 1/3 mons down, quorum node2,node-1)



The OSD log on node1:

2023-12-21T01:13:43.720+0800 7fcd27c2e640 -1 osd.0 83 heartbeat_check: no reply from 10.40.10.202:6804 osd.6 ever on either front or back, first ping sent 2023-12-21T01:13:23.148457+0800 (oldest deadline 2023-12-21T01:13:43.148457+0800)

2023-12-21T01:13:43.720+0800 7fcd27c2e640 -1 osd.0 83 heartbeat_check: no reply from 10.40.10.202:6812 osd.7 ever on either front or back, first ping sent 2023-12-21T01:13:23.148457+0800 (oldest deadline 2023-12-21T01:13:43.148457+0800)

2023-12-21T01:13:43.720+0800 7fcd27c2e640 -1 osd.0 83 heartbeat_check: no reply from 10.40.10.202:6820 osd.8 ever on either front or back, first ping sent 2023-12-21T01:13:23.148457+0800 (oldest deadline 2023-12-21T01:13:43.148457+0800)

2023-12-21T01:13:43.720+0800 7fcd27c2e640 -1 osd.0 83 heartbeat_check: no reply from 10.40.10.202:6828 osd.9 ever on either front or back, first ping sent 2023-12-21T01:13:23.148457+0800 (oldest deadline 2023-12-21T01:13:43.148457+0800)

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: reset

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: 0x562066bbc000 handle_response ret = 0

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client:  got initial server challenge 5fd5d9eca256ae69

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: validate_tickets: want=53 need=0 have=53

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: validate_tickets want 53 have 0 need 53

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: want=53 need=53 have=0

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: build_request

2023-12-21T01:13:44.680+0800 7fcd2bed8640 20 cephx client: old ticket len=96

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: get auth session key: client_challenge 7ca3f7c3ef30b2bf

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: 0x562066bbc000 handle_response ret = 0

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client:  get_auth_session_key

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply got 1 keys

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: got key for service_id auth

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply got encrypted ticket

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket.secret_id=2

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply service auth secret_id 2 session_key AQDIIINlUlSmKBAAvL5rKyyIb6vt89rxUGlsxg== validity=259200.000000

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket expires=2023-12-24T01:13:44.682217+0800 renew_after=2023-12-23T07:13:44.682217+0800

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client:  want=53 need=53 have=0

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client:  got connection bl 84 and extra tickets 550

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client:  got connection_secret 64 bytes

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply got 3 keys

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: got key for service_id mon

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket.secret_id=4

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply service mon secret_id 4 session_key AQDIIINlWvamKBAAxOaZsuCMdgutax1n2ZRbZQ== validity=3600.000000

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket expires=2023-12-21T02:13:44.682256+0800 renew_after=2023-12-21T01:58:44.682256+0800

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: got key for service_id osd

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket.secret_id=4

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply service osd secret_id 4 session_key AQDIIINlXAinKBAAOwTlViH3CJRgSvj5Kn9qtA== validity=3600.000000

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket expires=2023-12-21T02:13:44.682270+0800 renew_after=2023-12-21T01:58:44.682270+0800

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: got key for service_id mgr

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket.secret_id=4

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply service mgr secret_id 4 session_key AQDIIINlUiGnKBAAr6WBaSawzVyDoqEs8cENdg== validity=3600.000000

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: verify_service_ticket_reply ticket expires=2023-12-21T02:13:44.682286+0800 renew_after=2023-12-21T01:58:44.682286+0800

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client:  got extra service_tickets

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: validate_tickets want 53 have 53 need 0

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx: validate_tickets want 53 have 53 need 0

2023-12-21T01:13:44.680+0800 7fcd2bed8640 20 cephx client: need_tickets: want=53 have=53 need=0

2023-12-21T01:13:44.680+0800 7fcd2bed8640 -1 monclient: _check_auth_rotating possible clock skew, rotating keys expired way too early (before 2023-12-21T00:13:44.682376+0800)

2023-12-21T01:13:44.680+0800 7fcd2bed8640 10 cephx client: build_rotating_request

2023-12-21T01:13:44.719+0800 7fcd27c2e640 -1 osd.0 83 heartbeat_check: no reply from 10.40.10.201:6804 osd.1 ever on either front or back, first ping sent 2023-12-21T01:13:23.148457+0800 (oldest deadline 2023-12-21T01:13:43.148457+0800)

2023-12-21T01:13:44.719+0800 7fcd27c2e640 -1 osd.0 83 heartbeat_check: no reply from 10.40.10.201:6812 osd.2 ever on either front or back, first ping sent 2023-12-21T01:13:23.148457+0800 (oldest deadline 2023-12-21T01:13:43.148457+0800)

2023-12-21T01:13:44.719+0800 7fcd27c2e640 -1 osd.0 83 heartbeat_check: no reply from 10.40.10.200:6814 osd.3 ever on either front or back, first ping sent 2023-12-21T01:13:23.148457+0800 (oldest deadline 2023-12-21T01:13:43.148457+0800)




The OSD log on node2:


2023-12-21T01:13:51.255+0800 7fa573a6d640 10 cephx: validate_tickets want 53 have 53 need 0

2023-12-21T01:13:51.255+0800 7fa573a6d640 20 cephx client: need_tickets: want=53 have=53 need=0

2023-12-21T01:13:51.255+0800 7fa573a6d640 10 auth: dump_rotating:

2023-12-21T01:13:51.255+0800 7fa573a6d640 10 auth:  id 17 AQBr+4Jlx7z6GRAASZPpy2tmX0O0YI+hwmtuHg== expires 2023-12-21T00:34:21.152105+0800

2023-12-21T01:13:51.256+0800 7fa573a6d640 10 auth:  id 18 AQCBCYNlibFfLxAAHEf4/1kldHQcXRqZE7fkcA== expires 2023-12-21T01:34:25.794799+0800

2023-12-21T01:13:51.256+0800 7fa573a6d640 10 auth:  id 19 AQCOF4NlI5MZCxAAKOfv6wvo7f/HZbDIs20I/g== expires 2023-12-21T02:34:25.794799+0800

2023-12-21T01:13:52.170+0800 7fa581293640 20 AuthRegistry(0x7ffc86c1e7b0) get_handler peer_type 4 method 2 cluster_methods [2] service_methods [2] client_methods [2]

2023-12-21T01:13:52.170+0800 7fa581293640 10 cephx: verify_authorizer decrypted service osd secret_id=4

2023-12-21T01:13:52.170+0800 7fa581293640  0 auth: could not find secret_id=4

2023-12-21T01:13:52.170+0800 7fa581293640 10 auth: dump_rotating:

2023-12-21T01:13:52.170+0800 7fa581293640 10 auth:  id 17 AQBr+4Jlx7z6GRAASZPpy2tmX0O0YI+hwmtuHg== expires 2023-12-21T00:34:21.152105+0800

2023-12-21T01:13:52.170+0800 7fa581293640 10 auth:  id 18 AQCBCYNlibFfLxAAHEf4/1kldHQcXRqZE7fkcA== expires 2023-12-21T01:34:25.794799+0800

2023-12-21T01:13:52.170+0800 7fa581293640 10 auth:  id 19 AQCOF4NlI5MZCxAAKOfv6wvo7f/HZbDIs20I/g== expires 2023-12-21T02:34:25.794799+0800

2023-12-21T01:13:52.170+0800 7fa581293640  0 cephx: verify_authorizer could not get service secret for service osd secret_id=4

2023-12-21T01:13:52.256+0800 7fa573a6d640 10 cephx: validate_tickets want 53 have 53 need 0

2023-12-21T01:13:52.256+0800 7fa573a6d640 20 cephx client: need_tickets: want=53 have=53 need=0



The mon log on node1:


2023-12-21T01:13:44.424+0800 7fa7976e9640 10 start_session entity_name=mgr.node1.apderh global_id=14184 is_new_global_id=0

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx server mgr.node1.apderh: start_session server_challenge 524ce5eea3e4dd75

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx server mgr.node1.apderh: handle_request get_auth_session_key for mgr.node1.apderh

2023-12-21T01:13:44.424+0800 7fa7976e9640 20 cephx server mgr.node1.apderh:  checking key: req.key=f41952c0f02e6c0f expected_key=f41952c0f02e6c0f

2023-12-21T01:13:44.424+0800 7fa7976e9640 20 cephx server mgr.node1.apderh:  checking old_ticket: secret_id=2 len=112, old_ticket_may_be_omitted=0

2023-12-21T01:13:44.424+0800 7fa7976e9640 20 cephx server mgr.node1.apderh:  decoded old_ticket: global_id=14184

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx server mgr.node1.apderh:  allowing reclaim of global_id 14184 (valid ticket presented, will encrypt new ticket)

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx: build_service_ticket_reply encoding 1 tickets with secret AQBmKIJlvHeYJRAAf5h+auT4c9Ilo/nlUqRJlg==

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx: build_service_ticket service auth secret_id 2 ticket_info.ticket.name=mgr.node1.apderh ticket.global_id 14184

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_caps: name=mgr.node1.apderh

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_secret: num of caps=3

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx server mgr.node1.apderh:  adding key for service mon

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx server mgr.node1.apderh:  adding key for service mds

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_caps: name=mgr.node1.apderh

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_secret: num of caps=3

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx server mgr.node1.apderh:  adding key for service osd

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_caps: name=mgr.node1.apderh

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_secret: num of caps=3

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx server mgr.node1.apderh:  adding key for service mgr

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_caps: name=mgr.node1.apderh

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx keyserverdata: get_secret: num of caps=3

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx: build_service_ticket_reply encoding 4 tickets with secret AQDIIINldY1mGRAApH04rNRZtuyAUrQn5PNG7A==

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx: build_service_ticket service mon secret_id 4 ticket_info.ticket.name=mgr.node1.apderh ticket.global_id 14184

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx: build_service_ticket service mds secret_id 4 ticket_info.ticket.name=mgr.node1.apderh ticket.global_id 14184

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx: build_service_ticket service osd secret_id 4 ticket_info.ticket.name=mgr.node1.apderh ticket.global_id 14184

2023-12-21T01:13:44.424+0800 7fa7976e9640 10 cephx: build_service_ticket service mgr secret_id 4 ticket_info.ticket.name=mgr.node1.apderh ticket.global_id 14184

2023-12-21T01:13:44.433+0800 7fa7936e1640  0 mon.node1@0(probing) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node1.apderh/mirror_snapshot_schedule"} v 0) v1

2023-12-21T01:13:44.433+0800 7fa7936e1640  0 log_channel(audit) log [INF] : from='mgr.14184 10.40.10.200:0/2514043697' entity='mgr.node1.apderh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node1.apderh/mirror_snapshot_schedule"}]: dispatch

2023-12-21T01:13:44.434+0800 7fa7936e1640  0 mon.node1@0(probing) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node1.apderh/trash_purge_schedule"} v 0) v1

2023-12-21T01:13:44.434+0800 7fa7936e1640  0 log_channel(audit) log [INF] : from='mgr.14184 10.40.10.200:0/2514043697' entity='mgr.node1.apderh' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node1.apderh/trash_purge_schedule"}]: dispatch



The mon log on node2:


2023-12-21T01:13:10.292+0800 7f88c0ed0640  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1703092390293629, "job": 1145, "event": "table_file_deletion", "file_number": 2272}

2023-12-21T01:13:10.295+0800 7f88c0ed0640  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1703092390296385, "job": 1145, "event": "table_file_deletion", "file_number": 2270}

2023-12-21T01:13:10.303+0800 7f88c0ed0640  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1703092390304703, "job": 1145, "event": "table_file_deletion", "file_number": 2269}

2023-12-21T01:13:10.303+0800 7f88b66bb640  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1442] [default] Manual compaction starting

2023-12-21T01:13:10.304+0800 7f88b66bb640  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1442] [default] Manual compaction starting

2023-12-21T01:13:10.304+0800 7f88b66bb640  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1442] [default] Manual compaction starting

2023-12-21T01:13:10.304+0800 7f88b66bb640  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1442] [default] Manual compaction starting

2023-12-21T01:13:10.304+0800 7f88b66bb640  4 rocksdb: [db_impl/db_impl_compaction_flush.cc:1442] [default] Manual compaction starting

2023-12-21T01:13:13.434+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:13.434+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:18.435+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:18.435+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:23.436+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:23.436+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:28.437+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:28.437+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:32.438+0800 7f88baec4640  0 mon.node2@1(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node2.rnirud/trash_purge_schedule"} v 0) v1

2023-12-21T01:13:32.438+0800 7f88baec4640  0 log_channel(audit) log [INF] : from='mgr.14252 10.40.10.201:0/357060750' entity='mgr.node2.rnirud' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node2.rnirud/trash_purge_schedule"}]: dispatch

2023-12-21T01:13:32.440+0800 7f88baec4640  0 mon.node2@1(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node2.rnirud/mirror_snapshot_schedule"} v 0) v1

2023-12-21T01:13:32.440+0800 7f88baec4640  0 log_channel(audit) log [INF] : from='mgr.14252 10.40.10.201:0/357060750' entity='mgr.node2.rnirud' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/node2.rnirud/mirror_snapshot_schedule"}]: dispatch

2023-12-21T01:13:33.438+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:33.438+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:38.438+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:38.438+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:43.439+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:43.439+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:48.440+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:48.440+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:51.142+0800 7f88baec4640  0 mon.node2@1(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1

2023-12-21T01:13:51.143+0800 7f88baec4640  0 log_channel(audit) log [DBG] : from='mgr.14252 10.40.10.201:0/357060750' entity='mgr.node2.rnirud' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch

2023-12-21T01:13:51.144+0800 7f88baec4640  0 mon.node2@1(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1

2023-12-21T01:13:51.144+0800 7f88baec4640  0 log_channel(audit) log [DBG] : from='mgr.14252 10.40.10.201:0/357060750' entity='mgr.node2.rnirud' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch

2023-12-21T01:13:51.145+0800 7f88baec4640  0 mon.node2@1(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1

2023-12-21T01:13:51.145+0800 7f88baec4640  0 log_channel(audit) log [INF] : from='mgr.14252 10.40.10.201:0/357060750' entity='mgr.node2.rnirud' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch

2023-12-21T01:13:52.247+0800 7f88baec4640  0 mon.node2@1(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.node1}] v 0) v1

2023-12-21T01:13:52.260+0800 7f88b96c1640  0 log_channel(audit) log [INF] : from='mgr.14252 10.40.10.201:0/357060750' entity='mgr.node2.rnirud'

2023-12-21T01:13:53.440+0800 7f88bd6c9640  1 mon.node2@1(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 71303168 full_alloc: 71303168 kv_alloc: 876609536

2023-12-21T01:13:53.440+0800 7f88bd6c9640 20 cephx keyserver: prepare_rotating_update before: data.rotating_ver=17

2023-12-21T01:13:55.187+0800 7f88beecc640 10 start_session entity_name=osd.0 global_id=14205 is_new_global_id=0

2023-12-21T01:13:55.187+0800 7f88beecc640 10 cephx server osd.0: start_session server_challenge dba982c2bced072d

2023-12-21T01:13:55.187+0800 7f88bf6cd640 10 start_session entity_name=osd.3 global_id=14379 is_new_global_id=0

2023-12-21T01:13:55.187+0800 7f88bf6cd640 10 cephx server osd.3: start_session server_challenge e9db92f3f5af188c

2023-12-21T01:13:55.187+0800 7f88beecc640 10 start_session entity_name=mgr.node1.apderh global_id=14190 is_new_global_id=0

2023-12-21T01:13:55.187+0800 7f88beecc640 10 cephx server mgr.node1.apderh: start_session server_challenge b9809292ea2e62f6

2023-12-21T01:13:55.187+0800 7f88bf6cd640 10 start_session entity_name=osd.5 global_id=24253 is_new_global_id=0

2023-12-21T01:13:55.187+0800 7f88bf6cd640 10 cephx server osd.5: start_session server_challenge 1be9f89b4bd9cdbc

2023-12-21T01:13:55.188+0800 7f88b9ec2640 10 start_session entity_name=osd.4 global_id=14406 is_new_global_id=0

2023-12-21T01:13:55.188+0800 7f88b9ec2640 10 cephx server osd.4: start_session server_challenge cca4694d496c39a8

2023-12-21T01:13:55.188+0800 7f88beecc640 10 cephx server osd.0: handle_request get_auth_session_key for osd.0

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
