monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]

Dear Ladies and Gentlemen,

While running the teuthology integration test krbd:rbd with Ceph Pacific on IBM Z, teuthology.log shows the following error:

2021-12-15T23:31:27.175 DEBUG:teuthology.orchestra.run.m1306030:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --id 0 -p rbd map testimage.client.0
2021-12-15T23:31:27.210 INFO:teuthology.orchestra.run.m1306030.stderr:rbd: sysfs write failed
2021-12-15T23:31:27.211 INFO:teuthology.orchestra.run.m1306030.stderr:2021-12-15T23:31:27.189+0100 3ff8a91c900 0 -- 172.18.232.30:0/2608000554 >> [v2:172.18.232.35:3301/0,v1:172.18.232.35:6790/0] conn(0x2aa3edcc8b0 msgr2=0x2aa3edcccd0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=0).read_until injecting socket failure
2021-12-15T23:31:27.221 INFO:teuthology.orchestra.run.m1306030.stderr:2021-12-15T23:31:27.199+0100 3ff8a91c900 0 -- 172.18.232.30:0/3437601418 >> [v2:172.18.230.161:6805/12551,v1:172.18.230.161:6807/12551] conn(0x3ff60008900 msgr2=0x3ff6000ad50 crc :-1 s=STATE_CONNECTION_ESTABLISHED l=1).read_until injecting socket failure
2021-12-15T23:31:27.223 INFO:teuthology.orchestra.run.m1306030.stderr:2021-12-15T23:31:27.199+0100 3ff8a91c900 0 -- 172.18.232.30:0/3437601418 >> [v2:172.18.230.161:6805/12551,v1:172.18.230.161:6807/12551] conn(0x3ff60008900 msgr2=0x3ff64065fc0 unknown :-1 s=STATE_CONNECTION_ESTABLISHED l=1)._try_send injecting socket failure
2021-12-15T23:31:27.429 INFO:teuthology.orchestra.run.m1306030.stdout:In some cases useful info is found in syslog - try "dmesg | tail".
2021-12-15T23:31:27.430 INFO:teuthology.orchestra.run.m1306030.stderr:rbd: map failed: (22) Invalid argument
2021-12-15T23:31:27.436 DEBUG:teuthology.orchestra.run:got remote process result: 22
2021-12-15T23:31:27.437 ERROR:teuthology.contextutil:Saw exception from nested tasks
Traceback (most recent call last):
  File "/home/teuthology/src/teuthology_pacific/teuthology/contextutil.py", line 31, in nested
    vars.append(enter())
  File "/usr/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/home/teuthworker/src/github.com_ibm-s390-cloud_ceph_c39ba7d47040c91efe2793b55ab9465a9a4ec66b/qa/tasks/rbd.py", line 303, in dev_create
    remote.run(
  File "/home/teuthology/src/teuthology_pacific/teuthology/orchestra/remote.py", line 509, in run
    r = self._runner(client=self.ssh, name=self.shortname, **kwargs)
  File "/home/teuthology/src/teuthology_pacific/teuthology/orchestra/run.py", line 455, in run
    r.wait()
  File "/home/teuthology/src/teuthology_pacific/teuthology/orchestra/run.py", line 161, in wait
    self._raise_for_status()
  File "/home/teuthology/src/teuthology_pacific/teuthology/orchestra/run.py", line 181, in _raise_for_status
    raise CommandFailedError(
teuthology.exceptions.CommandFailedError: Command failed on m1306030 with status 22: 'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage rbd --id 0 -p rbd map testimage.client.0'
2021-12-15T23:31:27.437 INFO:tasks.rbd:Unloading rbd kernel module...


* The extracted ceph.conf file is:

[global]
chdir = ""
pid file = /var/run/ceph/$cluster-$name.pid
auth supported = cephx

filestore xattr use omap = true

mon clock drift allowed = 1.000

osd crush chooseleaf type = 0
auth debug = true

ms die on old message = true
ms die on bug = true

mon max pg per osd = 10000 # >= luminous
mon pg warn max object skew = 0

# disable pg_autoscaler by default for new pools
osd_pool_default_pg_autoscale_mode = off

osd pool default size = 2

mon osd allow primary affinity = true
mon osd allow pg remap = true
mon warn on legacy crush tunables = false
mon warn on crush straw calc version zero = false
mon warn on no sortbitwise = false
mon warn on osd down out interval zero = false
mon warn on too few osds = false
mon_warn_on_pool_pg_num_not_power_of_two = false
mon_warn_on_pool_no_redundancy = false
mon_allow_pool_size_one = true

osd pool default erasure code profile = "" technique=reed_sol_van k=2 m=1 ruleset-failure-domain=osd crush-failure-domain=osd

osd default data pool replay window = 5

mon allow pool delete = true

mon cluster log file level = debug
debug asserts on shutdown = true
mon health detail to clog = false
mon host = "172.18.232.35,[v2:172.18.232.35:3301,v1:172.18.232.35:6790],172.18.232.30"
mon client directed command retry = 5
ms die on skipped message = False
ms inject socket failures = 5000
fsid = 510167ca-dcce-469c-84d6-007840d05d93

[osd]
osd journal size = 100

osd scrub load threshold = 5.0
osd scrub max interval = 600

osd recover clone overlap = true
osd recovery max chunk = 1048576

osd debug shutdown = true
osd debug op order = true
osd debug verify stray on activate = true

osd open classes on start = true
osd debug pg log writeout = true

osd deep scrub update digest min age = 30

osd map max advance = 10

journal zero on create = true

filestore ondisk finisher threads = 3
filestore apply finisher threads = 3

bdev debug aio = true
osd debug misdirected ops = true
bdev async discard = True
bdev enable discard = True
bluestore allocator = bitmap
bluestore block size = 96636764160
bluestore fsck on mount = True
debug bluefs = 1/20
debug bluestore = 1/20
debug ms = 1
debug osd = 20
debug rocksdb = 4/10
mon osd backfillfull_ratio = 0.85
mon osd full ratio = 0.9
mon osd nearfull ratio = 0.8
osd failsafe full ratio = 0.95
osd objectstore = bluestore
osd shutdown pgref assert = True

[mgr]
debug ms = 1
debug mgr = 20
debug mon = 20
debug auth = 20
mon reweight min pgs per osd = 4
mon reweight min bytes per osd = 10
mgr/telemetry/nag = false

[mon]
debug ms = 1
debug mon = 20
debug paxos = 20
debug auth = 20
mon data avail warn = 5
mon mgr mkfs grace = 240
mon reweight min pgs per osd = 4
mon osd reporter subtree level = osd
mon osd prime pg temp = true
mon reweight min bytes per osd = 10

# rotate auth tickets quickly to exercise renewal paths
auth mon ticket ttl = 660 # 11m
auth service ticket ttl = 240 # 4m

# don't complain about insecure global_id in the test suite
mon_warn_on_insecure_global_id_reclaim = false
mon_warn_on_insecure_global_id_reclaim_allowed = false

[client]
rgw cache enabled = true
rgw enable ops log = true
rgw enable usage log = true
log file = /var/log/ceph/$cluster-$name.$pid.log
admin socket = /var/run/ceph/$cluster-$name.$pid.asok
rbd default features = 37
rbd default map options = ms_mode=legacy
[mon.a]
[mon.c]
[mon.b]


* The extracted commands are:

bash# ceph-authtool --create-keyring --gen-key --name=client.0 run/cluster1/ceph.client.0.keyring
creating run/cluster1/ceph.client.0.keyring

bash# ceph-authtool run/cluster1/ceph.client.0.keyring --name=client.0 --cap mon 'allow rw' --cap mgr 'allow r' --cap osd 'allow rwx' --cap mds allow

bash# ceph osd pool create rbd 8
pool 'rbd' created

bash# ceph osd pool application enable rbd rbd --yes-i-really-mean-it
enabled application 'rbd' on pool 'rbd'

bash# ceph tell osd.0 flush_pg_stats
25769803795

bash# ceph osd last-stat-seq osd.0
25769803794

bash# rbd -p rbd create --size 10240 testimage.client.0

bash# rbd map --pool rbd testimage.client.0 --id 0 --keyring run/cluster1/ceph.client.0.keyring
2022-05-12T07:28:45.994+0000 3ffa15e9430 -1 WARNING: all dangerous and experimental features are enabled.
2022-05-12T07:28:45.994+0000 3ffa15e9430 -1 WARNING: all dangerous and experimental features are enabled.
2022-05-12T07:28:45.994+0000 3ff94b9c900 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
2022-05-12T07:28:45.994+0000 3ff8ffff900 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
2022-05-12T07:28:45.994+0000 3ff9539d900 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [2]
rbd: couldn't connect to the cluster!

The output seems to be related to cephx auth.
ceph.conf contains
auth supported = cephx
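
Since the key was generated locally with ceph-authtool but (as far as I can see) never registered with the monitors, a quick way to confirm the suspicion is to ask the cluster for the client.0 entry. A minimal check, using the same client name as above:

bash# ceph auth get client.0

Before the import described below this should fail (ENOENT); after it, it should print the client.0 key and its mon/mgr/osd caps.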


Analyzing the RH documentation
a) https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html/administration_guide/ceph-user-management
b) https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.2.3/html/ceph_block_device/map-a-block-device

the user's keyring has to be imported (a):

ceph auth import -i /PATH/TO/KEYRING
example:
[root@mon ~]# ceph auth import -i /etc/ceph/ceph.keyring

and the mapping command has to include the keyring (b):
sudo rbd map --pool rbd myimage --id admin --keyring /path/to/keyring


With those commands the mapping works:

bash# ceph auth import -i run/cluster1/ceph.client.0.keyring
imported keyring

bash# rbd map --pool rbd testimage.client.0 --id 0 --keyring run/cluster1/ceph.client.0.keyring
/dev/rbd0
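
For completeness, this is how I verify and clean up the mapping afterwards (just a sketch, assuming the device came up as /dev/rbd0 as above):

bash# rbd showmapped
bash# rbd unmap /dev/rbd0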

Could you please tell me what other settings are applied on the teuthology test nodes to make this work?
Are the integration tests outdated with respect to the RH documentation (apparently not, since the tests succeed at pulpito.ceph.com)?



Thank you very much for your help!


Best regards,
Alex
