One addition from my test: I believe I misinterpreted my results
because my test image was named "test" and the client "TEST", so the
rbd_id.<IMAGE> is indeed upper-case for an image that has an
upper-case name. So please disregard my earlier comment about that.
Another question though: does the image you're trying to map actually
contain the object_prefix you have in your caps? Can you paste the
output of 'rbd info hdb_backup/VCT'?
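The check suggested above can be sketched as a small script: compare the image id that `rbd info` reports (its `block_name_prefix`) against the id embedded in the client's osd caps. The `rbd info` output below is a hypothetical stand-in for a live cluster, with values modeled on this thread; on a real system you would capture it with `rbd_info=$(rbd info hdb_backup/VCT)`.

```shell
# Sample 'rbd info' output (hypothetical; stands in for a live cluster)
rbd_info='rbd image '\''VCT'\'':
        size 1 TiB in 262144 objects
        order 22 (4 MiB objects)
        block_name_prefix: rbd_data.b768d4baac048b'

# Prefix taken from the client's osd caps (rbd_data.<id>)
caps_prefix="rbd_data.b768d4baac048b"

# Extract the block_name_prefix the image itself reports
image_prefix=$(printf '%s\n' "$rbd_info" | awk '/block_name_prefix:/ {print $2}')

if [ "$image_prefix" = "$caps_prefix" ]; then
    echo "caps prefix matches image: $image_prefix"
else
    echo "MISMATCH: image uses $image_prefix, caps allow $caps_prefix"
fi
```

If the two ids differ (e.g. the image was recreated at some point, giving it a new id), the caps would deny access even though the image name still matches.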
Quoting Thomas Schneider <74cmonty@xxxxxxxxx>:
Actually, I didn't try other caps.
The setup of RBD images and authorizations is automated with a bash
script that worked in the past without issues.
I need to understand the root cause in order to adapt the script accordingly.
On 23.02.2023 at 17:55, Eugen Block wrote:
And did you already try the other caps? Do those work?
Quoting Thomas Schneider <74cmonty@xxxxxxxxx>:
Confirmed.
# ceph versions
{
    "mon": {
        "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)": 3
    },
    "mgr": {
        "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)": 3
    },
    "osd": {
        "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)": 437
    },
    "mds": {
        "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)": 7
    },
    "overall": {
        "ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)": 450
    }
}
On 23.02.2023 at 17:33, Eugen Block wrote:
And the ceph cluster has the same version? 'ceph versions' shows
all daemons. If the cluster is also 14.2.x, the caps should work
with the lower-case rbd_id. Can you confirm?
Quoting Thomas Schneider <74cmonty@xxxxxxxxx>:
This is
# ceph --version
ceph version 14.2.22 (877fa256043e4743620f4677e72dee5e738d1226) nautilus (stable)
On 23.02.2023 at 16:47, Eugen Block wrote:
Which ceph version is this? In a Nautilus cluster it works for
me with the lower-case rbd_id, in Pacific it doesn't. I don't
have an Octopus cluster at hand.
Quoting Eugen Block <eblock@xxxxxx>:
I tried to recreate this restrictive client access; one thing I
noticed is that the rbd_id is in lower-case. I created a test client
named "TEST":
storage01:~ # rados -p pool ls | grep -vE "5473cdeb5c62c|1f553ba0f6222" | grep test
rbd_id.test
But after adding all necessary caps I'm still not allowed to
get the image info:
client:~ # rbd -p pool info test --id TEST --keyring /etc/ceph/ceph.client.TEST.keyring
2023-02-23T16:35:16.740+0100 7faebaffd700 -1 librbd::mirror::GetInfoRequest: 0x556072a66560 handle_get_mirror_image: failed to retrieve mirroring state: (1) Operation not permitted
rbd: info: (1) Operation not permitted
And I don't have rbd-mirror enabled in this cluster, so that's
kind of strange... I'll try to find out which other caps it
requires. I already disabled all image features, but to no avail.
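An assumption worth testing, not a verified fix: the "failed to retrieve mirroring state" EPERM suggests newer librbd also reads pool-level metadata objects (such as 'rbd_mirroring' and 'rbd_info') that per-image object_prefix caps do not cover. A sketch of extending the caps for the test client above (pool and client names from this thread; `<id>` is the image's id and stays a placeholder; requires a live cluster and an admin keyring):

```shell
# Hypothetical: grant read on pool-level metadata objects in addition to
# the per-image prefixes, to see whether the EPERM on info/map goes away.
ceph auth caps client.TEST \
    mon 'allow r' \
    osd 'allow rwx pool pool object_prefix rbd_data.<id>; allow rwx pool pool object_prefix rbd_header.<id>; allow rx pool pool object_prefix rbd_id.test; allow r pool pool object_prefix rbd_info; allow r pool pool object_prefix rbd_mirroring'
```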
Quoting Thomas Schneider <74cmonty@xxxxxxxxx>:
I'll delete the existing "VCT" authentication and its caps and
recreate it.
Just to be sure: there's no ingress communication to the
client (from the Ceph server)?
On 23.02.2023 at 16:01, Eugen Block wrote:
For rbd commands you don't specify the "client" prefix for
the --id parameter, just the client name, in your case
"VCT". Your second approach shows a different error message,
so it can connect as "VCT" successfully, but the
permissions seem not to be sufficient. Those caps look very
restrictive; I'm not sure which one prevents the map command, though.
Quoting Thomas Schneider <74cmonty@xxxxxxxxx>:
Hm... I'm not sure about the correct rbd command syntax,
but I thought it was correct.
Anyway, using a different ID fails, too:
# rbd map hdb_backup/VCT --id client.VCT --keyring /etc/ceph/ceph.client.VCT.keyring
rbd: couldn't connect to the cluster!
# rbd map hdb_backup/VCT --id VCT --keyring /etc/ceph/ceph.client.VCT.keyring
2023-02-23T15:46:16.848+0100 7f222d19d700 -1 librbd::image::GetMetadataRequest: 0x7f220c001ef0 handle_metadata_list: failed to retrieve image metadata: (1) Operation not permitted
2023-02-23T15:46:16.848+0100 7f222d19d700 -1 librbd::image::RefreshRequest: failed to retrieve pool metadata: (1) Operation not permitted
2023-02-23T15:46:16.848+0100 7f222d19d700 -1 librbd::image::OpenRequest: failed to refresh image: (1) Operation not permitted
2023-02-23T15:46:16.848+0100 7f222c99c700 -1 librbd::ImageState: 0x5569d8a16ba0 failed to open image: (1) Operation not permitted
rbd: error opening image VCT: (1) Operation not permitted
On 23.02.2023 at 15:30, Eugen Block wrote:
You don't specify which client in your rbd command:
rbd map hdb_backup/VCT --id client --keyring /etc/ceph/ceph.client.VCT.keyring
Have you tried this (not sure about upper-case client
names, haven't tried that)?
rbd map hdb_backup/VCT --id VCT --keyring /etc/ceph/ceph.client.VCT.keyring
Quoting Thomas Schneider <74cmonty@xxxxxxxxx>:
Hello,
I'm trying to map an RBD image using 'rbd map', but I get this
error message:
# rbd map hdb_backup/VCT --id client --keyring /etc/ceph/ceph.client.VCT.keyring
rbd: couldn't connect to the cluster!
Checking on the Ceph server, the required permissions for the
relevant keyring exist:
# ceph-authtool -l /etc/ceph/ceph.client.VCT.keyring
[client.VCT]
key = AQBj3LZjNGn/BhAAG8IqMyH0WLKi4kTlbjiW7g==
# ceph auth get client.VCT
[client.VCT]
key = AQBj3LZjNGn/BhAAG8IqMyH0WLKi4kTlbjiW7g==
caps mon = "allow r"
caps osd = "allow rwx pool hdb_backup object_prefix rbd_data.b768d4baac048b; allow rwx pool hdb_backup object_prefix rbd_header.b768d4baac048b; allow rx pool hdb_backup object_prefix rbd_id.VCT"
exported keyring for client.VCT
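For reference, caps like the ones above can be (re)applied with `ceph auth caps` (names and the image id are taken from this thread; this is a sketch that needs a live cluster and an admin keyring, not something verified here):

```shell
# Reapply the restrictive per-image caps shown above
ceph auth caps client.VCT \
    mon 'allow r' \
    osd 'allow rwx pool hdb_backup object_prefix rbd_data.b768d4baac048b; allow rwx pool hdb_backup object_prefix rbd_header.b768d4baac048b; allow rx pool hdb_backup object_prefix rbd_id.VCT'

# Less restrictive alternative known to work with rbd map:
# grant the standard rbd profile, scoped to the pool.
ceph auth caps client.VCT \
    mon 'profile rbd' \
    osd 'profile rbd pool=hdb_backup'
```

The `profile rbd` variant allows access to all images in the pool, so it trades the per-image restriction for compatibility with what librbd actually reads.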
Can you please advise how to fix this error?
THX
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx