Hi,
radosgw-admin -v
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
Multisite sync is something I had working with a previous cluster and an earlier Ceph version, but it isn't working now, and I can't understand why.
If anyone has an idea of a possible cause, I would be grateful for a clue.
I have clusters set up using Rook, but as far as I can tell, that's not a factor.
On the primary cluster, I have this:
radosgw-admin zonegroup get --rgw-zonegroup zonegroup-a
{
    "id": "b115d74a-2d5f-4127-b621-0223f1e96c71",
    "name": "zonegroup-a",
    "api_name": "zonegroup-a",
    "is_master": "true",
    "endpoints": [
        "http://192.168.30.8:80"
    ],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "024687e0-1461-4f45-9149-9e571791c2b3",
    "zones": [
        {
            "id": "024687e0-1461-4f45-9149-9e571791c2b3",
            "name": "zone-a",
            "endpoints": [
                "http://192.168.30.8:80"
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        },
        {
            "id": "6ba0ee26-0155-48f9-b057-2803336f0d66",
            "name": "zone-b",
            "endpoints": [
                "http://192.168.30.108:80"
            ],
            "log_meta": "false",
            "log_data": "true",
            "bucket_index_max_shards": 11,
            "read_only": "false",
            "tier_type": "",
            "sync_from_all": "true",
            "sync_from": [],
            "redirect_zone": ""
        }
    ],
    "placement_targets": [
        {
            "name": "default-placement",
            "tags": [],
            "storage_classes": [
                "STANDARD"
            ]
        }
    ],
    "default_placement": "default-placement",
    "realm_id": "8c38fa05-c19d-4e30-bc98-e2bc84eccb68",
    "sync_policy": {
        "groups": []
    }
}
It's identical on the secondary (that's after a realm pull, an update of the zone-b endpoints, and a period commit), which I double-checked by piping the output to md5sum on both sides.
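Concretely, the check was something like this on each side, and the two checksums agreed:
radosgw-admin zonegroup get --rgw-zonegroup zonegroup-a | md5sum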
The system user created on the primary is:
radosgw-admin user info --uid realm-a-system-user
{
    ...
    "keys": [
        {
            "user": "realm-a-system-user",
            "access_key": "IUs+USI5IjA8WkZPRjU=",
            "secret_key": "PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
        }
    ...
}
The zones on both sides have the same keys:
radosgw-admin zone get --rgw-zone zone-a
{
    ...
    "system_key": {
        "access_key": "IUs+USI5IjA8WkZPRjU=",
        "secret_key": "PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
    },
    ...
}
radosgw-admin zone get --rgw-zonegroup zonegroup-a --rgw-zone zone-b
{
    ...
    "system_key": {
        "access_key": "IUs+USI5IjA8WkZPRjU=",
        "secret_key": "PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
    },
    ...
}
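For completeness, in case my steps are suspect: the system keys were attached to the zones in the usual way, i.e. something along the lines of
radosgw-admin zone modify --rgw-zone zone-a --access-key 'IUs+USI5IjA8WkZPRjU=' --secret-key 'PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=='
radosgw-admin period update --commit
with the equivalent for zone-b on the secondary.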
Yet, on the secondary:
radosgw-admin sync status
realm 8c38fa05-c19d-4e30-bc98-e2bc84eccb68 (realm-a)
zonegroup b115d74a-2d5f-4127-b621-0223f1e96c71 (zonegroup-a)
zone 6ba0ee26-0155-48f9-b057-2803336f0d66 (zone-b)
metadata sync preparing for full sync
full sync: 64/64 shards
full sync: 0 entries to sync
incremental sync: 0/64 shards
metadata is behind on 64 shards
behind shards: [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63]
data sync source: 024687e0-1461-4f45-9149-9e571791c2b3 (zone-a)
syncing
full sync: 0/128 shards
incremental sync: 128/128 shards
data is caught up with source
and on the primary:
radosgw-admin sync status
realm 8c38fa05-c19d-4e30-bc98-e2bc84eccb68 (realm-a)
zonegroup b115d74a-2d5f-4127-b621-0223f1e96c71 (zonegroup-a)
zone 024687e0-1461-4f45-9149-9e571791c2b3 (zone-a)
metadata sync no sync (zone is master)
2020-11-06T10:58:46.345+0000 7fa805c201c0 0 data sync zone:6ba0ee26 ERROR: failed to fetch datalog info
data sync source: 6ba0ee26-0155-48f9-b057-2803336f0d66 (zone-b)
failed to retrieve sync info: (13) Permission denied
Given that all the keys above match, that "permission denied" is a mystery to me, but it does accord with:
export AWS_ACCESS_KEY_ID="IUs+USI5IjA8WkZPRjU="
export AWS_SECRET_ACCESS_KEY="PGRDSzRERD4lbF9AYThuLzkvW1QvL148Q147PA=="
s3cmd ls --no-ssl --host-bucket= --host=192.168.30.8 # OK, but:
s3cmd ls --no-ssl --host-bucket= --host=192.168.30.108
# ERROR: S3 error: 403 (InvalidAccessKeyId)
# Although
curl -L http://192.168.30.108 # works: <?xml version="1.0" encoding="UTF-8 ...
192.168.30.108 is the external IP, but just to be certain I was hitting zone-b, I also tried from within the cluster using its internal IP:
s3cmd ls --no-ssl --host-bucket= --host=10.41.157.115
# ERROR: S3 error: 403 (InvalidAccessKeyId)
This seems to be the reason it's not syncing, but why?
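One check that might narrow it down (a sketch only, I haven't captured its output here) is whether the system user's metadata is present on the secondary at all, assuming the gateway there authenticates against its local copy:
# on the secondary cluster
radosgw-admin user info --uid realm-a-system-user
If that returned no user, it would at least be consistent with the InvalidAccessKeyId.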
The user with those keys existed on the primary before the realm pull, in agreement with every procedure I have seen for setting up multisite.
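Spelling those secondary-side steps out, they were roughly this (system keys elided):
radosgw-admin realm pull --url http://192.168.30.8:80 --access-key ... --secret-key ...
radosgw-admin zone modify --rgw-zone zone-b --endpoints http://192.168.30.108:80
radosgw-admin period update --commit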
Any suggestions?
Regards,
Michael