Re: Public Swift yielding errors since 14.2.12

Greetings Vladimir, 

> Do you have anything interesting in rgw debug log (debug rgw = 20) or in
> keystone log?
This is with RadosGW 14.2.14: 

2020-11-27 11:10:17.582 7f2406c6c700 5 Searching permissions for uid=anonymous 
2020-11-27 11:10:17.582 7f2406c6c700 5 Permissions for user not found 
2020-11-27 11:10:17.582 7f2406c6c700 5 Searching permissions for group=1 mask=49 
2020-11-27 11:10:17.582 7f2406c6c700 5 Permissions for group not found 
2020-11-27 11:10:17.582 7f2406c6c700 5 req 18 0.000s swift:list_bucket -- Getting permissions done for identity=rgw::auth::ThirdPartyAccountApplier(e5162d3caf094a159d00d80418f6f1c4) -> rgw::auth::SysReqApplier -> rgw::auth::LocalApplier(acct_user=anonymous, acct_name=, subuser=, perm_mask=15, is_admin=0), owner=e5162d3caf094a159d00d80418f6f1c4$anonymous, perm=0 
2020-11-27 11:10:17.582 7f2406c6c700 10 req 18 0.000s swift:list_bucket identity=rgw::auth::ThirdPartyAccountApplier(e5162d3caf094a159d00d80418f6f1c4) -> rgw::auth::SysReqApplier -> rgw::auth::LocalApplier(acct_user=anonymous, acct_name=, subuser=, perm_mask=15, is_admin=0) requested perm (type)=1, policy perm=0, user_perm_mask=1, acl perm=0 
2020-11-27 11:10:17.582 7f2406c6c700 20 op->ERRORHANDLER: err_no=-13 new_err_no=-13 
2020-11-27 11:10:17.582 7f2406c6c700 2 req 18 0.000s swift:list_bucket op status=0 
2020-11-27 11:10:17.582 7f2406c6c700 2 req 18 0.000s swift:list_bucket http status=403 
2020-11-27 11:10:17.582 7f2406c6c700 1 ====== req done req=0x7f2406c657f0 op status=0 http_status=403 latency=0s ====== 
2020-11-27 11:10:17.582 7f2406c6c700 20 process_request() returned -13 
2020-11-27 11:10:17.582 7f2406c6c700 1 civetweb: 0x55bde0d96000: 10.72.0.124 - - [27/Nov/2020:11:10:17 +0200] "GET /swift/v1/AUTH_e5162d3caf094a159d00d80418f6f1c4/publictesti4/ HTTP/1.1" 403 335 - curl/7.29.0 
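For reference, the log excerpt above was captured with elevated RGW logging. A sketch of how to raise it at runtime through the admin socket (the daemon name below follows the [client.rgw.HOSTNAME.ZONE] section from our config; adjust to your instance):

```shell
# Raise RGW debug logging to 20 on a running daemon via its admin socket
ceph daemon /var/run/ceph/ceph-client.rgw.HOSTNAME.ZONE.asok \
    config set debug_rgw 20

# Revert once the failing request has been captured
ceph daemon /var/run/ceph/ceph-client.rgw.HOSTNAME.ZONE.asok \
    config set debug_rgw 1
```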

> Could you provide the full ceph.conf?
Here you go: 

[client.rgw.HOSTNAME.ZONE] 
rgw_period_root_pool = ZONE.rgw.root 
delay_auth_decision = true 
rgw_swift_versioning_enabled = true 
rgw_swift_account_in_url = true 
rgw_num_rados_handles = 16 
rgw_region_root_pool = ZONE.rgw.root 
keyring = /etc/ceph/ceph.client.rgw.HOSTNAME.ZONE.keyring 
rgw_dns_name = SERVICE_FQDN 
rgw_trust_forwarded_https = true 
rgw_zone = ZONE 
log_to_syslog = true 
rgw_frontends = civetweb num_threads=4096 port=7480 
rgw_realm_root_pool = ZONE.rgw.root 
rgw_s3_auth_use_keystone = true 
rgw_keystone_api_version = 3 
rgw_realm = REALM 
rgw_zonegroup_root_pool = ZONE.rgw.root 
user = ceph 
rgw_keystone_url = https://<keystone url>:35357 
rgw_zone_root_pool = ZONE.rgw.root 
rgw_keystone_implicit_tenants = false 
rgw_s3_auth_order = local,external 
rgw_swift_url = http://<rgw specific local fqdn>:7480 
rgw_bucket_default_quota_max_size = -1 
rgw_keystone_token_cache_size = 500 
rgw_swift_enforce_content_length = true 
rgw_zonegroup = ZONEGROUP 
log_file = /var/log/ceph/radosgw.log 
rgw_bucket_default_quota_max_objects = 500000 
rgw_user_default_quota_max_size = 10995116278000 
rgw_user_default_quota_max_objects = -1 
host = HOSTNAME 
rgw_keystone_accepted_roles = object_store_user 
rgw_thread_pool_size = 4096 
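For completeness, the bucket was made public with a standard Swift read ACL, and the failing anonymous request from the log above can be reproduced as follows (bucket name and tenant taken from the civetweb log line; the swift CLI must be authenticated against Keystone when setting the ACL):

```shell
# Make the container world-readable and listable via the Swift read ACL
swift post publictesti4 --read-acl ".r:*,.rlistings"

# Anonymous listing: should return 200, but returns 403 on 14.2.12+
curl -i "http://<rgw specific local fqdn>:7480/swift/v1/AUTH_e5162d3caf094a159d00d80418f6f1c4/publictesti4/"
```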

> As of today, I suspect, that could be a Keystone problem talking to the new Ceph
> releases 14.2.12+ in your case and Octopus 15.2.x in my.
I'm not sure I understand this suspicion. When doing an authenticated Swift call, RadosGW reaches out to Keystone to validate tokens -- this can be confirmed with tcpdump on the Keystone port and/or the logs. Conversely, when accessing a public bucket/object, RadosGW (at least when testing with our current 14.2.11) determines internally that the bucket is public, and no calls to Keystone are made. So from our perspective this does look like a regression in the RadosGW Swift internals rather than in the Keystone integration, but as mentioned we are happy to hear pointers if the above config is wrong. 
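The "no Keystone traffic for public access" observation can be checked like this (port 35357 matching rgw_keystone_url in the config above; run the capture on the RGW host):

```shell
# Terminal 1: watch for any RGW-to-Keystone traffic
tcpdump -ni any 'tcp port 35357'

# Terminal 2: anonymous request against the public container;
# on a working release no packets appear in terminal 1
curl -s -o /dev/null -w '%{http_code}\n' \
    "http://<rgw specific local fqdn>:7480/swift/v1/AUTH_e5162d3caf094a159d00d80418f6f1c4/publictesti4/"
```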

BR, 
Jukka 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
