Re: RadosGW/Keystone integration issues

Hello, community
I'm still investigating the RGW/Keystone integration issue.

In addition to the info below, I found the following in the RadosGW log file.
For the bucket accessed via its public URL (which fails):
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 20 req 115 0s swift:list_bucket
rgw::auth::swift::DefaultStrategy: trying
rgw::auth::swift::SwiftAnonymousEngine
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 20 req 115 0s swift:list_bucket
rgw::auth::swift::SwiftAnonymousEngine granted access
2020-08-03T16:26:54.317+0000 7fd4d6c9a700  2 req 115 0s swift:list_bucket
normalizing buckets and tenants
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 10 s->object=<NULL> s->bucket=containerA
2020-08-03T16:26:54.317+0000 7fd4d6c9a700  2 req 115 0s swift:list_bucket
init permissions
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 20 get_system_obj_state:
rctx=0x7fd59fe3ab18 obj=default.rgw.meta:root:containerA
state=0x55bccaea2e20 s->prefetch_data=0
2020-08-03T16:26:54.317+0000 7fd4d6c9a700 10 cache get:
name=default.rgw.meta+root+containerA : expiry miss
2020-08-03T16:26:54.318+0000 7fd4d5c98700 10 cache put:
name=default.rgw.meta+root+containerA info.flags=0x0
2020-08-03T16:26:54.318+0000 7fd4d5c98700 10 adding
default.rgw.meta+root+containerA to cache LRU end
2020-08-03T16:26:54.318+0000 7fd4d5c98700 10 req 115 0.001000010s
init_permissions on :[]) failed, ret=-2002
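If I read the return code right, ret=-2002 is RGW's ERR_NO_SUCH_BUCKET, so the anonymous request really is looking up a bucket that does not exist under the empty tenant (note s->bucket=containerA, with no tenant prefix). A quick check on the RGW node shows whether the bucket exists without a tenant at all (just a sketch, bucket name taken from the log above):

# lookup with no tenant, the way the anonymous request resolves it
radosgw-admin bucket stats --bucket=containerA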

For the same bucket accessed by a Keystone user (from Horizon):

2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 req 109 0s swift:list_bucket
rgw::auth::keystone::TokenEngine granted access
2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 get_system_obj_state:
rctx=0x7fd59fe3b778
obj=default.rgw.meta:users.uid:7c0fddbf5297463e9364ee3aed681077$7c0fddbf5297463e9364ee3aed681077
state=0x55bcca5cc0a0 s->prefetch_data=0
2020-08-03T16:24:14.853+0000 7fd4f24d1700 10 cache get:
name=default.rgw.meta+users.uid+7c0fddbf5297463e9364ee3aed681077$7c0fddbf5297463e9364ee3aed681077
: hit (requested=0x6, cached=0x7)
2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 get_system_obj_state:
s->obj_tag was set empty
2020-08-03T16:24:14.853+0000 7fd4f24d1700 10 cache get:
name=default.rgw.meta+users.uid+7c0fddbf5297463e9364ee3aed681077$7c0fddbf5297463e9364ee3aed681077
: hit (requested=0x1, cached=0x7)
2020-08-03T16:24:14.853+0000 7fd4f24d1700  2 req 109 0s swift:list_bucket
normalizing buckets and tenants

2020-08-03T16:24:14.853+0000 7fd4f24d1700 10 s->object=<NULL> s->bucket=7c0fddbf5297463e9364ee3aed681077/containerA
2020-08-03T16:24:14.853+0000 7fd4f24d1700  2 req 109 0s swift:list_bucket init permissions
2020-08-03T16:24:14.853+0000 7fd4f24d1700 20 get_system_obj_state:
rctx=0x7fd59fe3ab18
obj=default.rgw.meta:root:7c0fddbf5297463e9364ee3aed681077/containerA
state=0x55bcca5cc0a0 s->prefetch_data=0
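
So the authenticated request resolves the bucket as 7c0fddbf5297463e9364ee3aed681077/containerA (Keystone project id used as the tenant), while the anonymous request resolves plain containerA before failing. To narrow this down I am comparing the tenant-qualified bucket lookup with a direct anonymous request. This is only a sketch: the http endpoint comes from the beast frontend in the config below, the AUTH_<project_id> path is my assumption of what the Horizon public link should look like with rgw swift account in url = true, and I am assuming radosgw-admin accepts the tenant/bucket notation here:

# does the bucket exist under the Keystone project tenant?
radosgw-admin bucket stats --bucket=7c0fddbf5297463e9364ee3aed681077/containerA

# anonymous request with the account (tenant) in the URL
curl -i http://10.10.200.179:8080/swift/v1/AUTH_7c0fddbf5297463e9364ee3aed681077/containerA

If the bucket stats succeed but the anonymous request still returns NoSuchBucket, it looks like the tenant from the account part of the URL is not being applied once SwiftAnonymousEngine grants access.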

On Mon, Aug 3, 2020 at 9:40 AM Vladimir Sigunov <vladimir.sigunov@xxxxxxxxx>
wrote:

> Hello community,
>
> I'm trying to integrate Ceph RadosGW with OpenStack Keystone. Everything is
> working as expected, except that when I try to reach a public bucket via the
> public link generated in Horizon, I get a permanent ‘NoSuchBucket’ error.
> However, the bucket and all its content do exist: I can access it as an
> authenticated user in Horizon, I can access it as an authenticated user via
> S3 browser/aws cli, and I can see it with radosgw-admin bucket list --bucket
> <bucket>. We are running OpenStack Rocky, and the issue appears only with
> Ceph Octopus 15.2.4 (there were no issues with RGW on Nautilus or Luminous).
>
> Here is my configuration file:
>
>  <...>
>
> [client.rgw.ceph-hdd-9.rgw0]
>
> host = ceph-hdd-9
>
> keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-hdd-9.rgw0/keyring
>
> log file = /var/log/ceph/ceph-rgw-ceph-hdd-9.rgw0.log
>
> rgw frontends = beast endpoint=10.10.200.179:8080
>
> rgw thread pool size = 512
>
>
>
> rgw zone = default
>
>
>
> rgw keystone api version = 3
>
> rgw keystone url = https://<keystone url>:13000
>
> rgw keystone accepted roles = admin, _member_, Member, member, creator,
> swiftoperator
>
> rgw keystone accepted admin roles = admin, _member_
>
> #rgw keystone token cache size = 0
>
> #rgw keystone revocation interval = 0
>
> rgw keystone implicit tenants = true
>
> rgw keystone admin domain = default
>
> rgw keystone admin project = service
>
> rgw keystone admin user = swift
>
> rgw keystone admin password = swift_osp_password
>
> rgw s3 auth use keystone = true
>
> rgw s3 auth order = local, external
>
> rgw user default quota max size = -1
>
> rgw swift account in url = true
>
> rgw dynamic resharding = false
>
> rgw bucket resharding = false
>
> rgw enable usage log = true
>
> rgw usage log tick interval = 30
>
> rgw usage log flush threshold = 1024
>
> rgw usage max shards = 32
>
> rgw usage max user shards = 1
>
> rgw verify ssl = false
>
>
>
> Please advise.
>
> Thank you in advance for your help,
>
> Vladimir
>



