It seems that your request did find its way to the gateway, but the question is why it doesn't match a known operation. This really looks like a valid list-all-buckets request, so I'm not sure what's happening.
I'd look at two things first. One is the '{fqdn}' string: I'm not sure whether that's the actual string you have, or whether you just replaced it for the sake of anonymity.
I replaced it for anonymity, though I run on a private IP, but still :)
The second is the port number, which should be fine, but maybe the fact that it appears as part of the script URI triggers some issue.
Hmm, will try with the default port 80... though I would assume that anything before the slash gets cut off as part of the hostname[:port] portion.
Makes no difference using port 80.
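For what it's worth, here is a rough sketch of the kind of comparison that has to happen on the gateway side: the Host header may arrive as 'hostname:port', while rgw_dns_name is usually just the bare hostname, so any port suffix has to be stripped before the two can match. This is only an illustration of the idea, not the actual RGW code, and gw.example.com is a placeholder:

```python
# Illustration only: a Host header carrying a ":port" suffix will not
# match a configured dns name unless the port is stripped first.
def host_matches(host_header, rgw_dns_name):
    # Drop an optional ":port" suffix from the Host header.
    hostname = host_header.rsplit(':', 1)[0] if ':' in host_header else host_header
    return hostname == rgw_dns_name

print(host_matches('gw.example.com:8005', 'gw.example.com'))   # matches
print(host_matches('gw.example.com', 'gw.example.com'))        # matches
print(host_matches('gw.example.com:80', 'other.example.com'))  # does not
```

Your debug log below shows host and rgw_dns_name printed on the same line, which is a quick way to check whether that comparison is the problem.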
...
2015-02-27 18:15:43.402729 7f37889e0700 20 SERVER_PORT=80
2015-02-27 18:15:43.402747 7f37889e0700 20 SERVER_PROTOCOL=HTTP/1.1
2015-02-27 18:15:43.402765 7f37889e0700 20 SERVER_SIGNATURE=
2015-02-27 18:15:43.402783 7f37889e0700 20 SERVER_SOFTWARE=Apache/2.2.22 (Fedora)
2015-02-27 18:15:43.402814 7f37889e0700 1 ====== starting new request req=0x7f37b80083d0 =====
2015-02-27 18:15:43.403157 7f37889e0700 2 req 1:0.000345::GET /::initializing
2015-02-27 18:15:43.403491 7f37889e0700 10 host={fqdn} rgw_dns_name={fqdn}
2015-02-27 18:15:43.404624 7f37889e0700 2 req 1:0.001816::GET /::http status=405
2015-02-27 18:15:43.404676 7f37889e0700 1 ====== req done req=0x7f37b80083d0 http_status=405 ======
2015-02-27 18:15:43.404901 7f37889e0700 20 process_request() returned -2003
I'm not sure how to define my radosgw user; I made one with full rights and key type s3:
# radosgw-admin user info --uid='{user name}'
{ "user_id": "{user name}",
  "display_name": "test user for testlab",
  "email": "{email}",
  "suspended": 0,
  "max_buckets": 1000,
  "auid": 0,
  "subusers": [],
  "keys": [
        { "user": "{user name}",
          "access_key": "WL4EJJYTLVYXEHNR6QSA",
          "secret_key": "{secret}"}],
  "swift_keys": [],
  "caps": [],
  "op_mask": "read, write, delete",
  "default_placement": "",
  "placement_tags": [],
  "bucket_quota": { "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1},
  "user_quota": { "enabled": false,
      "max_size_kb": -1,
      "max_objects": -1},
  "temp_url_keys": []}
When authenticating to the S3 API, should I use the unencrypted access key string, or the encrypted one seen above plus my secret?
How do I verify whether I authenticate successfully through S3? Maybe this is my problem?
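On the key question: with S3 the access key is sent as-is in the Authorization header; it is never encrypted. The secret key is never transmitted at all; it is only used to HMAC-sign a canonical string derived from the request. A rough sketch of how an AWS signature v2 Authorization header is built for a simple GET (the date and the '{secret}' placeholder are just illustrative values):

```python
import base64
import hashlib
import hmac

def s3_auth_header(access_key, secret_key, method, date, resource):
    # Canonical string for a bare request with no content-md5,
    # content-type, or extra amz headers.
    string_to_sign = "%s\n\n\n%s\n%s" % (method, date, resource)
    digest = hmac.new(secret_key.encode('utf-8'),
                      string_to_sign.encode('utf-8'),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode('utf-8')
    # The access key travels in the clear; only the signature proves
    # possession of the secret.
    return "AWS %s:%s" % (access_key, signature)

header = s3_auth_header('WL4EJJYTLVYXEHNR6QSA', '{secret}',
                        'GET', 'Fri, 27 Feb 2015 18:15:43 GMT', '/')
print(header)
```

Note that a signature failure would normally show up as 403 AccessDenied; a 405 suggests the gateway rejects the request before it ever checks the signature.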
test example:
#!/usr/bin/python
import boto
import boto.s3.connection

access_key = 'WL4EJJYTLVYXEHNR6QSA'
secret_key = '{secret}'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='{fqdn}', port=8005, debug=1,
    is_secure=False,  # no SSL on this gateway
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

## Any access on the conn object fails with 405 Method Not Allowed
for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name=bucket.name,
        created=bucket.creation_date,
    )

bucket = conn.create_bucket('my-new-bucket')
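One more thing that may be worth checking is the calling format: with OrdinaryCallingFormat the bucket name goes into the URL path, while SubdomainCallingFormat puts it into the hostname, which only works if a wildcard DNS record points at the gateway. A toy illustration of the two URL shapes (this is not boto's actual code, and gw.example.com is a placeholder):

```python
# Toy illustration of path-style vs. virtual-host-style bucket URLs;
# real clients build these from the calling_format setting.
def bucket_url(host, bucket, ordinary=True):
    if ordinary:
        # Path-style: works without wildcard DNS.
        return "http://%s/%s/" % (host, bucket)
    # Virtual-host style: needs *.gw.example.com resolving to the gateway.
    return "http://%s.%s/" % (bucket, host)

print(bucket_url('gw.example.com:8005', 'my-new-bucket'))
print(bucket_url('gw.example.com:8005', 'my-new-bucket', ordinary=False))
```

Since your script already uses OrdinaryCallingFormat, the virtual-host DNS requirement should not apply here, but it is one variable to rule out.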
By the way, how does one control/map a user to/with a Ceph pool? Or will a user with full rights be able to create Ceph pools through the admin API?
I added a pool to radosgw with the --pool=owmblob option before creating my user, though I'm not sure that this will 'limit' a user to a default pool like that.
I would have thought that this would set the default_placement attribute on the user.
Any good URLs to docs on understanding such matters as ACLs, users, and pool mapping in a gateway would also be appreciated.
# radosgw-admin pools list
[
{ "name": "owmblob"}]