Rados Gateway and Swift create containers/buckets that cannot be opened

I have a recent ceph (0.85-1109-g73d7be0) configured to use keystone for authentication:

$ cat ceph.conf
...
[client.radosgw.gateway]
host = ceph4
keyring = /etc/ceph/ceph.rados.gateway.keyring
rgw_socket_path = /var/run/ceph/$name.sock
log_file = /var/log/ceph/radosgw.log
rgw_data = /var/lib/ceph/radosgw/$cluster-$id
rgw_dns_name = ceph4
rgw print continue = false
debug rgw = 20
rgw keystone url = http://stack1:35357
rgw keystone admin token = tokentoken
rgw keystone accepted roles = admin Member _member_
rgw keystone token cache size = 500
rgw keystone revocation interval = 500
rgw s3 auth use keystone = true
nss db path = /var/ceph/nss/

So ceph4 is the rgw and stack1 is a devstack setup with keystone endpoints for S3 and Swift pointing to the ceph4 host:

$ keystone endpoint-list
...
| b884053b2c6f4217ad643c25c001217b | RegionOne | http://ceph4           | http://ceph4           | http://ceph4           | be62ab8531d143a7bce5ae6020d13918 |
| d7a8338dd5684f5d8dfde406b0780462 | RegionOne | http://ceph4/swift/v1/ | http://ceph4/swift/v1/ | http://ceph4/swift/v1/ | c2d4550d71e94a6a966af810c9ad0568 |
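For context, pointing the Swift endpoint at RGW's /swift/v1 prefix amounts to something like the following keystoneclient call (just a sketch; devstack set these endpoints up for me, and the admin token/URL below are taken from ceph.conf above):

from keystoneclient.v2_0 import client

# Sketch only: register the object-store endpoint against RGW's Swift prefix.
# Token and admin URL come from ceph.conf; the service id is the one shown in
# the endpoint listing above.
keystone = client.Client(token='tokentoken', endpoint='http://stack1:35357/v2.0')
keystone.endpoints.create(
    region='RegionOne',
    service_id='c2d4550d71e94a6a966af810c9ad0568',
    publicurl='http://ceph4/swift/v1/',
    internalurl='http://ceph4/swift/v1/',
    adminurl='http://ceph4/swift/v1/',
)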

When I create some buckets and keys using the S3 API (boto), I can list them and their contents (see the script attached at the end):

demo-bucket0	2014-10-08T05:02:03.000Z
	hello.txt	12	2014-10-08T05:02:06.000Z
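
(For reference, the bucket and object were created with boto calls along these lines; just a sketch, reusing the same connection settings as the attached listing script:)

import boto
import boto.s3.connection

# Sketch: create a bucket and put a small object through the RGW S3 API,
# using the same credentials/host as the attached listing script.
conn = boto.connect_s3(
    aws_access_key_id='redacted',
    aws_secret_access_key='redacted',
    host='ceph4',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('demo-bucket0')
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello world\n')   # 12 bytes, matching the listing above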

When I try a similar thing via swift:
$ swift upload container0 file
Object PUT failed: http://ceph4/swift/v1/container0/local.conf 404 Not Found NoSuchBucket
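
What the swift CLI is doing here corresponds roughly to this python-swiftclient call (a sketch only; the keystone auth URL, user, password and tenant below are guesses based on the usual devstack demo setup, not the real values):

from swiftclient.client import Connection

# Sketch: the equivalent object PUT via python-swiftclient.  The keystone
# public URL and the demo credentials are assumptions.
conn = Connection(
    authurl='http://stack1:5000/v2.0',
    user='demo',
    key='secret',
    tenant_name='demo',
    auth_version='2',
)

conn.put_container('container0')
with open('local.conf', 'rb') as f:
    conn.put_object('container0', 'local.conf', contents=f)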

Hmm - using swift to list containers shows:

$ swift list
/container0
demo-bucket0

So a new bucket has been created, but note that a leading '/' has been added to the name. Retrying my simple S3 listing now gives:

/container0	2014-10-08T05:02:19.000Z
Traceback (most recent call last):
  File "./s3-test-ls.py", line 24, in <module>
    for key in bucket.list():
File "/usr/lib/python2.7/dist-packages/boto/s3/bucketlistresultset.py", line 30, in bucket_lister
    delimiter=delimiter, headers=headers)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 392, in get_all_keys
    '', headers, **params)
File "/usr/lib/python2.7/dist-packages/boto/s3/bucket.py", line 343, in _get_all
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 404 Not Found
<?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchBucket</Code></Error>


I'm guessing the leading '/' is the culprit.
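
An easy way to confirm that the slash really is part of the stored bucket name is to print the raw names boto returns (reusing the conn from the attached listing script):

# Print the raw bucket names as returned over the S3 API; the swift listing
# above suggests '/container0' will show up here alongside 'demo-bucket0'.
for b in conn.get_all_buckets():
    print repr(b.name)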

The rgw logs (below) seem to show that the leading '/' is stripped off, and then the bucket cannot be opened or listed because it does not exist:

2014-10-08 18:39:24.764328 7f195bfd7700 1 ====== starting new request req=0x1284270 =====
2014-10-08 18:39:24.764337 7f195bfd7700 2 req 17:0.000010::GET /container0/::initializing
2014-10-08 18:39:24.764340 7f195bfd7700 10 host=ceph4 rgw_dns_name=ceph4
2014-10-08 18:39:24.764361 7f195bfd7700 10 s->object=<NULL> s->bucket=container0
2014-10-08 18:39:24.764366 7f195bfd7700 2 req 17:0.000038:s3:GET /container0/::getting op
2014-10-08 18:39:24.764369 7f195bfd7700 2 req 17:0.000042:s3:GET /container0/:list_bucket:authorizing
2014-10-08 18:39:24.764372 7f195bfd7700 20 s3 keystone: trying keystone auth
2014-10-08 18:39:24.764390 7f195bfd7700 10 get_canon_resource(): dest=/container0/
2014-10-08 18:39:24.764420 7f195bfd7700 20 sending request to http://stack1:35357/v2.0/s3tokens
2014-10-08 18:39:24.835591 7f195bfd7700 5 s3 keystone: validated token: demo:demo expires: 1412750365
2014-10-08 18:39:24.835671 7f195bfd7700 20 get_obj_state: rctx=0x1285820 obj=.users.uid:f535ae4f66654326807c556acff2697e state=0x12c3348 s->prefetch_data=0
2014-10-08 18:39:24.835686 7f195bfd7700 10 cache get: name=.users.uid+f535ae4f66654326807c556acff2697e : hit
2014-10-08 18:39:24.835694 7f195bfd7700 20 get_obj_state: s->obj_tag was set empty
2014-10-08 18:39:24.835700 7f195bfd7700 10 cache get: name=.users.uid+f535ae4f66654326807c556acff2697e : hit
2014-10-08 18:39:24.835731 7f195bfd7700 2 req 17:0.071403:s3:GET /container0/:list_bucket:reading permissions
2014-10-08 18:39:24.835756 7f195bfd7700 20 get_obj_state: rctx=0x7f195bfd61d0 obj=.rgw:container0 state=0x12901c8 s->prefetch_data=0
2014-10-08 18:39:24.835763 7f195bfd7700 10 cache get: name=.rgw+container0 : type miss (requested=22, cached=0)
2014-10-08 18:39:24.837125 7f195bfd7700 10 cache put: name=.rgw+container0
2014-10-08 18:39:24.837160 7f195bfd7700 10 moving .rgw+container0 to cache LRU end
2014-10-08 18:39:24.837180 7f195bfd7700 10 read_permissions on container0(@[]): only_bucket=0 ret=-2002
2014-10-08 18:39:24.837231 7f195bfd7700 2 req 17:0.072903:s3:GET /container0/:list_bucket:http status=404
2014-10-08 18:39:24.837239 7f195bfd7700 1 ====== req done req=0x1284270 http_status=404 ======
2014-10-08 18:39:24.837253 7f195bfd7700 20 process_request() returned -2002




#!/usr/bin/python
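# s3-test-ls.py: list all buckets visible to this user and the keys in each.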

import boto
import boto.s3.connection

access_key = 'redacted'
secret_key = 'redacted'

conn = boto.connect_s3(
    aws_access_key_id = access_key,
    aws_secret_access_key = secret_key,
    host = 'ceph4',
    is_secure=False,               # uncomment if you are not using ssl
    calling_format = boto.s3.connection.OrdinaryCallingFormat(),
    )

for bucket in conn.get_all_buckets():
    print "{name}\t{created}".format(
        name = bucket.name,
        created = bucket.creation_date,
        )
    for key in bucket.list():
        print "\t{name}\t{size}\t{modified}".format(
            name = key.name,
            size = key.size,
            modified = key.last_modified,
            )


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
