On top of this, in my attempts to create a read-only user I think I
found another issue:
radosgw-admin subuser create --subuser=s3test:fun --key-type=s3 --gen-access-key --gen-secret
radosgw-admin subuser modify --subuser=s3test:fun --access="read"
{
    "user_id": "s3test",
    "display_name": "s3test",
    "email": "",
    "suspended": 0,
    "max_buckets": 1,
    "auid": 0,
    "subusers": [
        {
            "id": "s3test:fun",
            "permissions": "read"
        }
    ],
    "keys": [
        {
            "user": "s3test:fun",
            "access_key": "N8Z8IJ1JK6A6ECB41VLV"
        },
        {
            "user": "s3test",
            "access_key": "ZREKTGN633R2U87OS8ZN"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": true,
        "max_size_kb": -1,
        "max_objects": 2
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
The above results in a set of keys that can only create buckets, not
delete them. I can't create objects with this user either, which is half
the goal, but I can still create a million buckets if I wanted, which can
make things very painful for the primary user. Is there a way to set it
so that the subuser cannot create buckets either?
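For reference, the behaviour above can be reproduced with a boto session
along these lines (a rough sketch only; the gateway host and the
credentials are placeholders, not the real values from the output above):

# Rough sketch: connect with the subuser's S3 keys and try the operations.
# Host and credentials are placeholders, not the real values above.
import boto
import boto.s3.connection
from boto.exception import S3ResponseError

sub_conn = boto.connect_s3(
    aws_access_key_id='SUBUSER_ACCESS_KEY',      # placeholder
    aws_secret_access_key='SUBUSER_SECRET_KEY',  # placeholder
    host='rgw.example.com',                      # placeholder RGW endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

# Bucket creation still succeeds even though the subuser only has "read":
bucket = sub_conn.create_bucket('subuser-test')

# Object writes and bucket deletes are refused, as described above:
try:
    bucket.new_key('probe.txt').set_contents_from_string('x')
except S3ResponseError as e:
    print('object write denied: %s %s' % (e.status, e.error_code))

try:
    sub_conn.delete_bucket('subuser-test')
except S3ResponseError as e:
    print('bucket delete denied: %s %s' % (e.status, e.error_code))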
On 1/28/16 10:14 AM, seapasulli@xxxxxxxxxxxx wrote:
Ah, thanks for the clarification. Sorry. So even setting max_buckets to
0 will not prevent them from creating buckets:
lacadmin@ko35-10:~$ radosgw-admin user modify --uid=s3test --max-buckets=0
{
    "user_id": "s3test",
    "display_name": "s3test",
    "email": "",
    "suspended": 0,
    "max_buckets": 0,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "s3test:whoami"
        },
        {
            "user": "s3test"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": true,
        "max_size_kb": -1,
        "max_objects": 2
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}
-----------------------------
-----------------------------
-----------------------------
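(For context, the conn object used in the sessions below was created
roughly like this; the endpoint and credentials are placeholders, since
the secret keys were stripped from the output above:)

# Sketch of how conn was set up; endpoint and credentials are placeholders.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='S3TEST_ACCESS_KEY',      # placeholder for the s3test key
    aws_secret_access_key='S3TEST_SECRET_KEY',  # placeholder
    host='rgw.example.com',                     # placeholder RGW endpoint
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)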
In [5]: conn.get_canonical_user_id()
Out[5]: u's3test'
In [6]: conn.create_bucket('test')
Out[6]: <Bucket: test>
In [7]: conn.create_bucket('test_one')
Out[7]: <Bucket: test_one>
In [8]: conn.create_bucket('test_two')
Out[8]: <Bucket: test_two>
In [9]: conn.create_bucket('test_three')
Out[9]: <Bucket: test_three>
In [10]: conn.create_bucket('test_four')
Out[10]: <Bucket: test_four>
In [11]: for bucket in conn.get_all_buckets():
....: print(bucket.name)
....:
test
test_four
test_one
test_three
test_two
In [12]: for bucket in conn.get_all_buckets():
....: conn.delete_bucket(bucket.name)
----------------------------
----------------------------
----------------------------
lacadmin@ko35-10:~$ radosgw-admin user modify --uid=s3test --max-buckets=1
{
    "user_id": "s3test",
    "display_name": "s3test",
    "email": "",
    "suspended": 0,
    "max_buckets": 1,
-----------------------------
-----------------------------
-----------------------------
In [15]: conn.create_bucket('s3test_one')
Out[15]: <Bucket: s3test_one>
In [16]: conn.create_bucket('s3test_two')
---------------------------------------------------------------------------
S3ResponseError                           Traceback (most recent call last)
<ipython-input-16-9670addb5358> in <module>()
----> 1 conn.create_bucket('s3test_two')

/usr/lib/python2.7/dist-packages/boto/s3/connection.pyc in create_bucket(self, bucket_name, headers, location, policy)
    502         else:
    503             raise self.provider.storage_response_error(
--> 504                 response.status, response.reason, body)
    505
    506     def delete_bucket(self, bucket, headers=None):

S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?><Error><Code>TooManyBuckets</Code></Error>