Re: buckets and users

By the way, is it possible to run two radosgw instances on the same host?

I think I have created the zone, but I'm not sure it was done correctly,
because it used the default pool names even though I had changed them in
the JSON file I provided.
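
For reference, the zone-creation step described above would look roughly
like this (a sketch following the Firefly federated-config docs; the zone
name `ppr` comes from the config below, and `zone-ppr.json` is a
placeholder for the JSON file mentioned above):

```shell
# Load the zone definition (including any non-default pool names),
# then rebuild the region map so the gateway picks up the new zone.
radosgw-admin zone set --rgw-zone=ppr --infile zone-ppr.json
radosgw-admin regionmap update
```

If `radosgw-admin zone get --rgw-zone=ppr` still shows the default pool
names afterwards, the JSON was likely not applied to the zone you think
it was.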

Now I am trying to run ceph-radosgw with two different entries in the
ceph.conf file, but without success. Example:

[client.radosgw.gw]
host = GATEWAY
keyring = /etc/ceph/keyring.radosgw.gw
rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
log file = /var/log/ceph/client.radosgw.gateway.log
rgw print continue = false
rgw dns name = gateway.local
rgw enable ops log = false
rgw enable usage log = true
rgw usage log tick interval = 30
rgw usage log flush threshold = 1024
rgw usage max shards = 32
rgw usage max user shards = 1
rgw cache lru size = 15000
rgw thread pool size = 2048

#[client.radosgw.gw.env2]
#host = GATEWAY
#keyring = /etc/ceph/keyring.radosgw.gw
#rgw socket path = /var/run/ceph/ceph.env2.radosgw.gateway.fastcgi.sock
#log file = /var/log/ceph/client.env2.radosgw.gateway.log
#rgw print continue = false
#rgw dns name = cephppr.local
#rgw enable ops log = false
#rgw enable usage log = true
#rgw usage log tick interval = 30
#rgw usage log flush threshold = 1024
#rgw usage max shards = 32
#rgw usage max user shards = 1
#rgw cache lru size = 15000
#rgw thread pool size = 2048
#rgw zone = ppr

It fails to create the socket:
2014-11-06 15:39:08.862364 7f80cc670880  0 ceph version 0.80.5 (38b73c67d375a2552d8ed67843c8a65c2c0feba6), process radosgw, pid 7930
2014-11-06 15:39:08.870429 7f80cc670880  0 librados: client.radosgw.gw.env2 authentication error (1) Operation not permitted
2014-11-06 15:39:08.870889 7f80cc670880 -1 Couldn't init storage provider (RADOS)
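
(Editor's note: the "authentication error (1) Operation not permitted"
line suggests cephx has no key for client.radosgw.gw.env2 — the
commented-out section reuses /etc/ceph/keyring.radosgw.gw, which
presumably only holds the key for client.radosgw.gw. A minimal sketch of
creating a separate key for the second instance, using the names from the
config above; the caps shown are the usual radosgw ones, adjust as
needed:)

```shell
# Create a cephx key for the second gateway instance and write it to its
# own keyring file, then point the [client.radosgw.gw.env2] section at it:
#   keyring = /etc/ceph/keyring.radosgw.gw.env2
ceph auth get-or-create client.radosgw.gw.env2 \
    mon 'allow rwx' osd 'allow rwx' \
    -o /etc/ceph/keyring.radosgw.gw.env2
```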


What am I doing wrong?

Marco Garcês
#sysadmin
Maputo - Mozambique
[Skype] marcogarces


On Thu, Nov 6, 2014 at 10:11 AM, Marco Garcês <marco@xxxxxxxxx> wrote:
> Your solution of pre-pending the environment name to the bucket was
> my first choice, but at the moment I can't ask the devs to change the
> code to do that. For now I have to stick with the zones solution.
> Should I follow the federated zones docs
> (http://ceph.com/docs/master/radosgw/federated-config/) but skip the
> sync step?
>
> Thank you,
>
> Marco Garcês
>
> On Wed, Nov 5, 2014 at 8:13 PM, Craig Lewis <clewis@xxxxxxxxxxxxxxxxxx> wrote:
>> You could setup dedicated zones for each environment, and not
>> replicate between them.
>>
>> Each zone would have its own URL, but you would be able to re-use
>> usernames and bucket names.  If different URLs are a problem, you
>> might be able to get around that in the load balancer or the web
>> servers.  I wouldn't really recommend that, but it's possible.
>>
>>
>> I have a similar requirement.  I was able to pre-pend the
>> environment name to the bucket in my client code, which made things
>> much easier.
>>
>>
>> On Wed, Nov 5, 2014 at 8:52 AM, Marco Garcês <marco@xxxxxxxxx> wrote:
>>> Hi there,
>>>
>>> I have this situation where I'm using the same Ceph cluster (with
>>> radosgw) for two different environments, QUAL and PRE-PRODUCTION.
>>>
>>> I need different users for each environment, but I need to create
>>> buckets with the same names in both. I understand there is no way to
>>> have two buckets with the same name, but how can I work around this?
>>> Perhaps by creating a different pool for each user?
>>>
>>> Can you help me? Thank you in advance, my best regards,
>>>
>>> Marco Garcês
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




