Re: Ceph questions regarding auth and return on PUT from radosgw

OK, so that's the reason for the keys then: to be able to stop an OSD
from connecting to the cluster? Won't "ceph osd down" or something
like that stop it from connecting?
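Something like this is what I had in mind (the OSD id is made up):

    # mark osd.3 down so the cluster stops sending I/O its way
    ceph osd down 3

    # or take it out of the data distribution entirely
    ceph osd out 3

Although I suppose a daemon that is still running would just mark
itself back up again.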

Also, the rados command: doesn't that use the config file to connect
to the OSDs? If I have the correct config file, won't that stop me from
accidentally connecting to some other cluster?
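I.e. something like this (paths made up):

    # point rados explicitly at the intended cluster's config and keyring
    rados -c /etc/ceph/prod.conf -k /etc/ceph/prod.keyring lspools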

I can see how it might be more relevant when you have 1000 OSDs and
lots of people managing the cluster, but in my case the cluster won't
be all that big. At first I guess only 3 OSDs, which may grow quite a
bit of course, but I can't see us needing even 30 OSDs (we're not a
cloud hosting company and won't be giving outside access directly to
the storage).

Actually, it wouldn't be strictly necessary for us to run the radosgw;
however, we do want an HTTP interface, and radosgw gives us one. I'm
not aware of any other HTTP interface available right now (apart from
writing one ourselves, but we'd much rather use something that is
tested and works; we're a small company).

On Mon, Jun 11, 2012 at 2:51 PM, Wido den Hollander <wido@xxxxxxxxx> wrote:
>
>
> On 06/11/2012 02:41 PM, John Axel Eriksson wrote:
>>
>> Oh sorry. I don't think I was clear on the auth question. What I meant
>> was whether the admin.keyring and keys for the OSDs are really necessary
>> in a private ceph cluster.
>
>
> I'd say: Yes
>
> With keys in place you can ensure that a rogue machine doesn't start
> bringing down your cluster.
>
> Scenario: you take a machine in a cluster offline and let it sit in
> storage for a while; a couple of months later somebody wonders what
> that machine does.
>
> They plug it into a switch and power and boot it. Suddenly this old
> machine, which is way behind on software, starts participating in your
> cluster again and could potentially bring it all down.
>
> But it could be even simpler. You set up a second Ceph cluster for
> some tests, but while playing with the 'rados' command you accidentally
> connect to the wrong cluster and issue an "rmpool". Oops!
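> Something along these lines, with a made-up pool name (newer versions
> make you repeat the pool name to confirm):
>
>   rados rmpool important-data important-data --yes-i-really-really-mean-it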
>
> With auth in place you have a barrier against such situations.
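>
> A minimal sketch of what that looks like in ceph.conf (exact option
> names vary between releases):
>
>   [global]
>       auth supported = cephx
>
> Every daemon and client then has to present a key from its keyring
> before the monitors let it in.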
>
> Wido
>
>
>>
>> On Mon, Jun 11, 2012 at 2:40 PM, Wido den Hollander <wido@xxxxxxxxx>
>>  wrote:
>>>
>>> Hi,
>>>
>>>
>>> On 06/11/2012 02:32 PM, John Axel Eriksson wrote:
>>>>
>>>>
>>>> Is there a point to having auth enabled if I run ceph on an internal
>>>> network, only for use with radosgw (i.e. the object storage part)?
>>>> It seems to complicate the setup unnecessarily, and ceph doesn't use
>>>> encryption anyway as far as I understand; it's only auth.
>>>> If my network is trusted and I know who has access (and I trust them),
>>>> is there a point in complicating the setup with key-based auth?
>>>>
>>>
>>> The RADOS Gateway uses the S3 protocol and that requires authentication
>>> and
>>> authorization.
>>>
>>> When creating a bucket/pool and storing objects, they have to be mapped
>>> to a user inside the RADOS GW.
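>>>
>>> For example, a hypothetical user (uid and display name made up):
>>>
>>>   radosgw-admin user create --uid=johndoe --display-name="John Doe"
>>>
>>> which prints the S3 access key and secret key that user then signs
>>> requests with.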
>>>
>>> I don't know what your exact use-case is, but if it's only internal,
>>> wouldn't it be possible to use RADOS natively?
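>>>
>>> E.g. with the command line tool (pool and object names made up):
>>>
>>>   rados -p mypool put myobject ./somefile
>>>   rados -p mypool get myobject /tmp/somefile.copy
>>>
>>> or with librados directly from your application.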
>>>
>>>
>>>> Also, when PUTting something through radosgw, does ceph/rgw return as
>>>> soon as all data has been received, or does it return only
>>>> when it has ensured N replicas? (I've seen quite a delay after all
>>>> data has been sent before my PUT returns.) I'm using nginx (1.2), by
>>>> the way.
>>>
>>>
>>>
>>> IIRC it returns when all replicas have received and stored the object.
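>>>
>>> So the PUT doesn't return until the slowest replica has the data. You
>>> can check how many replicas that is with something like this (assuming
>>> the default rgw data pool; yours may be named differently):
>>>
>>>   ceph osd pool get .rgw.buckets size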
>>>
>>> Wido
>>>
>>>>
>>>> Thanks!
>>>>
>>>> John