Re: RGW/ServiceMap etc

I'm convinced this is fine.  To Sage's point, if I understand
correctly, yes: radosgw-admin has paths that initialize less of the
RADOS cluster, and we should certainly use those.

My primary concern, on balance, is with exposing internal classes as
APIs, and what Orit is describing does avoid that.  Secondarily, I am
concerned about memory and resource growth over time, not about
issues with restarts of ceph-mgr.  Hopefully the minimal bootstrap
approach will make that concern moot.

Matt

On Mon, Mar 12, 2018 at 11:34 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Sun, Mar 11, 2018 at 11:14 AM, Orit Wasserman <owasserm@xxxxxxxxxx> wrote:
>> Hi John,
>>
>> On Sun, Mar 11, 2018 at 12:55 PM, John Spray <jspray@xxxxxxxxxx> wrote:
>>> On Sat, Mar 10, 2018 at 12:21 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>>>> On Fri, Mar 9, 2018 at 3:44 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
>>>>> On Fri, 9 Mar 2018, Casey Bodley wrote:
>>>>>> On 03/09/2018 11:55 AM, Sage Weil wrote:
>>>>>> > On Fri, 9 Mar 2018, Casey Bodley wrote:
>>>>>> > > I haven't done much with ceph-ansible, but I'm imagining it looking like
>>>>>> > > this:
>>>>>> > >
>>>>>> > > In the rgw configuration, we'd have a variable like
>>>>>> > > radosgw_management_user
>>>>>> > > that would trigger a 'radosgw-admin user create' command to create it and
>>>>>> > > remember its keys for use during the ceph-mgr deployment.
>>>>>> > >
>>>>>> > > If the ceph-mgr deployment has to happen first, it could always generate
>>>>>> > > its
>>>>>> > > own secret/access keys (which is trivial to do), and supply them later
>>>>>> > > during
>>>>>> > > rgw deployment via 'radosgw-admin user create --access-key=X --secret=Y'.
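>>>>>> > >
>>>>>> > > A rough sketch of that second flow, purely illustrative (the uid,
>>>>>> > > display name, and key-generation details are assumptions, not an
>>>>>> > > existing interface):
>>>>>> > >
>>>>>> > >   import secrets
>>>>>> > >   import string
>>>>>> > >   import subprocess
>>>>>> > >
>>>>>> > >   def generate_s3_keys():
>>>>>> > >       # pre-generate an S3-style key pair (20-char access key,
>>>>>> > >       # 40-char secret), trivial to do ahead of rgw deployment
>>>>>> > >       alphabet = string.ascii_uppercase + string.digits
>>>>>> > >       access = ''.join(secrets.choice(alphabet) for _ in range(20))
>>>>>> > >       secret = secrets.token_urlsafe(30)  # 40 url-safe chars
>>>>>> > >       return access, secret
>>>>>> > >
>>>>>> > >   def create_management_user(uid='dashboard-admin'):
>>>>>> > >       access, secret = generate_s3_keys()
>>>>>> > >       # later, during rgw deployment, hand the keys over
>>>>>> > >       subprocess.check_call([
>>>>>> > >           'radosgw-admin', 'user', 'create',
>>>>>> > >           '--uid=' + uid, '--display-name=Dashboard admin',
>>>>>> > >           '--access-key=' + access, '--secret=' + secret,
>>>>>> > >       ])
>>>>>> > >       return access, secret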
>>>>>> > I think this is missing the bigger picture.  Setting aside the key issue
>>>>>> > for a minute, there needs to be some API endpoint that allows you to
>>>>>> > manipulate the zones/zone groups/realms (e.g., to create the radosgw
>>>>>> > cluster to begin with).  Creating an initial key for that zone is just one
>>>>>> > piece of that.
>>>>>> >
>>>>>> > For example, a dashboard user should be able to click on the RGW tab and
>>>>>> > create a new realm or zone and then kick off work to instantiate the
>>>>>> > radosgw daemons to serve it (via kubernetes, ansible, or whatever).
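>>>>>> >
>>>>>> > (To make the scope concrete, here is a sketch of the sequence such a
>>>>>> > 'create' button would have to drive today, shelling out to
>>>>>> > radosgw-admin; the wrapper itself is illustrative:)
>>>>>> >
>>>>>> >   import subprocess
>>>>>> >
>>>>>> >   def create_realm_zone(realm, zonegroup, zone, endpoints):
>>>>>> >       # standard multisite bootstrap: realm -> zonegroup -> zone,
>>>>>> >       # then commit the period so the config takes effect
>>>>>> >       def run(*args):
>>>>>> >           subprocess.check_call(('radosgw-admin',) + args)
>>>>>> >       run('realm', 'create', '--rgw-realm=' + realm, '--default')
>>>>>> >       run('zonegroup', 'create', '--rgw-zonegroup=' + zonegroup,
>>>>>> >           '--endpoints=' + endpoints, '--master', '--default')
>>>>>> >       run('zone', 'create', '--rgw-zonegroup=' + zonegroup,
>>>>>> >           '--rgw-zone=' + zone, '--endpoints=' + endpoints,
>>>>>> >           '--master', '--default')
>>>>>> >       run('period', 'update', '--commit')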
>>>>>>
>>>>>> Hi Sage,
>>>>>>
>>>>>> I didn't know that we were looking to drive new cluster deployments through
>>>>>> ceph-mgr. But I think that the multisite configuration steps to make that work
>>>>>> belong in the deployment tool itself. Ali has done work on this for
>>>>>> ceph-ansible at https://github.com/ceph/ceph-ansible/pull/1944, which runs all
>>>>>> of the radosgw-admin commands on the new cluster to add it to an existing
>>>>>> multisite realm.
>>>>>
>>>>> I think the general goal is that any management functions beyond the
>>>>> initial bootstrap of the cluster (mon + mgr) can be driven via the
>>>>> management ui.  But even setting aside the multi-cluster parts, the very
>>>>> first thing I would expect to see on the RGW pane of the dashboard is a
>>>>> view of the realms, zonegroups, and zones, with a bunch of 'create'
>>>>> buttons.  And the first thing you'd see on a fresh cluster is no zones--I
>>>>> don't think we want to force the user to predeclare that they will be
>>>>> creating a new realm/zg/zone when they create the cluster.
>>>>>
>>>>> Even for the multi-cluster parts, I would expect to see a limited view of
>>>>> that state.  The RBD panel, for instance, shows all the rbd-mirror
>>>>> daemons and their state.  Operations like switching masters or
>>>>> triggering a period change or whatever seem like a natural fit here.
>>>>>
>>>>> Even if we decide these operations don't fit or don't belong in the
>>>>> per-cluster dashboard, we'll want them in some meta-dashboard (e.g.,
>>>>> cloudforms), and we'll want an API that that system can trigger to make
>>>>> things happen.  That could be ansible, yes, but it seems like we'd want
>>>>> to keep our options open.
>>>>
>>>> Wanted to pull this out a little more and see if everybody's on the same page.
>>>>
>>>> John, your initial email was very specific (so much so that I'm
>>>> not entirely sure what actual problem you're interested in here,
>>>> though I think the admin API being discussed is how we create RGW
>>>> users?).
>>>>
>>>> But I'd assume one of the most basic functions the manager is
>>>> interested in is creating new RGW instances within a cluster, whether
>>>> there's already an RGW zone or whether the admin just clicked a
>>>> "create S3 service" button.
>>>
>>> Yep.  While this thread was prompted by a very specific issue
>>> (dashboard uses the admin REST API, but needs a way to learn where the API
>>> is and how to authenticate), the general context is that we would like
>>> to create a user interface that enables people to manage their Ceph
>>> clusters with a minimum of typing.
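>>>
>>> (For reference, a minimal sketch of what calling the admin REST API
>>> looks like once the endpoint and a key pair are known; the 'default'
>>> region string and the requests-aws4auth dependency are assumptions:)
>>>
>>>   import requests
>>>   from requests_aws4auth import AWS4Auth  # S3-style request signing
>>>
>>>   def get_user_info(endpoint, access_key, secret_key, uid):
>>>       # the two things the dashboard must learn out-of-band today:
>>>       # where the API lives, and a key pair allowed to call it
>>>       auth = AWS4Auth(access_key, secret_key, 'default', 's3')
>>>       resp = requests.get(endpoint + '/admin/user',
>>>                           params={'uid': uid, 'format': 'json'},
>>>                           auth=auth)
>>>       resp.raise_for_status()
>>>       return resp.json()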
>>>
>>
>> +1
>>
>>> We already have the interfaces the UI needs for CephFS (libcephfs and
>>> mon commands) and RBD (librbd), and *most* of RGW (admin REST API).
>>> I'd like RGW to be just as much of a first class citizen in the UI as
>>> everything else, which is my motivation for looking for ways to avoid
>>> requiring extra out-of-band configuration before users can do RGW
>>> stuff in the UI.
>>>
>>
>> I think we can have a very small library for just the initial
>> bootstrap phase. This won't require wrapping all of the radosgw-admin
>> code, just cutting out a small section that can be shared as a library
>> between radosgw-admin, ceph-mgr and the dashboard.
>> As for the realms/zonegroups/zones (multisite) configuration, I would
>> suggest using a REST API, as there are users who need such a REST API
>> anyway (they use their own in-house management UI).
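>>
>> (Hypothetical shape only -- none of these names exist today -- but the
>> bootstrap library could expose something about this small:)
>>
>>   # hypothetical thin, bootstrap-only surface shared by radosgw-admin,
>>   # ceph-mgr and the dashboard; method bodies would live in the small
>>   # section cut out of radosgw-admin
>>   class RgwBootstrap:
>>       def __init__(self, rados_handle):
>>           self.rados = rados_handle  # reuse the mgr's connection
>>
>>       def create_system_user(self, uid):
>>           """Create a system user; return its (access, secret) keys."""
>>           raise NotImplementedError
>>
>>       def get_admin_endpoint(self):
>>           """Discover where the admin REST API is served."""
>>           raise NotImplementedError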
>>
>> Will that work for you?
>
> I think that's the way to go.  Everything that can be done through
> the RGW REST API should be done through that route.
>
> John
>
>> Regards,
>> Orit
>>
>>
>>> John
>>>
>>>> As a comparison, the ongoing work to integrate CephFS, NFS-Ganesha,
>>>> and OpenStack is planning to have the manager automatically generate
>>>> new Ganesha instances as needed (by calling into Kubernetes container
>>>> creation, which it will rely on). ceph-ansible might or might not have
>>>> had something to do with that cluster existing, but as far as the
>>>> admin is concerned, Ansible does not matter when creating cloud shares.
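>>>>
>>>> (A sketch of that pattern, assuming the official kubernetes Python
>>>> client, an in-cluster mgr, and a pre-existing 'nfs-ganesha'
>>>> deployment; all names here are placeholders:)
>>>>
>>>>   from kubernetes import client, config
>>>>
>>>>   def scale_ganesha(namespace, replicas):
>>>>       # the mgr would run inside the cluster, so in-cluster config
>>>>       config.load_incluster_config()
>>>>       apps = client.AppsV1Api()
>>>>       # grow or shrink the ganesha deployment as shares come and go
>>>>       apps.patch_namespaced_deployment_scale(
>>>>           name='nfs-ganesha', namespace=namespace,
>>>>           body={'spec': {'replicas': replicas}})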
>>>>
>>>> Presumably we want to eventually be able to deploy RGW on existing
>>>> clusters the same way, via push buttons and automatic scaling where
>>>> appropriate, instead of having admins responsible for running a CLI to
>>>> enable each instance? Or am I just completely misunderstanding the
>>>> goals here? :)
>>>> -Greg



-- 

Matt Benjamin
Red Hat, Inc.
315 West Huron Street, Suite 140A
Ann Arbor, Michigan 48103

http://www.redhat.com/en/technologies/storage

tel.  734-821-5101
fax.  734-769-8938
cel.  734-216-5309