Re: RGW/ServiceMap etc

On Fri, Mar 9, 2018 at 3:44 PM, Sage Weil <sweil@xxxxxxxxxx> wrote:
> On Fri, 9 Mar 2018, Casey Bodley wrote:
>> On 03/09/2018 11:55 AM, Sage Weil wrote:
>> > On Fri, 9 Mar 2018, Casey Bodley wrote:
>> > > I haven't done much with ceph-ansible, but I'm imagining it looking like
>> > > this:
>> > >
>> > > In the rgw configuration, we'd have a variable like
>> > > radosgw_management_user
>> > > that would trigger a 'radosgw-admin user create' command to create it and
>> > > remember its keys for use during the ceph-mgr deployment.
>> > >
>> > > If the ceph-mgr deployment has to happen first, it could always generate
>> > > its
>> > > own secret/access keys (which is trivial to do), and supply them later
>> > > during
>> > > rgw deployment via 'radosgw-admin user create --access-key=X --secret=Y'.
>> > I think this is missing the bigger picture.  Setting aside the key issue
>> > for a minute, there needs to be some API endpoint that allows you to
>> > manipulate the zones/zone groups/realms (e.g., to create the radosgw
>> > cluster to begin with).  Creating an initial key for that zone is just one
>> > piece of that.
>> >
>> > For example, a dashboard user should be able to click on the RGW tab and
>> > create a new realm or zone and then kick off work to instantiate the
>> > radosgw daemons to serve it (via kubernetes, ansible, or whatever).
>>
>> Hi Sage,
>>
>> I didn't know that we were looking to drive new cluster deployments through
>> ceph-mgr. But I think that the multisite configuration steps to make that work
>> belong in the deployment tool itself. Ali has done work on this for
>> ceph-ansible at https://github.com/ceph/ceph-ansible/pull/1944, which runs all
>> of the radosgw-admin commands on the new cluster to add it to an existing
>> multisite realm.
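
[For context, the join sequence that PR automates is approximately the
following. The URLs, zone names, and keys are placeholders, and run=echo
keeps this a dry run that just prints the commands:]

```shell
# Approximate radosgw-admin sequence for adding a new cluster to an
# existing realm as a secondary zone; endpoints, zonegroup/zone names,
# and system-user keys below are all placeholders.
join_existing_realm() {
    run="echo"  # dry run: print the commands instead of executing them
    $run radosgw-admin realm pull --url=http://primary.example.com:8000 \
        --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
    $run radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
        --endpoints=http://secondary.example.com:8000 \
        --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
    $run radosgw-admin period update --commit
}
join_existing_realm
```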
>
> I think the general goal is that any management functions beyond the
> initial bootstrap of the cluster (mon + mgr) can be driven via the
> management ui.  But even setting aside the multi-cluster parts, the very
> first thing I would expect to see on the RGW pane of the dashboard is a
> view of the realms, zonegroups, and zones, with a bunch of 'create'
> buttons.  And the first thing you'd see on a fresh cluster is no zones--I
> don't think we want to force the user to predeclare that they will be
> creating a new realm/zg/zone when they create the cluster.
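
[Concretely, the sequence behind those 'create' buttons would be roughly
the following sketch; gold/us/us-east are illustrative names, not
defaults, and run=echo keeps it a dry run:]

```shell
# Roughly the radosgw-admin sequence a dashboard "create realm/zone"
# button would need to wrap; gold/us/us-east are illustrative names.
bootstrap_rgw_realm() {
    run="echo"  # dry run: print the commands instead of executing them
    $run radosgw-admin realm create --rgw-realm=gold --default
    $run radosgw-admin zonegroup create --rgw-zonegroup=us --master --default
    $run radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
        --master --default
    $run radosgw-admin period update --commit
}
bootstrap_rgw_realm
```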
>
> Even for the multi-cluster parts, I would expect to see a limited view of
> that state.  The RBD panel, for instance, shows all the rbd-mirror
> daemons and their state.  Operations like switching masters or
> triggering a period change or whatever seem like a natural fit here.
>
> Even if we decide these operations don't fit or don't belong in the
> per-cluster dashboard, we'll want them in some meta-dashboard (e.g.,
> cloudforms), and we'll want an API that that system can trigger to make
> things happen.  That could be ansible, yes, but it seems like we'd want
> to keep our options open.

Wanted to pull this out a little more and see if everybody's on the same page.

John, your initial email was very specific (so much so that I'm not
entirely sure which actual problem you're interested in here, though I
think the admin API being discussed is how we create RGW users?). But
I'd assume one of the most basic functions the manager is
interested in is creating new RGW instances within a cluster, whether
there's already an RGW zone or whether the admin just clicked a
"create S3 service" button.
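
A minimal sketch of the pre-generate-then-supply flow mentioned earlier
in the thread (the uid "dashboard" and the key lengths are assumptions;
the final line only prints the eventual invocation rather than running it):

```shell
# Pre-generate S3-style credentials the way a mgr module could, before
# any radosgw-admin call, then hand them to 'user create' later.
# The uid "dashboard" and key lengths are illustrative assumptions.
gen_key() {
    # random alphanumeric string of length $1 (Linux /dev/urandom)
    tr -dc 'A-Za-z0-9' </dev/urandom | head -c "$1"
}
ACCESS_KEY=$(gen_key 20)
SECRET_KEY=$(gen_key 40)
# dry run: print the radosgw-admin command that would consume the keys
echo radosgw-admin user create --uid=dashboard --display-name=dashboard \
    --access-key="$ACCESS_KEY" --secret="$SECRET_KEY" --system
```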

As a comparison, the ongoing work to integrate CephFS, NFS-Ganesha,
and OpenStack is planning to have the manager automatically generate
new Ganesha instances as needed (by calling in to Kubernetes container
creation, which it will rely on). ceph-ansible may or may not have been
involved in creating that cluster, but as far as the admin is concerned,
Ansible plays no part in creating cloud shares.

Presumably we want to eventually be able to deploy RGW on existing
clusters the same way, via push buttons and automatic scaling where
appropriate, instead of having admins responsible for running a CLI to
enable each instance? Or am I just completely misunderstanding the
goals here? :)
-Greg