Auth to Multiple LDAP Domains? (attempt #2)

Hey all-

I tried sending this to the list on the 15th, but it appears to have been
eaten somewhere without a bounce; it never even made it to the archive.
So I'm trying again from an alternate address, in plaintext.

Original message follows:

#########################

Hello, all!

I'll be the first to admit that I'm fairly inexperienced with Ceph
internals, so my apologies if this is Ceph 101 that I'm not understanding.

It is my intent to provide multiple RGW endpoints, each of which uses a
separate LDAP server (Active Directory, specifically) with a different
domain/DIT structure/DSE for authentication, all against the same
cluster (and ideally the same pool).

I have indeed read through the relevant documentation article[0], and
while using a single LDAP source for authentication is straightforward,
I'm having trouble translating that into my desired end state.
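For context, the single-source setup from the docs[0] boils down to a
handful of rgw_ldap_* options in ceph.conf. A minimal sketch (the
instance name, URI, and all DNs below are placeholders, not my actual
values):

```ini
[client.rgw.gateway1]
rgw_s3_auth_use_ldap = true
rgw_ldap_uri = ldaps://dc1.example.org:636
; service account used for the search bind (file holds its credential)
rgw_ldap_binddn = "uid=svc-rgw,cn=users,dc=example,dc=org"
rgw_ldap_secret = /etc/ceph/ldap-secret
; base and attribute used to locate the authenticating user
rgw_ldap_searchdn = "cn=users,dc=example,dc=org"
rgw_ldap_dnattr = "uid"
```

What I can't see is how to vary rgw_ldap_uri/rgw_ldap_searchdn per
endpoint within one cluster, which is what prompts the questions below.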

I am running a manual (for Reasons™) installation (i.e. not via e.g.
cephadm) on AlmaLinux[1] 8.5, using the Storage SIG[2] repositories. I
am running the Pacific release (16.2.7).

So here are some questions I currently have:


I. Is what I want to do even possible? (I suspect so, but am unclear on
the best way to accomplish it; see below.)

II. If I. is true, how would this be accomplished?

  A. It seems that this would be accomplished with Ceph realms[3]. Can
this be done with a single zone/zonegroup, or does a zonegroup and zone
need to be created for each LDAP DSE (and/or each realm)?

    1. In other words, does each realm require its own distinct
zone/zonegroup?

    2. And are realms the proper way to go about this?

III. I obviously can run multiple RGW daemons per host, but can a single
RGW daemon serve multiple realms (if realms are indeed the way to
accomplish this)?
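If realms do turn out to be the mechanism, my current (possibly wrong)
understanding of the bootstrap sequence, sketched with made-up names,
would be something like:

```shell
# Hypothetical sketch - realm/zonegroup/zone names are placeholders.
radosgw-admin realm create --rgw-realm=corp-a --default
radosgw-admin zonegroup create --rgw-zonegroup=corp-a-zg \
    --rgw-realm=corp-a --master --default
radosgw-admin zone create --rgw-zonegroup=corp-a-zg \
    --rgw-zone=corp-a-z1 --master --default
radosgw-admin period update --commit

# Repeat per LDAP domain, then point each RGW instance at its realm
# in ceph.conf:
#   [client.rgw.gateway-a]
#   rgw_realm = corp-a
#   rgw_zonegroup = corp-a-zg
#   rgw_zone = corp-a-z1
```

...but confirmation (or correction) of that flow would be appreciated.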

IV. Are there plans to support direct userbind auth instead of
requiring a service account for LDAP auth?

  A. It is commonly and colloquially agreed that service accounts are
not best practice for a slew of reasons; supporting direct userbind, if
possible, would be ideal. I understand that this complicates the
codebase (especially around things like user account caching), but
supporting direct userbind would remove a non-negligible amount of
operational overhead.
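To make the distinction concrete, here's a toy Python model of the two
flows. The dict stands in for the DSA, no real LDAP library is involved,
and all DNs are made up; it only illustrates which credentials each flow
requires.

```python
# Toy model of the two LDAP auth flows; the dict stands in for the DSA.
DIRECTORY = {
    "uid=alice,ou=people,dc=example,dc=org": "alice-password",
    "uid=svc-rgw,ou=services,dc=example,dc=org": "svc-password",
}


def bind(dn, password):
    """Simulate an LDAP simple bind: succeeds only with correct credentials."""
    return DIRECTORY.get(dn) == password


def auth_service_account(username, password):
    """Service-account flow: bind as the service account, 'search' for the
    user's DN, then bind as the user. Requires a standing privileged credential."""
    if not bind("uid=svc-rgw,ou=services,dc=example,dc=org", "svc-password"):
        return False
    user_dn = f"uid={username},ou=people,dc=example,dc=org"
    if user_dn not in DIRECTORY:
        return False
    return bind(user_dn, password)


def auth_userbind(username, password):
    """Direct userbind flow: construct the DN from a template and bind
    immediately. No service account (or its secret) to manage."""
    user_dn = f"uid={username},ou=people,dc=example,dc=org"
    return bind(user_dn, password)
```

Both flows authenticate the same users; the difference is that the
second never needs a stored service credential, which is exactly the
operational overhead I'd like to avoid.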

V. I feel that I have a decent grasp of the hierarchy/architecture as
laid out by this article[4], but I still find myself struggling with the
RADOS-specific hierarchy (e.g. zones vs. realms). Is there any
recommended additional documentation (canonical or third-party) on
best-practice architecture and design of a cluster that incorporates
more complex RADOS setups, aside from the Ceph documentation for
multisite[3]?

  A. For instance, are realms primarily geographic in nature, as
zones/zonegroups are? Or are they a more abstract grouping that may or
may not span multiple geographic locations?

  B. What defines a zone vs. a zonegroup (aside from the obvious "a
zonegroup is a collection of zones"), and where would one split them?
Are zonegroups typically akin to regions, with zones for specific data
centers within a region? Etc.


Many thanks in advance!


[0] https://docs.ceph.com/en/pacific/radosgw/ldap-auth/
[1] A project that intends to essentially be what CentOS was before
CentOS Stream was introduced; https://almalinux.org/
[2] https://wiki.centos.org/SpecialInterestGroup/Storage
[3] https://docs.ceph.com/en/pacific/radosgw/multisite/
[4] https://docs.ceph.com/en/pacific/architecture/

-- 
brent saner
https://square-r00t.net/
GPG info: https://square-r00t.net/gpg-info
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



