Hi John,

> A few major features we have planned include:
> * Standalone servers (internally defined users/groups)

No concerns here.

> * Active Directory Domain Member Servers

In the second case, what is the plan regarding UID mapping? Is NFS coexistence planned, or a concurrent mount of the same directory using CephFS directly?

In fact, I am quite skeptical, because, at least in my experience, every customer's SAMBA configuration as a domain member is a unique snowflake, and cephadm would need the ability to specify arbitrary UID mapping configuration to match what the customer uses elsewhere - and the match must be precise. Here is what I have seen or was told about:

1. We don't care about interoperability with NFS or CephFS, so we just let SAMBA invent whatever UIDs and GIDs it needs using the "tdb2" idmap backend. It's completely OK that workstations get different UIDs and GIDs, as only SIDs traverse the wire.

2. [not seen in the wild, the customer did not actually implement it, it's a product of internal miscommunication, and I am not sure if it is valid at all] We don't care about interoperability with CephFS, and, while we have NFS, the security guys would not allow running NFS non-kerberized. Therefore, no UIDs or GIDs traverse the wire, only SIDs and names. Therefore, all we need is to allow both SAMBA and NFS to use a shared UID mapping allocated on an as-needed basis by the "tdb2" idmap backend, and it doesn't matter that these UIDs and GIDs are inconsistent with what the clients choose.

3. We don't care about ACLs at all, and don't care about CephFS interoperability. We set ownership of all new files to root:root 0666 using whatever options are available [well, I would rather use a dedicated nobody-style uid/gid here]. All we care about is that only authorized workstations or authorized users can connect to each NFS or SMB share, and we absolutely don't want them to be able to set custom ownership or ACLs.

4. We care about NFS and CephFS file ownership being consistent with what Windows clients see. We store all UIDs and GIDs in Active Directory using the rfc2307 schema, and it's mandatory that all servers (especially SAMBA - thanks to the "ad" idmap backend) respect that and don't try to invent anything [well, they do - BUILTIN/Users gets its GID through tdb2]. Oh, and by the way, we have this strangely low-numbered group that everybody gets wrong unless they set "idmap config CORP : range = 500-999999".

5. We use a few static ranges for algorithmic ID translation using the "rid" idmap backend. Everything works.

6. We use SSSD, which provides consistent IDs everywhere, and for the few devices which can't use it, we configured compatible "rid" idmap ranges for use with winbindd. The only problem is that we like user-private groups, and only SSSD supports them (although we admit it's our fault that we enabled this non-default option).

7. We store ID mappings in non-AD LDAP and use winbindd with the "ldap" idmap backend.

I am sure other weird but valid setups exist - please extend the list if you can.

Which of the above scenarios would be supportable without resorting to the old way of installing SAMBA manually alongside the cluster? (Rough smb.conf sketches of scenarios 4 and 5 are in the P.S. below.)

--
Alexander E. Patrakov
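
P.S. For concreteness, here is a minimal sketch of what scenario 4 tends to look like in smb.conf terms. The CORP domain name and the 500-999999 range come from the option quoted above; the realm and the default-domain range are placeholders I made up, not anything a specific customer uses:

    [global]
        security = ads
        workgroup = CORP
        realm = CORP.EXAMPLE.COM

        # allocating backend for the default ('*') domain; this is where
        # BUILTIN/Users mentioned above gets its GID from
        idmap config * : backend = tdb2
        idmap config * : range = 3000000-3999999

        # take uidNumber/gidNumber from the rfc2307 attributes in AD
        idmap config CORP : backend = ad
        idmap config CORP : schema_mode = rfc2307
        # lowered so that the strangely low-numbered group still maps
        idmap config CORP : range = 500-999999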
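
And similarly for scenario 5 with the "rid" backend (again just a sketch - the domain name, the realm and all the ranges here are placeholders):

    [global]
        security = ads
        workgroup = CORP
        realm = CORP.EXAMPLE.COM

        idmap config * : backend = tdb
        idmap config * : range = 3000000-3999999

        # deterministic, algorithmic ID translation within a fixed
        # per-domain range; the range must match what every other
        # server in the environment uses
        idmap config CORP : backend = rid
        idmap config CORP : range = 1000000-1999999

The point being: blocks like these differ per customer and per domain, so cephadm would have to accept them essentially verbatim rather than generate them from a fixed template.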