C GG <cgg0007@xxxxxxxxx> writes:

> On Wed, May 30, 2018 at 2:50 PM, Stephen Frost <sfrost@xxxxxxxxxxx> wrote:
>
>> Greetings,
>>
>> * C GG (cgg0007@xxxxxxxxx) wrote:
>> > On Wed, May 30, 2018 at 12:04 PM, Stephen Frost <sfrost@xxxxxxxxxxx> wrote:
>> > > What's the reason for wishing for them to "be able to type in a
>> > > password"? With GSSAPI/Kerberos, users get true single-sign-on, so they
>> > > would log into the Windows system with a password and then have a TGT
>> > > which can be used to authenticate to other services without having to
>> > > type in their password over and over again.
>> >
>> > By and large, we're using pre-existing tools that would have to be heavily
>> > modified to co-opt GSSAPI as an authentication method. For some
>> > tools/applications, it's just not practical to use a ticket. But the
>> > username/password paradigm is well supported. Most of these tools aren't
>> > being used on Windows machines. That's not to say that Linux and macOS
>> > don't have robust Kerberos tools available for use, but thinking that
>> > Kerberos tickets are just floating out there in login-space waiting to be
>> > used by psql and other tools isn't really the case in our environment.
>>
>> This argument doesn't really hold much weight. Anything using libpq is
>> likely to work well with GSSAPI and most languages base their access to
>> PG on libpq. Ensuring that a ticket is available also isn't hard with
>> k5start or similar. Even proxying tickets with mod_auth_kerb or similar
>> isn't all that difficult to get going to leverage SPNEGO.
>
> Sounds complicated.
>
>> > The main reason for moving to LDAP(S) is the ability to enforce password
>> > quality and rotation without the risk of disclosure
>> > (https://www.postgresql.org/docs/10/static/passwordcheck.html) ...
>> > Allowing pre-hashed passwords to be sent to the back-end circumvents any
>> > protections passwordcheck might give.
>> > Plus, passwordcheck isn't available in all PostgreSQL environments
>> > (I'm specifically thinking of AWS RDS).
>>
>> This seems entirely out-of-place and not related to GSSAPI (pre-hashed
>> passwords..?). Further, password quality and rotation would be able to
>> be handled by AD instead of trying to do it in PG, though this also
>> seems to be conflating different things (are you talking about access
>> from an application, whose password should be randomly generated to
>> begin with, or users..?).
>
> Correct. Password quality and rotation need to be handled in AD instead
> of trying to do it in PG.
>
> I'm feeling attacked for trying to work with the tools I have available
> to me and the constraints I have been given.
>
>> > Unless I've missed something, GSSAPI is still out of consideration if
>> > we're having to supply username/password combinations in connection
>> > strings.
>>
>> There continues to be some confusion here as with GSSAPI you
>> specifically wouldn't need to include passwords in connection strings.
>>
>> > I am still wondering what can we do to speed LDAP(S) up? Is there a
>> > speedier authentication delegation paradigm that utilizes
>> > username/password from the client?
>>
>> Passing passwords around between different systems for authentication is
>> likely to be expensive and insecure. Using SCRAM on PG would at least
>> avoid the call out from the PG server to the LDAP server, but then you
>> would potentially have different passwords on the different systems.
>> The solution to these issues is to move away from passing passwords
>> around, as Active Directory did.
>>
>> Thanks!
>>
>> Stephen
>
> Please let me be clear, this is not a question about whether or not to
> use passwords. This is a question of how to determine the cause of and
> remedy a slowdown retrieving data from PostgreSQL when using LDAP(S) to
> authenticate PostgreSQL users.
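For what it's worth, the k5start approach Stephen mentions is a small amount of setup rather than an application change. A rough sketch (the keytab path, principal and connection details here are made-up placeholders, not anything from this thread):

```shell
# Keep a ticket cache fresh from a keytab and run a libpq client under it.
# -f: read keys from the (hypothetical) keytab
# -U: take the client principal from the keytab
# -K 60: re-check/renew the ticket every 60 minutes while the command runs
k5start -f /etc/security/app.keytab -U -K 60 -- \
    psql "host=db.example.com dbname=appdb"
```

The client side then needs no password at all; on the server side this pairs with a `gss` entry in pg_hba.conf instead of the `ldap` one.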
> One of the sideline questions would be how to achieve the same effect by
> using a different scheme. I should further clarify that a major
> requirement would be that the scheme would need to work in our current
> environment without having to re-engineer the client applications. That
> would entail the need to pass a username and password as we have
> traditionally done.

You could see if using openLDAP performs better. In our environment, we
have both AD and openLDAP. We use the AD password sync service to
synchronise passwords across AD and LDAP. Attributes in both systems are
managed by MS Forefront.

Our environment has a large mix of technologies - servers are roughly
evenly split between Linux and MS - still probably more Linux, though MS
has been increasing in recent years. Databases are a mix of Oracle and
Postgres plus a smattering of MySQL. Staff numbers are around 3k with
about 60% on MS, 35% OSX and 5% Linux. Client base is about 80k.

The reason we use both openLDAP and AD is because there are differences
between the two which are important for some of our applications (for
example, attributes which are single-valued under LDAP standards but can
be multi-valued under AD) and because we need additional schemas which
are easy to implement in standards-compliant LDAP, but difficult in AD.
We also found that when just requiring LDAP functionality, openLDAP
outperformed AD.

How easily this can be done in your environment will depend on your
identity management solution. Depending on what that is, it may be as
easy as just adding another downstream target and a few mapping rules.
In that case, it would probably be an overall win. However, if your IAM
system cannot manage things easily, this is probably not practical.

There has been another thread regarding LDAP performance where the issue
looks like it could be a DNS related problem.
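To make that test concrete: the `ldapserver` option in pg_hba.conf is the one place a hostname would be swapped for an IP, so DNS can be taken out of the picture without touching the clients. Something along these lines (the hostname, IP, network and base DN are illustrative placeholders):

```
# pg_hba.conf: search+bind LDAP auth against a named server (hypothetical values)
host  all  all  10.0.0.0/8  ldap  ldapserver=ldap.example.com ldapbasedn="dc=example,dc=com" ldapsearchattribute=sAMAccountName

# Same entry with the server's IP address, to rule DNS in or out
host  all  all  10.0.0.0/8  ldap  ldapserver=192.0.2.10 ldapbasedn="dc=example,dc=com" ldapsearchattribute=sAMAccountName
```

If login latency drops noticeably with the second form, the problem is in name resolution rather than in AD or PostgreSQL itself.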
It seems establishing connections is slow when the LDAP address is
specified using a name and faster when just an IP address is used. This
could be something else you should look at.

We had an issue a while back where our central IT provider made changes
to DNS to improve security and enable better handling of misbehaving
clients - essentially, it was a DNS throttling configuration which would
temporarily block requests from an IP if the number of requests being
made was above a specified threshold. This caused some initial problems
for us as we found some application libraries did not perform proper DNS
caching and would regularly exceed the threshold. It also took some
trial and error to get the right watermark for the throttling.

A simple test like using the IP address rather than the name would
likely help to identify whether DNS related issues could be the cause or
whether it is just an AD specific issue. Definitely check AD logs as
well - the issue could simply be that adding a new system has increased
demand sufficiently to degrade performance of AD (though I would expect
there would be complaints from others outside the DB area if this was
the case).

The GSSAPI approach is not as complicated as it sounds, but it can be
affected by environment/infrastructure architecture and it will be
critical to ensure you have good time synchronisation. This can be
somewhat challenging in hybrid environments where you have a mix of
local and remote services. When it all works, it is great, but when you
do have a problem, diagnosis can be challenging.

The overall approach of having one identity with one password per entity
is IMO the right approach and your only hope for good password policy
application. However, getting to that point can be very challenging.

--
Tim Cross