I didn't appreciate the flexibility of known_hosts or ssh_known_hosts until following up on Kurth's response, so thanks for that. Personally, though, I lean toward #1.

As written, #1 seems to suggest having one key for the whole site. This I wouldn't do. Rather, whenever you add a second node and create a cluster, just copy the keys from node one to node two (a rough sketch follows below). Each cluster will have unique keys, and all applications on a cluster will share the same key. This preserves the notification for a genuine MitM attack, but doesn't require coordinated updates across the entire infrastructure.

Implementing this solution does involve a great wipe, when you sync all the existing clusters, but after that it becomes merely procedural as you build new clusters or update existing ones. Conceivably you could update the clusters one at a time, or in small batches, but I would plan it so customers see only one broadcast announcement regarding key changes. One great wipe doesn't inure users to the warning; three mini wipes on consecutive weekends could lull some.

All that said, some follow-up considerations. If all your users are on relatively few systems, then deploying a client-side ssh_known_hosts is more straightforward; it would not require users to understand or mess with known_hosts themselves. Similarly, if you have NFS /home directories or CIFS Windows profiles, where the network homogenizes the known_hosts experience, this solution gains favor.
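Coming back to the key copy itself: when you build node two, it's only a few commands. A minimal sketch, assuming hypothetical node names cl1-node1 and cl1-node2, root ssh between them, and stock OpenSSH paths (adjust for your platform):

    # On the new node (cl1-node2), pull the existing cluster host keys
    # from node one, then restart sshd so it presents the same identity.
    scp 'root@cl1-node1:/etc/ssh/ssh_host_*key*' /etc/ssh/
    chmod 600 /etc/ssh/ssh_host_*_key       # private keys stay root-only
    chmod 644 /etc/ssh/ssh_host_*_key.pub
    /etc/init.d/sshd restart                # or however your platform restarts sshd

After that, an application IP can float between the nodes without tripping the warning, since both nodes present the same key.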
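On the client side, the relevant ssh_config knob is GlobalKnownHostsFile, which defaults to /etc/ssh/ssh_known_hosts. A sketch of populating it, using the same hypothetical names plus an assumed application alias and address (app1.example.com, 10.0.0.15):

    # Grab each cluster's public keys once...
    ssh-keyscan -t rsa,dsa cl1-node1 >> /etc/ssh/ssh_known_hosts

    # ...then widen the first field of each line into a comma-separated
    # list of every name and address the cluster answers to, e.g.:
    #   cl1-node1,cl1-node2,app1.example.com,10.0.0.15 ssh-rsa AAAAB3...

Push that one file out to your client systems (or park it in the NFS-mounted space) and users never have to touch their own known_hosts at all.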
Speaking against updating known_hosts are differing clients on differing platforms. How does PuTTY handle known_hosts? What are your current procedures for migrating applications? If an application move requires a server change, even if the IP moves with it, what do customers do when the key changes?

-- Jess Males

-----Original Message-----
From: listbounce@xxxxxxxxxxxxxxxxx [mailto:listbounce@xxxxxxxxxxxxxxxxx] On Behalf Of Steve Bonds
Sent: Thursday, September 17, 2009 7:53 PM
To: secureshell@xxxxxxxxxxxxxxxxx
Subject: Clusters, known_hosts, host keys, and "REMOTE HOST IDENTIFICATION HAS CHANGED"

SSH List-dwellers:

I'm using OpenSSH in an environment with lots of clusters. These clusters have IP addresses which are associated with a particular application rather than with a particular host. Oftentimes (especially for file transfers) it's helpful to ssh/scp to the IP address associated with the application rather than the one associated with the host. However, given that each host has its own host key, we frequently get:

WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!

This of course panics the user the first time they see it, and causes them to ignore it from the second time onward -- neither of which is a desired behavior.

I've thought about several solutions to this, including:

1) Make all the host keys the same (hundreds of hosts, kind of diminishes the value of a host key...)
2) Configure ssh to ignore host key changes (harder than you might think, since new ssh clients are often brought in)
3) Give each application its own dedicated ssh and host key (tricky to set up and monitor, fairly high effort)
4) Tweak OpenSSH so that it will accept any host key from a list (requires some programming effort, might not be a good idea)
5) Other?

What do you all think of option 4? In particular, I was thinking there might be a way to allow hosts on the same subnet to simply prompt to add an additional key for the same DNS name rather than popping up the man-in-the-middle warning. If there were multiple keys present in known_hosts for a given hostname, any of them would be accepted.

Could this be done without weakening the host security of OpenSSH? Or should I instead just hold The Great Re-Keying and go with option 1?

I appreciate any advice. Thanks,

-- Steve Bonds