On Sat, 2007-07-14 at 00:55 +0200, Jeroen van Meeuwen wrote:
> Mike McGrath wrote:
> >
> > This is my worry too.  It's almost enough to make me not want to do
> > it for non Fedora projects, but that's just bad.  I'm hoping someone
> > here has a good, clever way to solve this issue.

The benefits of these new tools far outweigh the relatively slight
risks.  We really must step up and find a way to make it work.  My vote
is simple: we do the best we can, we spell out what security is in
place and what the risks are, and we put that in front of upstream
projects.  We ask them to agree (via email?) to the risk/reward balance
we present.

> So do I.  A great deal of security would be achieved by having just a
> small number of people actually know the GPG and RSA passwords, and
> have them manually trigger the full commit/push.  Then again, that
> requires human interaction and isn't fully automated.

... and introduces risks we can't predict or mitigate.

> I'm thinking of a way where the user's credentials could be used to
> trigger the signed commit and push without needing the GPG/RSA
> password for the user transifex, but I'm not sure if it's even
> remotely possible to like 'share' these keys and authorize different
> users to use them, without (again) compromising the security
> principle of these tokens.

Let us remember the caveat of best being the enemy of good enough.
Security risk assessment is never about, "No matter the cost, I will
secure this until it is unbreakable."  That guarantee comes from a pair
of wire cutters used on the Cat 5 cable between the server and the
switch.  Great for security, bad for business.
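For what it's worth, kanarip's idea doesn't have to be all-or-nothing.
One rough shape it could take -- purely a sketch, with made-up paths,
module names, and a made-up push helper, nothing that exists in
transifex today -- is a small broker that runs as the transifex user,
keeps the keys and passphrases to itself, and lets authorized local
users request exactly one action:

#!/usr/bin/env python
# Sketch only: a tiny "push broker" running as the transifex user.
# Key material and passphrases stay readable by this account alone;
# callers can ask for one thing -- "push this module" -- and never
# see the keys.  Paths, module names, and the push helper below are
# hypothetical placeholders.

import os
import socket
import subprocess

SOCKET_PATH = "/var/run/transifex/push.sock"   # placeholder path
ALLOWED_MODULES = {"example-module"}           # placeholder whitelist

def serve():
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    # Group permissions on the socket do the authorization: only
    # members of the trusted group can connect and trigger a push.
    os.chmod(SOCKET_PATH, 0o660)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        module = conn.recv(256).decode("utf-8", "replace").strip()
        if module in ALLOWED_MODULES:
            # Hypothetical helper that does the signed commit and push
            # using credentials only the transifex user can read.
            rc = subprocess.call(["/usr/local/bin/tfx-push", module])
            conn.sendall(b"ok\n" if rc == 0 else b"push failed\n")
        else:
            conn.sendall(b"refused\n")
        conn.close()

if __name__ == "__main__":
    serve()

The point is that nobody outside the transifex account ever handles the
GPG/RSA passphrase; whether a broker like that is worth building is
exactly the kind of trade-off we should spell out for upstream.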
At first glance, compromising an upstream SCM via our servers might
well be harder than attacking the upstream servers directly:

1. First gain access to the transifex server, which only has an address
   on the VLAN behind the firewall;
2. Compromise that box sufficiently to take control of the transifex
   process or to get a shell as the transifex user;
3. This is made harder still if we cook up an SELinux policy for
   transifex, which protects the rest of the system if there is a bug;
4. Begin compromising upstream SCMs -- corruption and deletion are the
   two real risks here, right?

Kanarip suggests human intervention decreases the risk.  To that I have
to add two concepts: social engineering, and whether we can trust those
users not to be doing all this from a compromised system.  Upstream
SCMs actually carry the same risks -- they let anywhere from a few to
hundreds or thousands of users have SCM access ... all of whom may be
on a compromised system ... all of whom are subject to social
engineering.

My back-of-the-envelope assessment says this:

* On average, Fedora Infrastructure boxen are going to be more secure
  than the average FLOSS project's (dedicated staff, for example);
* Our code that accesses upstream SCMs sits behind multiple layers of
  security;
* Our risk to upstream projects is the same as, or less than, the risk
  they already accept from every single user they give SCM access to.

Remember, a risk assessment has to balance the rewards.  In this case,
the rewards are ENORMOUS:

* Anyone notice how translate.fedoraproject.org and transifex have, in
  six weeks, solved many of the complaints people have about Launchpad
  and Rosetta?
* Do we think upstream projects are going to want the ability to add an
  army of translators?

Full disclosure of the security measures in place should be enough for
upstream to decide if the rewards are worth the risk.

- Karsten
-- 
Karsten Wade, 108 Editor        ^     Fedora Documentation Project
Sr. Developer Relations Mgr.    |  fedoraproject.org/wiki/DocsProject
   quaid.108.redhat.com         |          gpg key: AD0E0C41
////////////////////////////////// \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\