On 05/31/2012 02:38 PM, Alan Cox wrote:
> It's of course all a bit of a joke because it's then a simple matter of
> using virtualisation to fake the "secure" environment and running the
> "secure" OS in that 8)
The distributions can review the hypervisor code (and sign it as a mark
of trust), and the kernel can then verify its integrity at runtime (just
like the firmware verifies the bootloader's integrity, and the
bootloader the kernel's). The hypervisor can in turn emulate secure boot
for its virtual machines and continue the chain. Note that you're
already halfway there with KVM, since most of its code runs in the
kernel itself.
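To make the chain idea concrete, here is a rough Python sketch. It is
purely illustrative: the component names are made up, and real secure
boot verifies signatures against keys baked into the firmware rather
than bare hashes kept in a dictionary.

import hashlib

def sha256(data):
    return hashlib.sha256(data).hexdigest()

# Stand-ins for the real binaries (hypothetical contents, demo only).
components = {
    "bootloader.efi": b"reviewed bootloader build",
    "vmlinuz":        b"reviewed kernel build",
    "hypervisor.bin": b"reviewed hypervisor build",
}

# Toy manifest: the hash each stage expects for the next link in the
# chain. In reality these would be signatures checked against trusted
# keys, not hashes computed from the very same bytes.
manifest = {name: sha256(blob) for name, blob in components.items()}

def verify_and_load(name, blob):
    if sha256(blob) != manifest[name]:
        raise SystemExit("refusing to load tampered " + name)
    print("chain ok:", name)

# firmware -> bootloader -> kernel -> hypervisor, each link checked in
# turn; the hypervisor would then repeat the same dance for its guests.
for name in ("bootloader.efi", "vmlinuz", "hypervisor.bin"):
    verify_and_load(name, components[name])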
Other hypervisors wouldn't be any different from any other package
offered by the distribution, at least if the maintainers provide
security support for all of them (as is the case for most serious
distros). It would be pointless to sign and verify every binary, library
and script on the system if the code isn't trusted.
Mature infrastructure for integrity checking already exists: most IDSes
do file change tracking (they would need to be explicitly supported by the
kernel though), but see the Linux Integrity Subsystem (in the form of
the Integrity Measurement Architecture and the Extended Verification
Module).
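For instance, once IMA is enabled you can already dump what the kernel
has measured since boot. A minimal sketch, assuming securityfs is
mounted at /sys/kernel/security (the usual default) and that you run it
as root; the field layout shown is the common ima-ng one and may differ
with other templates:

import sys

MEASUREMENTS = "/sys/kernel/security/ima/ascii_runtime_measurements"

def dump_measurements(limit=10):
    try:
        with open(MEASUREMENTS) as f:
            lines = f.readlines()
    except OSError as e:
        sys.exit("cannot read IMA measurement list: %s" % e)
    for line in lines[:limit]:
        # Typical ima-ng entries look like:
        #   PCR  template-hash  template  filedata-hash  path
        fields = line.split(None, 4)
        if len(fields) == 5:
            pcr, _, template, filehash, path = fields
            print(pcr, template, filehash, path.strip())

if __name__ == "__main__":
    dump_measurements()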
(Repost: http://mjg59.dreamwidth.org/12368.html?thread=399184#cmt399184)
> No. I would assume the Fedora project pays the $99, and then distributes
> the signed bootloader component, with the Fedora keys built in.
> I don't believe that would be compliant with the Fedora Project
> definitions of freedom.
[Reply not directed at anyone in particular, I'm just rambling from now on]
Let's look at it from another perspective. The obvious alternative is
for Fedora and every other distribution to ask every single hardware
vendor to include their own key in their firmware. It's impossible,
plain and simple. The communication channels would have to be
super-efficient (they never are with that much bureaucracy), the
policies extra-clear, and the upgrade processes absolutely smooth (to
accommodate new software distributions, key expiration and key theft).
I can almost see an entire stack of protocols and a long
standardization process coming out of it, and we would still exclude
offline or rarely connected users who can't keep up with slower
channels.
So, as always in this kind of situation, we add another level of
indirection, in the form of a trust broker. Hardware vendors trust a
few big brokers in the software industry (no more than a dozen would be
ideal), and these brokers in turn place their trust in software
distributors and make it their job to see that those distributors don't
abuse that trust, blacklisting the ones who do. Not every community
would have the resources to do that (although I think the Linux
Foundation should act as a broker). Companies in the software industry
can, and they already have the contacts with the OEMs, so let them do
it.
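In code, the two-level chain itself is trivial. Here is a toy sketch
using the pyca/cryptography package; the key names and the use of raw
RSA signatures are invented for the example, and a real deployment
would of course use X.509 certificates and Authenticode-style signing
of PE binaries rather than this:

# pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def new_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)

def sign(key, data):
    return key.sign(data, padding.PKCS1v15(), hashes.SHA256())

def verify(pub, sig, data):
    try:
        pub.verify(sig, data, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# The broker key is what the hardware vendors ship in their firmware.
broker_key = new_key()

# A distributor (a distro, say) gets its public key endorsed by the broker.
distro_key = new_key()
distro_pub = distro_key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo)
broker_endorsement = sign(broker_key, distro_pub)

# The distributor signs its bootloader; the firmware checks both links.
bootloader = b"reviewed bootloader build"
bootloader_sig = sign(distro_key, bootloader)

trusted = (verify(broker_key.public_key(), broker_endorsement, distro_pub)
           and verify(distro_key.public_key(), bootloader_sig, bootloader))
print("boot allowed" if trusted else "boot refused")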
If anything, $100 for such a service with no apparent human
verification is probably too low. Many attackers would gladly pay up to
have their malware compromise as many machines as possible before they
get blacklisted. Infected machines can certainly be kept by the malware
itself from receiving updates, so revocations will only stop the
botnets from spreading, not shut them down. If the attackers can spare
$100 every few months (they probably make far more from their botnet
business) to get a new key, they might just be alright. The system may
stop the weakest players, though.
Another big point of making people pay to have their components
signed, aside from financing the operation properly (necessary to keep
the root key really secure and to push revocations out in time) and
discouraging small attackers, is that it's increasingly difficult to
pay electronically while staying anonymous, at least as long as no
broker trusted by the vendors starts signing components in exchange for
anonymous money (BTC, for example). Obviously, and just like with the
X.509 CA hierarchy, the weakest link in the chain can fail the whole
system (which is why there shouldn't be too many brokers; see the
current struggles of the web browsers and last year's f*-ups).
Now, we only have Microsoft, but hopefully this is only the beginning.
Aside from all the obvious potential abuses (blocking free software
arbitrarily), the technology is actually quite sound if we can control
it. And the good news is, we can.
We can and we must. Even though this all looks relatively good(-ish),
in reality it's merely good enough (a) for the average user, who
doesn't mind having to trust big corporations to more or less know what
he wants and to act in his favor, and (b) for the average big
distribution, which doesn't mind giving away contact information and a
little bit of money. (BTW, if you insist you don't fit that user
description yet don't use something like Monkeysphere for your web
browsing, then you're a hypocrite.)
The good we get out of it is that we can automate every step of the
verification process; we all know that security has to be traded for
convenience, so here we pay for completely transparent security with a
sub-optimal trust model that is still way better than nothing (even
though HTTPS with its X.509 PKI is horribly broken, it has still
undeniably prevented an extraordinary number of real attacks).
But for the more concerned users (and that doesn't only mean technical
people), good enough doesn't cut it. The bad that comes with it is
inconvenient enough on its own, so we figure that while we're at it, we
might as well trade that inconvenience for actual security rather than
worries and doubts. For now, I think most power users will generate
their own keys and vet each organization by hand (entrusting them via
boot loader shims or by signing components directly), taking over the
role of the big corporations, but with only one easy customer:
themselves.
In the (far, but not too far) future, I see value in a completely
decentralized model where components are signed by individuals and
small organizations, with fancy algorithms to determine whether you
should or should not run certain pieces of software, based on what the
people you trust think about them (with configurable criteria such as
"the code must have been reviewed by at least two trustworthy people in
the last X weeks"). And not just for boot loaders and kernels, but for
all software packages, as I already explained. (Untrusted software
could automatically be run in a sandbox of some appropriate level, for
example with SELinux or LXC.)
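Just to show how simple such a policy check could be, here is a
completely hypothetical sketch (the data layout and identities are made
up) of the "reviewed by at least two trustworthy people in the last X
weeks" rule:

from datetime import datetime, timedelta

# People whose reviews I have decided to trust (hypothetical identities).
trusted_reviewers = {"alice", "bob", "carol"}

# Reviews attached to a package, e.g. fetched from some distributed
# store; the format is invented for the example.
reviews = [
    {"package": "bootloader-2.00", "by": "alice",
     "date": datetime(2012, 5, 20)},
    {"package": "bootloader-2.00", "by": "bob",
     "date": datetime(2012, 5, 28)},
    {"package": "bootloader-2.00", "by": "mallory",
     "date": datetime(2012, 5, 29)},
]

def run_allowed(package, now, min_reviewers=2, max_age_weeks=8):
    """Allow the package only if enough trusted people reviewed it recently."""
    cutoff = now - timedelta(weeks=max_age_weeks)
    recent = {r["by"] for r in reviews
              if r["package"] == package
              and r["by"] in trusted_reviewers
              and r["date"] >= cutoff}
    return len(recent) >= min_reviewers

print(run_allowed("bootloader-2.00", datetime(2012, 5, 31)))  # True
# Anything that fails the check would be run sandboxed (SELinux, LXC,
# ...) instead of being rejected outright.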
Well, we're most certainly not there yet, but I really do see potential
in that tech. I guess it's up to us, as always; I just hope we're not
gonna make the same mistakes as we did with the web again.
...
Or maybe everybody's gonna disable it, what do I know anyway ;).
--
t