Re: https and self signed




On Mon, June 20, 2016 13:16, Gordon Messmer wrote:
> On 06/20/2016 07:47 AM, James B. Byrne wrote:
>> On Sat, June 18, 2016 18:39, Gordon Messmer wrote:
>>
>>> I'm not interested in turning this in to a discussion on
>>> epistemology.
>>> This is based on the experience (the evidence) of some of the
>>> world's foremost experts in the field (Akamai, Cisco, EFF,
>>> Mozilla, etc).

I would rather look to Bruce Schneier and Noam Chomsky for guidance
before I would take security advice from organisations that have
already been shown to be compromised in matters of their clients'
security -- the EFF being the sole exception in the list provided.
Or so I presently believe.

>> Really? Then why did you forward your reply, a private message, to
>> a public mailing list if not to do exactly what you claim you wish
>> to avoid?
>
> Accidents happen.  I didn't intentionally mail you off-list,
> and when I noticed that I had, seconds later, I re-sent the
> message to the list, expecting that you'd notice and understand
> that I intended to keep the conversation on the list.
>

Except that I get the list as a digest, which means that your
assumptions were wrong.  Funny that, think you not?

> ..which isn't relevant to the question of what you consider "evidence"
> of security practice implications.
>
> Look, go to https://www.google.com/ right now and tell me what you
> see.

A snoop that self-signs its own certificates?

> Do you suddenly distrust the internet's single largest domain?  Do you
> think they implement poor security practices?
>

My distrust of Google developed over many years.  There was nothing
sudden about it.  But it is deep now.

>>> For someone who wants "evidence" you make a lot of unsupported
>>> assertions.  You do see the irony, don't you?

I assert my opinions, if that is what you are referring to.  I do not
claim them to be fact.  I believe them to be true but I readily admit
that I may be wrong.  Indeed, I most certainly must be wrong in some
of them.  My difficulty is determining which ones.

However, I have formed my opinions on the basis of long-term exposure
to security matters, both pre- and post-Internet.  And I have seen
the same thoughtless enthusiasm for things shiny and different in the
security community before: things adopted and put into practice
without even the most cursory trials and evaluations of effectiveness
and efficacy -- not to mention, on some occasions, lawfulness.
Sometimes I have had to deal with the consequences of those choices
at the pointy end of the stick.  Thus, if I am to adopt a different
point of view, I require something in the way of measurable
supporting evidence to show that I am wrong and that others are
right.

>> The difference is that I state this is my opinion and I do not claim
>> it as a fact.  Your statement claimed a factual basis.  I was
>> naturally curious to see what evidence supported your claim.
>
> Citation required.
>
> Allow me an example.  To quote you:
> "The usual way a private key gets compromised is by theft or by
> tampering with its generation.  Putting yourself on a hamster wheel of
> constant certificate generation and distribution simply increases the
> opportunities for key theft and tampering."
>
> Now, when you asked "what possible benefit accrues from changing
> secured device keys on a frequent basis?" I pointed you to
> letsencrypt's documentation, which describes the benefits of
> 90-day certificates.

Having actual software in the possession of users rendered unusable
by a policy decision implemented in the name of security is not
beneficial.  Referring to others' self-justification of measures they
have already implemented is not evidence; it is argument.  Argument
has its place, provided that one accepts the fundamental postulates
of the positions being argued, and those postulates, in this case,
require evidence.  Asserting that these measures solve certain
perceived flaws, without addressing the costs of those measures, is a
one-sided argument and not very convincing in my opinion.

Refusing to deal with that is simply ignoring the elephant in the room.
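
To make the cost side of that concrete: every shortening of the
certificate lifetime adds renewal and monitoring work that someone,
or some script, must get right forever after.  A rough sketch of the
sort of check that then becomes mandatory (Python standard library
only; the host name is a placeholder, not anything from this thread):

#!/usr/bin/env python3
# Rough sketch: report how many days of validity remain on a host's
# TLS certificate.  Standard library only; the host is a placeholder.
import socket
import ssl
import time

def days_remaining(host, port=443):
    """Fetch the peer certificate and return days until notAfter."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()   # parsed dict; chain already validated
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400.0

if __name__ == "__main__":
    host = "www.example.org"
    print("%s: %.1f days of validity left" % (host, days_remaining(host)))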


>
> So, please describe how I am "claiming a factual basis" while you are
> not.
>
>> Automated security is BS.  It has always been BS and it always will
>> be BS.  That is my OPINION.  It may not be a fact for I lack
>> empirical evidence to support it.  However, it has long been my
>> observation that when people place excessive trust in automation
>> they are eventually and inevitably betrayed by it.  Often at
>> enormous cost.
>
> This is what I consider "enormous cost":
> https://en.wikipedia.org/wiki/Heartbleed#Certificate_renewal_and_revocation
>
> After a major security bug which exposed private keys, hundreds of
> thousands of servers did not take the required action to secure their
> services, and the vast majority of those that took *some* action did
> it incorrectly and did not resolve the problem.
>
> Had those sites been using letsencrypt and renewing automatically,
> the exposed keys would have been replaced within 90 days
> (typically 60 max, so 30 days on average).  Instead, it is likely
> that the problem will remain a risk for "months, if not years, to
> come."
>
> And that's empirical evidence, which you have yet to offer.

Again, you miss the point.  I am not offering evidence of something
that I am claiming as fact; I am seeking evidence in support of what
someone else is claiming as fact.  Evading the question may be a good
rhetorical technique but it is hardly science.  And your 'evidence'
as presented above presupposes a number of unspoken assumptions, some
of which I fear would not stand scrutiny.

The consequences of Heartbleed are well known to me.  I had to tear
down and re-establish our entire PKI because of it.  However,
anecdotal references to specific cases where a particular practice
just might have provided some protection (and let us remember that HB
was out in the wild, being exploited, for at least two years before
being publicly revealed) do not prove anything.  In the case cited I
do not believe changing certificates on an hourly basis would have
made much difference against a technically proficient attacker
exploiting the weakness.

It is well to recall how HB came to be: a misguided attempt at an
improvement in the protocol exchange which amounted to little more
than gilding the lily, and which has since been discarded without
noticeable loss.

What prevents similarly motivated 'improvements' to an automated
certificate authority from having equally damaging effects on the
robustness of the certificates that it provides?  We trusted OpenSSL
because it was open.  How do you trust an organisation's internal
practices once they have been automated?  Does anyone here actually
believe that, once in production, this automation is or will be
adequately documented?  Or that said documentation will be rigorously
kept up to date?  Or that independent and competent audits will be
regularly conducted without notice?

I take particular care with my PKI.  I rebuild all of the SSH moduli
on all of our servers myself, because I trust no-one with my
security.  And that includes Red Hat and any other external
organisation, volunteer or commercial, open-source or proprietary.
Automating your certificate replacement and entrusting it to an
outside provider is begging to have some state actor or other
financially powerful group subvert it.

Look at what the NSA pulled off with RSA, and that was just for
money.  What about patriotism?  What would a true believer do if
asked by his state, assuming he were in a position to act?  Think of
the security nightmare that would result from having the certificate
production of an automated 90-day certificate authority tampered with
and left undetected for even a modest amount of time, say 180 days.

At its core, security is about risk analysis, and at the core of risk
analysis is cost/benefit analysis.  If the cost of a security measure
exceeds the value of what is being secured then it makes no sense.  I
am seeking that cost/benefit analysis for short-term certificates.
And I have not seen anything in the way of objective evidence to
support the assertion that short-term certificates, or passwords for
that matter, provide any measurable security benefit beyond the
feel-good sense that "at least it's something".


For one thing, this position presupposes that the rest of your
security is so bad that frequent key compromise is inevitable.  For
another, it assumes that the cost to users of having to constantly
deal with change is negligible.  I run a business.  Let me assure you
that the cost of change is never negligible.
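
To show the sort of cost/benefit analysis I mean, here is a toy
annualised-loss comparison.  Every number in it is invented for
illustration; supplying defensible numbers is precisely the evidence
I am asking for:

# Toy cost/benefit comparison.  Every figure is invented.
# Annualised loss expectancy (ALE) =
#     yearly probability of the event * cost per event.

loss_per_compromise = 250000.0  # assumed cleanup and reputational cost

# Without the measure (longer-lived certificates, manual renewal):
p_without = 0.020               # assumed yearly chance of key compromise
ale_without = p_without * loss_per_compromise

# With the measure (90-day automated renewal):
p_with = 0.015                  # assumed reduction from shorter exposure
operational_cost = 8000.0       # assumed yearly staff time, tooling, outages
ale_with = p_with * loss_per_compromise + operational_cost

print("ALE without short-lived certs: $%10.0f / yr" % ale_without)
print("ALE with short-lived certs:    $%10.0f / yr" % ale_with)
# The measure only pays for itself if ale_with < ale_without;
# with these invented numbers it does not.

Change the assumed probabilities and costs and the conclusion flips,
which is exactly why the actual numbers matter.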

>
>> This impediment however is strictly an artefact of signing code with
>> short term certificates.  I simply had to reset the date on my MB
>> back to some former date when the certificate was valid and
>> everything worked fine.
>
> Apple's intermediate certs have a 10 year lifetime.  If you consider
> that "short term" then I fear that nothing is suitable in your
> opinion.

Apple evidently did not use those certificates to sign their software
in this case, so your point is irrelevant even if it may be true.

>
>> But hey, what is my time worth in comparison to the security those
>> certificates provided?  SECURITY that was trivially evaded in the
>> end.
>
> Fixing your clock is not "evading" security.

Setting a clock backwards in time by two years is 'fixing it'?  A
curious point of view in my opinion.

>
>> Exactly what mindless person or committee of bike-shedders decided
>> that software should be distributed so that copies of it expire?
>
> Expiration is a fundamental aspect of x509 certificates.  Do you
> understand x509 at all?

Expiry is a fundamental part of many things including life itself.
That does not imply that shorter is better.

Why sign software with an expiry date when you know that your
recovery programme will fail to operate after it expires?  Imagine
that you are on a hospital operating table and the defibrillators
fail to function because the certificate that signed their firmware
has expired.  And that the immediate fix is simply to reset the
computer clock in the defibrillator controller backwards in time.
Which, of course, no-one in the OR knows.  So, too bad.  But you were
secure when you expired.

There is absolutely no sensible reason why the recovery software
could not simply have warned me that the signature had expired and
then asked me if I wished to proceed regardless.  Having made the
design decision that it would not do so, it was incumbent on the
organisation responsible to allow people to override it.  Instead
they offer up some weasel-worded warning about "TAMPERING" and
"CORRUPTION".  Just what a person in the middle of a system recovery
is waiting to hear.
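
The behaviour I am describing is not complicated.  A sketch of the
decision I would have preferred, with illustrative names only;
nothing here is Apple's actual code:

# Illustrative only: distinguish "signature invalid" from "signature
# expired" and let the operator decide about the latter.
import time

def may_run(signature_ok, not_after, ask=input):
    """Return True if the signed recovery image may be allowed to run."""
    if not signature_ok:
        print("Signature check FAILED: the image may have been tampered with.")
        return False
    if not_after < time.time():
        answer = ask("Signature has EXPIRED (not invalid). Proceed anyway? [y/N] ")
        return answer.strip().lower() == "y"
    return True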

Most of what passes for informed opinion about security is folk remedy
dressed up with techno-babble and pushed onto an audience mostly too
embarrassed to admit that they do not understand what is being talked
about.  In my opinion of course.

However, decisions having real consequences are made on the basis of
such ignorant acceptance of received wisdom. Therefore I think it
important to challenge those that assert such things to produce
convincing evidence that what they say is so.

I understand that you believe that short term certificates and
passwords and such provide a measure of security.  That very well may
be the case.  I am not trying to convince you otherwise.  All I ask
for is a reasonable explanation of how much these practices cost those
that employ them, how much benefit they provide to those users and
what further risks they introduce.

Security is a funny sort of thing being mostly based on our fear of
the unknown; our too-active imaginations; and our attraction to the
spectacular at the cost of dealing adequately with the mundane.  At
one time an automobile had to be preceded by a pedestrian waving a red
flag. This too was done for the general security of all.  We do not do
it any more so there must have been a reason we stopped.  I suggest
that it was because the evidence did not support a positive
cost/benefit analysis.  But it sure sounded reasonable at the time to
people that had little or no experience with automobiles.

YMMV.

-- 
***          e-Mail is NOT a SECURE channel          ***
        Do NOT transmit sensitive data via e-Mail
 Do NOT open attachments nor follow links sent by e-Mail

James B. Byrne                mailto:ByrneJB@xxxxxxxxxxxxx
Harte & Lyne Limited          http://www.harte-lyne.ca
9 Brockley Drive              vox: +1 905 561 1241
Hamilton, Ontario             fax: +1 905 561 0757
Canada  L8E 3C3

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
https://lists.centos.org/mailman/listinfo/centos


