On 15/02/2019 12:23, Matt Caswell wrote:
On 15/02/2019 03:55, Jakob Bohm via openssl-users wrote:
These comments are on the version of the specification released on
Monday 2019-02-11 at https://www.openssl.org/docs/OpenSSL300Design.html
General notes on this release:
- The release was not announced on the openssl-users and
openssl-announce mailing lists. A related blog post was
announced two days later.
Well the blog post was intended to *be* the announcement.
- The related strategy document is at
https://www.openssl.org/docs/OpenSSLStrategicArchitecture.html
(This link is broken on the www.openssl.org front page).
Fixed - thanks.
- The draft does not link to anywhere that the public can
inspect archived or version tracked document versions.
These documents have only just reached the point where they were stable enough
to make public and go into version control. Any future updates will go through
the normal review process for the web repo and be version controlled. The raw
markdown versions are here:
https://github.com/openssl/web/blob/master/docs/OpenSSL300Design.md
https://github.com/openssl/web/blob/master/docs/OpenSSLStrategicArchitecture.md
Pull requests and issues can be made via github in the normal way:
https://github.com/openssl/web/pulls
https://github.com/openssl/web/issues
Other comments inserted below where I have an opinion or something to say. I'm
hoping others will chip in on your other points:
Non-FIPS architecture issues:
- The identifiers for predefined parameters and values (such as
"fips", "on", "off", "aes-128-cbc") should be binary values that
cannot be easily searched for in larger program files (by attackers).
This rules out text strings, UUID values and ASN.1 OID values alike.
Something similar to the function ids would be ideal. Note that
to make this effective, the string names of these should not
appear in linked binaries.
(The context of this is linking libcrypto and/or libssl into
closed source binary programs, since open source binaries cannot
hide their internal structure anyway).
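The point above can be sketched with a toy comparison: numeric identifiers, in the style of the core dispatch-table function ids, compile down to integer compares and leave no searchable string literals in a linked binary. All names and values here are illustrative, not actual OpenSSL 3.0.0 identifiers.

```c
#include <stdint.h>

/* Hypothetical numeric identifiers in the style of the core dispatch
 * function ids.  Unlike "fips" or "aes-128-cbc" string literals, these
 * leave nothing for an attacker to grep for in a closed-source binary. */
enum {
    OSSL_PARAM_ID_FIPS        = 0x0101,  /* illustrative value */
    OSSL_PARAM_ID_AES_128_CBC = 0x0201,  /* illustrative value */
    OSSL_VALUE_OFF            = 0,
    OSSL_VALUE_ON             = 1
};

/* Keyed on integers, this compiles to compare instructions only; no
 * text names of the parameters appear in the linked binary. */
static int is_fips_toggle(uint32_t param_id, uint32_t value)
{
    return param_id == OSSL_PARAM_ID_FIPS
        && (value == OSSL_VALUE_ON || value == OSSL_VALUE_OFF);
}
```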
- It should be possible for applications to configure OpenSSL to
load provider DLLs and config files from their own directories
rather than the global well-known directory (isolation from
system wide changes).
I believe this is the intention.
- It should be possible for providers (possibly not the FIPS
provider) to be linked directly into programs that link
statically to libcrypto. This implies the absence of
conflicting identifiers, a public API to pass the address of
an OSSL_provider_init function, all bundled providers provided
as static libraries in static library builds, and a higher
level init function that initializes both libcrypto and the
default provider.
The plan is that Providers may choose to be linked against libcrypto or not as
they see fit (the FIPS Provider will not be). They can be built entirely without
using any libcrypto symbols at all. They just need to have the well known entry
point. Any functions from the Core that the Provider may need to call are passed
as callback function pointers. I can't think of a reason why there should be an
issue with providers statically linking with libcrypto if they so wish.
This one is not about providers linked against libcrypto, it's
about applications linked against libcrypto3.a and provider-lib.a,
thus eliminating the DLL loading step.
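To make the request concrete, here is a toy sketch of the idea: the application hands the core the address of a statically linked provider's entry point, and "loading" that provider becomes a table lookup plus a call, with no shared-object loading step. The function and type names are hypothetical, not the final OpenSSL 3.0.0 API.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical entry-point type for a provider's well-known init function. */
typedef int (ossl_provider_init_fn)(void);

struct builtin_provider {
    const char *name;
    ossl_provider_init_fn *init;
};

#define MAX_BUILTIN 8
static struct builtin_provider builtin[MAX_BUILTIN];
static size_t n_builtin;

/* Register a statically linked provider by name and entry-point address. */
static int core_add_builtin(const char *name, ossl_provider_init_fn *init)
{
    if (n_builtin >= MAX_BUILTIN)
        return 0;
    builtin[n_builtin].name = name;
    builtin[n_builtin].init = init;
    n_builtin++;
    return 1;
}

/* "Loading" a built-in provider needs no DLL: look it up and call it.
 * A real core would fall back to dynamic loading on a miss. */
static int core_load_provider(const char *name)
{
    size_t i;
    for (i = 0; i < n_builtin; i++)
        if (strcmp(builtin[i].name, name) == 0)
            return builtin[i].init();
    return 0;
}

static int toy_provider_init(void) { return 1; }
```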
- Static library forms of the default provider should not
force callers to include every algorithm just because they
are referenced from the default dispatch tables. For example,
it should be easy to link a static application that uses only
AES-256-CBC and SHA-256, and contains little else. Such limited
feature applications would obviously have to forego using the
all-inclusive high level init function.
- For use with engine-like providers (such as hardware providers
and the PKCS#11 provider), it should be possible for a provider
to provide algorithms like RSA at multiple abstraction levels.
For example, some PKCS#11 hardware provides the raw RSA
algorithm (bignum in, bignum out) while others provide specific
forms such as PKCS#1.5 signature. There are even some that
provide the PKCS#1.5 form with some hashes and the RSA form
as a general fallback.
I think this should be possible with the design as it stands. Providers make
implementations of algorithms available to the core. I don't see any reason why
they can't provide multiple implementations of the same algorithm (presumably
distinguished by some properties).
The case here is that some providers (such as certain Gemalto USB
smartcards) offer hardware implementation of RSA over arbitrary
bignums, leaving the PKCS formatting to libraries such as OpenSSL.
Experience with upgrading to better hashes in the past tells me it
is more robust if the PKCS formatting code is not pushed into the
provider in those cases. I have other cards in my collection that
act the other way round (insisting on doing the PKCS formatting to
prevent chosen plaintext attacks).
- Similarly, some providers will provide both ends of an
asymmetric algorithm, while others only provide the private
key operation, leaving the public key operation to other
providers (selected by core in the general way).
Again I believe this should be possible with the current design. We split
algorithm implementations into different "operations". I don't think there is
any reason to require a provider to implement all operations that an algorithm
is capable of (in fact I think that was the design intent). It might be worth
making the ability to do this more explicit in the document.
- The general bignum library should be exposed via an API, either
the legacy OpenSSL bignum API or a replacement API with an overlap
of at least one major version with both APIs available.
There are no plans to remove access to bignum.
It was missing from the component diagrams, and the vague text about
deprecating "legacy APIs" was not reassuring.
- Provider algorithm implementations should carry
description/selection parameters indicating limits to access:
"key-readable=yes/no", "key-writable=yes/no", "data-internal=yes/no",
"data-external=yes/no" and "iv-internal=yes/no". For example,
a smartcard-like provider may have "key-readable=no" and
"key-writable=yes" for RSA keys, while another card may have
"key-writable=no" (meaning that externally generated keys cannot
be imported to the card). "data-internal" refers to the
ability to process (encrypt, hash etc.) data internal to the
provider, such as other keys, while "data-external" refers to
the ability to process arbitrary application data.
We expect Provider authors to be able to define their own properties as they see
fit. We plan to create a central repository (outside the main source code) of
"common" names. So I think all of the above should be possible.
The idea was to make these standard properties, as they seem to
occur in many real world providers, from FIPS to MS CAPI. They also
affect which implementations can be used at various points in the
protocols.
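A toy sketch of how such standard access-limit properties could drive implementation selection follows. The property names are the ones proposed in the text; the matching mechanics are purely illustrative and not the real property-query language.

```c
#include <stddef.h>
#include <string.h>

struct ossl_property {
    const char *name;
    const char *value;
};

/* Example declaration for a smartcard-like provider's RSA implementation:
 * keys can be generated/written onto the card but never read back. */
static const struct ossl_property smartcard_rsa[] = {
    { "key-readable",  "no"  },
    { "key-writable",  "yes" },
    { "data-external", "yes" }
};

/* Does the implementation's declared property list satisfy one required
 * name=value pair?  An undeclared property is treated as a non-match. */
static int prop_matches(const struct ossl_property *props, size_t n,
                        const char *name, const char *value)
{
    size_t i;
    for (i = 0; i < n; i++)
        if (strcmp(props[i].name, name) == 0)
            return strcmp(props[i].value, value) == 0;
    return 0;
}
```

A caller that needs to import an externally generated key would require "key-writable=yes"; one that needs to export the key for backup would require "key-readable=yes" and skip this implementation.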
- Variable key length algorithm implementations should carry
description/selection parameters indicating maximum and minimum
key lengths (Some will refuse to process short keys, others will
refuse long keys, some will require the key length to be a
multiple of some number).
There was a comment in the other reply. I think this simple list of
3 numeric properties (or perhaps a few more) would be enough to answer
the question "will this provider implementation handle this particular
key size?". No need for a mini language.
Examples: The FIPS provider 3.0.0 will explicitly enforce some minimum
key lengths for RSA and DH keys. A smart card in my collection requires
RSA keys to be a multiple of 64 bits (in addition to max and min lengths),
while another card from the same vendor has a different divisor.
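The three numeric properties amount to a one-line predicate. The sketch below uses example limits for a hypothetical card that accepts RSA keys of 1024 to 2048 bits in multiples of 64 bits; the values and names are illustrative only.

```c
/* Minimum, maximum and required multiple, all in bits.  A mult_bits of 0
 * means "no multiple constraint". */
struct key_size_limits {
    unsigned min_bits;
    unsigned max_bits;
    unsigned mult_bits;
};

/* Example: a smartcard requiring RSA keys to be a multiple of 64 bits. */
static const struct key_size_limits card_rsa = { 1024, 2048, 64 };

/* "Will this implementation handle this key size?" -- no mini language. */
static int key_size_ok(const struct key_size_limits *l, unsigned bits)
{
    return bits >= l->min_bits
        && bits <= l->max_bits
        && (l->mult_bits == 0 || bits % l->mult_bits == 0);
}
```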
- The current EVP interface abuses the general (re)init operations
with omitted arguments as the main interface to update rapidly
changing algorithm parameters such as IVs and/or keys. With the
removal of legacy APIs, the need to provide parameter changing
as explicit calls in the EVP API and provider has become more
obvious.
Agreed that we will need to review the EVP interface to ensure that everything
you can do in the low-level interface is still possible (within reason). Note
though that in 3.0.0 we are only deprecating the low-level APIs not removing
them. The Strategic Architecture document (which has a view beyond 3.0.0) sees
us moving them to a libcrypto-legacy library (so they would still be available).*
If you do use the low-level APIs in 3.0.0 then they won't go via the Core/Providers.
(* I just spotted an error in the strategy document. The packaging diagram
doesn't match up with the text and doesn't show libcrypto-legacy on it - although
the text does talk about it. I need to investigate that)
Point would be to provide EVP methods to replace some already deprecated
low-level APIs. Currently fragile logic to do less when only changing the
IV is buried deep in each symmetric algorithm provider. Making this an
explicit provider method and having the core dispatch that case accordingly
would improve code quality.
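A toy sketch of the suggested split: instead of the core overloading (re)init-with-NULL-arguments to mean "keep the key schedule, just reset the IV", the cheap path gets its own explicit method in the cipher dispatch. The structure and function names here are hypothetical, not the proposed 3.0.0 provider interface.

```c
#include <string.h>

struct cipher_ctx {
    unsigned char key[32];
    unsigned char iv[16];
    int key_scheduled;   /* expensive key expansion already done? */
};

/* Full init: runs the expensive key schedule and sets the IV. */
static int cipher_init(struct cipher_ctx *ctx,
                       const unsigned char *key, const unsigned char *iv)
{
    memcpy(ctx->key, key, sizeof(ctx->key));
    memcpy(ctx->iv, iv, sizeof(ctx->iv));
    ctx->key_scheduled = 1;
    return 1;
}

/* Explicit cheap path: change only the IV.  With this as its own
 * dispatch entry the "do less" logic lives in the core, not buried in
 * each algorithm's init handling of NULL arguments. */
static int cipher_set_iv(struct cipher_ctx *ctx, const unsigned char *iv)
{
    if (!ctx->key_scheduled)
        return 0;                   /* no key yet: caller error */
    memcpy(ctx->iv, iv, sizeof(ctx->iv));
    return 1;
}
```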
- A provider property valuable to some callers (and already a known
property of some legacy APIs) is to declare that certain simple
operations will always succeed, such as passing additional data
bytes to a hash/mac (the rare cases of hardware disconnect and/or
exceeding the algorithm maximums can be deferred to "finish"
operations). A name for this property of an algorithm
implementation could be "nofail=yes", and the list of non-failing
operations defined for each type of algorithm should be publicly
specified (a nofail hash would have a different list than a
nofail symmetric encryption).
That's an interesting idea. Again Provider can define their own properties as
they see fit. We can certainly give consideration to any other properties that
we would like to have a "common" definition.
I believe this is a (non-public) property of some of the default
implementations.
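A toy illustration of the "nofail=yes" contract: update() only accumulates state and cannot fail, so a caller may legitimately drop its per-call error checks, while any deferred condition (here, an illustrative maximum input size standing in for hardware disconnects or algorithm maximums) is reported by finish(). Everything here is a hypothetical sketch, not a real default-provider hash.

```c
#include <stddef.h>

#define TOY_MAX_INPUT 1024

struct toy_hash {
    size_t total;
    unsigned sum;      /* stand-in for real digest state */
};

/* nofail operation: by contract this can never fail, so it returns void
 * and the caller needs no error handling on the hot path. */
static void toy_hash_update(struct toy_hash *h,
                            const unsigned char *p, size_t n)
{
    h->total += n;
    while (n--)
        h->sum += *p++;
}

/* Failures (exceeded maximum, disconnect, ...) are deferred to finish. */
static int toy_hash_finish(const struct toy_hash *h, unsigned *digest)
{
    if (h->total > TOY_MAX_INPUT)
        return 0;
    *digest = h->sum;
    return 1;
}
```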
- Providers that are really bridges to another multi-provider API
(ENGINE, PKCS#11, MS CAPI 1, MS CNG) should be explicitly allowed
to load/init separately for each underlying provider. For example,
it would be bad for an application talking to one PKCS#11 module to
run, load or block all other PKCS#11 modules on the system.
The design allows for providers to make algorithm implementations
available/not-available over time. So I think this addresses what you are saying
here?
Loading a PKCS#11 module (or the equivalent for other APIs) has side
effects. Loading (or not) PKCS#11 modules (etc.) as needed should be
almost as easy as loading (or not) providers.
- Under normal file system layout conventions, /usr/share/ (and
below) is for architecture-independent files such as man pages,
trusted root certificates and platform-independent include files.
Architecture specific files such as "openssl/providers/foo.so"
and opensslconf.h belong in /usr/ or /usr/local/ .
I don't believe we've got as far as specifying the installation file system
layout - but this is useful input.
There were some unfortunate examples in the document.
FIPS-specific issues:
- The checksum of the FIPS DLL should be compiled into the FIPS-
capable OpenSSL library, since a checksum stored in its own file
on the end user system is too easily replaced by attackers. This
also implies that each FIPS DLL version will need its own file name
in case different applications are linked to different libcrypto
versions (because they were started before an upgrade of the shared
libcrypto or because they use their own copy of libcrypto).
This is not an attack that we are seeking to defend against in 3.0.0. We
consider the checksum to be an integrity check to protect against accidental
changes to the module.
While FIPS 140 level 1 might not, the higher FIPS levels seem very
keen on defending against these attacks, and the checksum at level 1
seems to be a degenerate remnant of those defenses.
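The difference between the two designs can be sketched in a few lines: the expected digest is baked into the FIPS-capable libcrypto at build time and checked against the module image before any of its code runs, rather than read from a sibling file an attacker could replace along with the module. A real implementation would use something like HMAC-SHA-256 over the module file; the 32-bit FNV-1a hash here is only a stand-in.

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in digest (FNV-1a); a real check would use an approved MAC. */
static uint32_t fnv1a(const unsigned char *p, size_t n)
{
    uint32_t h = 2166136261u;
    while (n--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

/* The check libcrypto would run before executing any code from the
 * module file; expected_digest is a constant compiled into libcrypto,
 * not loaded from a checksum file next to the module. */
static int module_integrity_ok(const unsigned char *module, size_t len,
                               uint32_t expected_digest)
{
    return fnv1a(module, len) == expected_digest;
}
```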
- If possible, the core or a libcrypto-provided FIPS-wrapper should
check the hash of the opensslfips-3.x.x.so DLL before running any
of its code (including on-load stubs), secondly, the DLL can
recheck itself using its internal implementation of the chosen MAC
algorithm, if this is required by the CMVP. This is to protect the
application if a totally unrelated malicious file is dropped in
place of the DLL.
As above - this is not an attack we are seeking to defend against.
It is, however, a new attack made possible by moving the FIPS canister
into its own file.
- The document seems to consistently mention only the
shortest/weakest key lengths, such as AES-128. Hopefully the
actual release will have no such limitation.
No - there is no such restriction. The full list of what we are planning to
support is in Appendix 3. Although I note that we explicitly mention key lengths
for some algorithms/modes but not others. We should probably update that to be
consistent.
Bad choices of examples then. I saw lots of mention of weak strength
stuff, such as 96 bits of entropy, AES-128 etc.
- The well-known slowness of FIPS validations will in practice
require the FIPS module compiled from a source change to be
released (much) later than the same change in the default
provider. The draft method of submitting FIPS validation
updates just before any FIPS-affecting OpenSSL release seems
overly optimistic.
- Similarly, due to the slowness of FIPS validation updates,
it may often be prudent to provide a root-cause fix in the
default provider and a less-effective change in the FIPS
provider, possibly involving FIPS-frozen workaround code in
libcrypto, either in core or in a separate FIPS-wrapper
component.
- The mechanisms for dealing with cannot-export-the-private-key
hardware providers could also be used to let the FIPS provider
offer algorithm variants where the crypto officer (application
writer/installer) specifies that some keys remain inside the
FIPS blob, inaccessible to the user role (application code).
For example, TLS PFS (EC)DHE keys and CMS per message keys
could by default remain inside the provider. Extending this
to TLS session keys and server private key would be a future
option.
- In future versions, it should be possible to combine the
bundled FIPS provider with providers for FIPS-validated hardware,
such as FIPS validated PIV smart cards for TLS client
certificates.
The OpenSSL FIPS provider will provide algorithm implementations matching
"fips=yes". I see no reason why other providers can't do the same - so the above
should be possible.
Some wording in the document suggested this might be erroneously
blocked.
- Support for generating and validating (EC)DH and (EC)DSA
group parameters using the FIPS-specified algorithms should
be available in addition to the fixed sets of well-known
group parameters. In SP 800-56A rev. 3, these are the
DH primes specified using a SEED value. Other versions of
SP 800-56A, and/or supplemental NIST documents may allow
other such group parameters.
- If permitted by the CMVP rules, allow an option for
application provided (additional) entropy input to the RNG
from outside the module boundary.
Thanks for the input and all of the suggestions.
Enjoy
Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded