Re: Cert hot-reloading

1. Create symlinks to the current cert files in a versioned folder (old or new, file by file)
2. Create a new symlink pointing at that folder
3. Atomically rename that new symlink over the current one (sketched below).
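For step 3, something along these lines (the "certs/..data" layout and names are just an example of the Kubernetes-like arrangement, not anything OpenSSL prescribes):

    /* Sketch of the symlink swap described above.  The application reads
     * "certs/cert.pem" and "certs/key.pem", which resolve through the
     * "certs/..data" symlink.  All names here are illustrative. */
    #include <stdio.h>
    #include <unistd.h>

    int swap_cert_dir(const char *newdir)   /* e.g. a new versioned folder */
    {
        /* 1. Create a temporary symlink pointing at the new folder. */
        if (symlink(newdir, "certs/..data_tmp") != 0) {
            perror("symlink");
            return -1;
        }
        /* 2. Atomically replace the "current" symlink; rename(2) is atomic,
         *    so readers see either the old or the new folder, never a mix. */
        if (rename("certs/..data_tmp", "certs/..data") != 0) {
            perror("rename");
            unlink("certs/..data_tmp");
            return -1;
        }
        return 0;
    }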

On the OpenSSL side, the stat() calls would have to follow symlinks, if they don't already do so.

This is more or less how Kubernetes atomically provisions config maps and secrets to pods.

So there is precedent for applications following this pattern.

I totally agree that those constraints should be put on applications, in order to have the freedom to focus on a sound design.

If OpenSSL really wanted to make this easy, it would provide an independent helper that performs exactly this operation on behalf of non-compliant applications.

Does it look like we are actually getting somewhere here?

I'd still like to better understand why atomic pointer swaps can be difficult and how that can be mitigated. I sense that a bold move toward sounder certificate consumption is possible there too (with potential upsides further down). Do I sense right?
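For concreteness, a minimal sketch of the kind of swap I mean; only SSL_CTX_up_ref()/SSL_CTX_free() are actual OpenSSL API, the rest (names, locking scheme) is illustrative. The short critical section is one mitigation: a purely lock-free swap is awkward because a reader can load the old pointer and be preempted before it takes its reference.

    /* Sketch of handing out a refreshed SSL_CTX to new connections. */
    #include <pthread.h>
    #include <openssl/ssl.h>

    static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;
    static SSL_CTX *current_ctx;           /* owns one reference */

    /* Reloader: install a context built from the new cert/key files. */
    void install_ctx(SSL_CTX *fresh)
    {
        pthread_mutex_lock(&ctx_lock);
        SSL_CTX *old = current_ctx;
        current_ctx = fresh;               /* transfer ownership of one ref */
        pthread_mutex_unlock(&ctx_lock);
        if (old != NULL)
            SSL_CTX_free(old);             /* freed once no connection uses it */
    }

    /* Accept path: take a referenced handle on whatever is current. */
    SSL_CTX *acquire_ctx(void)
    {
        pthread_mutex_lock(&ctx_lock);
        SSL_CTX *ctx = current_ctx;
        SSL_CTX_up_ref(ctx);               /* caller must SSL_CTX_free() later */
        pthread_mutex_unlock(&ctx_lock);
        return ctx;
    }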


On Monday, 31 August 2020, Viktor Dukhovni <openssl-users@xxxxxxxxxxxx> wrote:
> On Aug 31, 2020, at 10:57 PM, Jakob Bohm via openssl-users <openssl-users@xxxxxxxxxxx> wrote:
>
> Given the practical impossibility of managing atomic changes to a single
> POSIX file of variable-length data, it will often be more practical to
> create a complete replacement file, then replace the filename with the
> "mv -f" command or rename(3) function.  This would obviously only work
> if the directory remains accessible to the application, after it drops
> privileges and/or enters a chroot jail, as will already be the case
> for hashed certificate/crl directories.

There is no such "impossibility", indeed that's what the rename(2) system
call is for.  It atomically replaces files.  Note that mv(1) can hide
non-atomic copies across file-system boundaries and should be used with
care.
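A sketch of the write-then-rename idiom an external agent might use (file names are examples; the essential point is that the temporary file is created on the same file system as the target, so rename(2) replaces it atomically and a partially written file is never visible):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int replace_pem(const char *tmppath, const char *target,
                    const void *data, size_t len)
    {
        int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0)
            return -1;
        /* Write and flush the complete new contents before it becomes visible. */
        if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            unlink(tmppath);
            return -1;
        }
        close(fd);
        return rename(tmppath, target);   /* atomic replacement of target */
    }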

And this is why I mentioned retaining an open directory handle, openat(2),
...
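Roughly (a sketch only, with made-up paths): the directory is opened before privileges are dropped or the chroot is entered, and the file is later re-opened relative to that retained handle.

    #include <fcntl.h>

    static int certdir_fd = -1;

    int init_certdir(void)                 /* call before chroot()/setuid() */
    {
        certdir_fd = open("/etc/myapp/certs", O_RDONLY | O_DIRECTORY);
        return certdir_fd < 0 ? -1 : 0;
    }

    int open_current_chain(void)           /* call at reload time */
    {
        /* Resolves "combined.pem" relative to the saved handle, so it works
         * even if the original path is no longer reachable from the new root. */
        return openat(certdir_fd, "combined.pem", O_RDONLY);
    }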

There's room here to design a robust process, if one is willing to impose
reasonable constraints on the external agents that orchestrate new cert
chains.

As for updating two files in a particular order, and reacting only to
changes in the one that's updated second, this behaves poorly when
updates are racing an application cold start.  The single file approach,
by being more restrictive, is in fact more robust in ways that are not
easy to emulate with multiple files.
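For illustration, one way to consume such a combined file (a sketch, assuming the conventional key-then-leaf-then-chain PEM layout; because the file is opened once, the single open pins one consistent version even if it is rename(2)-replaced mid-read):

    #include <openssl/ssl.h>
    #include <openssl/pem.h>
    #include <openssl/err.h>

    int load_combined(SSL_CTX *ctx, const char *path)
    {
        BIO *in = BIO_new_file(path, "r");
        EVP_PKEY *key = NULL;
        X509 *cert = NULL, *extra = NULL;
        int ok = 0;

        if (in == NULL)
            return -1;
        /* Assumed layout: private key first, then leaf cert, then the chain. */
        key  = PEM_read_bio_PrivateKey(in, NULL, NULL, NULL);
        cert = PEM_read_bio_X509(in, NULL, NULL, NULL);
        if (key != NULL && cert != NULL
            && SSL_CTX_use_certificate(ctx, cert) == 1
            && SSL_CTX_use_PrivateKey(ctx, key) == 1
            && SSL_CTX_check_private_key(ctx) == 1) {
            ok = 1;
            SSL_CTX_clear_chain_certs(ctx);
            while ((extra = PEM_read_bio_X509(in, NULL, NULL, NULL)) != NULL)
                SSL_CTX_add0_chain_cert(ctx, extra);  /* ctx takes ownership */
            ERR_clear_error();  /* discard the expected end-of-file PEM error */
        }
        X509_free(cert);
        EVP_PKEY_free(key);
        BIO_free(in);
        return ok ? 0 : -1;
    }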

If someone implements a robust design with multiple files, great.  I for
one don't know of an in principle decent way to do that without various
races, other than somewhat kludgey retry loops in the application (or
library) when it finds a mismatch between the cert and the key.
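The sort of retry loop I mean would look roughly like this (file names assumed; with separate cert and key files an update can land between the two reads, so the pairing is re-checked and the load retried a bounded number of times):

    #include <unistd.h>
    #include <openssl/ssl.h>

    int load_pair_with_retry(SSL_CTX *ctx, const char *certfile,
                             const char *keyfile, int attempts)
    {
        while (attempts-- > 0) {
            if (SSL_CTX_use_certificate_chain_file(ctx, certfile) == 1
                && SSL_CTX_use_PrivateKey_file(ctx, keyfile, SSL_FILETYPE_PEM) == 1
                && SSL_CTX_check_private_key(ctx) == 1)
                return 0;          /* cert and key agree */
            usleep(100000);        /* brief pause, then re-read both files */
        }
        return -1;                 /* persistent mismatch or read failure */
    }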

--
        Viktor.

