On 10/07/18 04:40, Gregory Farnum wrote:
On Sun, Jul 8, 2018 at 6:06 PM Graeme Gillies <ggillies@xxxxxxxxxx> wrote:
Hi,

I was wondering how (if?) people handle rotating cephx keys while keeping the cluster up and available.

Part of meeting compliance standards such as PCI DSS is making sure that data encryption keys and security credentials are rotated regularly, and at other key points (such as notable staff turnover).

We are currently looking at using Ceph as a storage solution, and I was wondering how people rotate cephx keys (at the very least, the admin and client.$user keys) while causing minimal or no downtime to Ceph or its clients.

My understanding is that if you change the keys stored in the Ceph monitor database, any existing sessions will continue to work, but new ones (say, a hypervisor establishing new connections to OSDs for a new VM volume) will fail until the key on the client side is also updated.

I attempted to set two keys against the same client to see if I could have an "overlap" period of new and old keys before rotating out the old key, but it seems that Ceph only has the concept of one key per user.

Any hints, advice, or other information on how to achieve this would be much appreciated.
This isn't something I've seen come up much. Your understanding sounds correct to me, so as a naive developer I'd assume you just change the key on the monitors and distribute the new one to whoever should have it. There's a small window in which the admin with the old key can't do anything, but presumably you can coordinate around that?

The big issue I'm aware of is that orchestration systems like OpenStack don't always do a good job supporting those changes; e.g., I think it embeds some keys in its database descriptor for the RBD volume? :/
-Greg
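For what it's worth, the monitor-side half of what Greg describes can be sketched with stock tooling. The entity name and caps below are placeholders (copy the real caps from `ceph auth get client.user`), and this assumes `ceph auth import` updates an existing entry in place:

```shell
# Generate a fresh secret without touching the cluster yet.
NEW_KEY=$(ceph-authtool --gen-print-key)

# Build a keyring carrying the new key plus the entity's existing caps.
# (Entity and caps here are placeholders for your actual client.)
cat > /tmp/client.user.keyring <<EOF
[client.user]
    key = $NEW_KEY
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=volumes"
EOF

# Swap the key on the monitors; established sessions keep working,
# but any new connection needs the new key from this point on.
ceph auth import -i /tmp/client.user.keyring
```

The downtime window Graeme describes is exactly the gap between the `ceph auth import` above and the new key landing on every client.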
I think the biggest problem with simply changing keys is this: say I have a client connecting to Ceph using a ceph.client.user account. If I want to rotate its key, I can do that on the Ceph cluster side, but then I also need to do it on the client side (in my case, virtual machine hypervisors). During this window (which might be tiny with decent tooling, but is still non-zero) my clients can't establish new connections to the Ceph cluster, which I assume will cause issues.
I do wonder if an RFE to allow ceph auth to accept multiple keys per client would be accepted? That way I could add my new key in ceph auth (so clients can authenticate with either key), then roll it out across my hypervisors, then remove the old key from ceph auth when done.
As for OpenStack, when I used it I was fairly sure it simply used the ceph.conf of the nova-compute hosts to connect to Ceph (at least for libvirt), though that doesn't mean other hypervisors or implementations don't do something else.
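For the libvirt case at least, the client-side half of the rotation doesn't require restarting guests: the cephx secret handed to libvirt can be updated in place. The secret UUID below is a placeholder for whatever your deployment registered with `virsh secret-define`:

```shell
# Fetch the (already rotated) key from the cluster and push it into
# libvirt's secret store on the hypervisor; running guests keep their
# established sessions, while new attaches pick up the new key.
NEW_KEY=$(ceph auth get-key client.user)
SECRET_UUID=457eb676-33da-42ec-9194-0f29a4e88c31   # placeholder: your secret's UUID
virsh secret-set-value --secret "$SECRET_UUID" --base64 "$NEW_KEY"
```

Run across all hypervisors quickly (e.g., via your config management), this shrinks the window in which new connections fail, though it doesn't eliminate it the way an overlap period of two valid keys would.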
Regards,
Graeme
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com