Re: migrate ceph-disk to ceph-volume fails with dmcrypt

On Wed, Jan 23, 2019 at 4:01 AM Manuel Lausch <manuel.lausch@xxxxxxxx> wrote:
>
> Hi,
>
> that's bad news.
>
> Around 5000 OSDs are affected by this issue. It's not really a
> solution to redeploy these OSDs.
>
> Is it possible to migrate the local keys to the monitors?
> I see that the OSDs with the "lockbox feature" have only one key for
> the data and journal partitions, while the older OSDs have individual
> keys for journal and data. Might this be a problem?

I don't know what that would look like, but I think it is worth a try
if re-deploying OSDs is not feasible for you.

The key API for encryption is *very* odd and a lot of its quirks are
undocumented. For example, ceph-volume is stuck naming files and keys
'lockbox' (for backwards compatibility), even though there is no real
lockbox anymore.
Another quirk is that when storing the secret in the monitor, it is
done using the following convention:

    dm-crypt/osd/{OSD FSID}/luks

The 'luks' part there doesn't indicate the type of encryption at all
(!!), so regardless of the encryption type (luks or plain) the key
still goes there.
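
If it helps, here is a minimal, untested sketch of pushing an
existing local key into the monitors under that convention. I'm
assuming 'ceph config-key set' can read the secret from a file with
-i (older releases spell it 'config-key put'), and the FSID and key
paths below are placeholders:

    # Untested sketch: store a local dmcrypt secret in the monitor
    # config-key store under the convention described above.
    import subprocess

    def store_dmcrypt_secret(osd_fsid, keyfile):
        # The '/luks' suffix is used no matter the actual
        # encryption type (luks or plain).
        name = 'dm-crypt/osd/%s/luks' % osd_fsid
        # Older releases use 'config-key put' instead of 'set'.
        subprocess.check_call(
            ['ceph', 'config-key', 'set', name, '-i', keyfile]
        )

    # Placeholder values, for illustration only:
    store_dmcrypt_secret('<osd-fsid>',
                         '/etc/ceph/dmcrypt-keys/<osd-fsid>')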

If you manage to get the keys into the monitors you still wouldn't be
able to scan OSDs to produce the JSON files, but you would be able to
create the JSON file with the
metadata that ceph-volume needs to run the OSD.

The contents are documented here:
http://docs.ceph.com/docs/master/ceph-volume/simple/scan/#json-contents
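
As a sketch, generating that file by hand could look something like
the following. The field names here only illustrate the general
shape, so double-check them against the doc above before relying on
them:

    # Rough sketch: write /etc/ceph/osd/{ID}-{FSID}.json by hand.
    # The exact keys must be checked against the documentation
    # linked above; the values below are placeholders, not a schema.
    import json

    osd_id = 0                      # placeholder
    osd_fsid = '<osd-fsid>'         # placeholder
    metadata = {
        'cluster_name': 'ceph',
        'fsid': osd_fsid,
        'type': 'filestore',        # or 'bluestore'
        'encrypted': 1,
        'encryption_type': 'luks',  # or 'plain'
        'data': {
            'path': '/dev/sdb1',    # placeholder data partition
            'uuid': '<partition-uuid>',
        },
    }

    path = '/etc/ceph/osd/%s-%s.json' % (osd_id, osd_fsid)
    with open(path, 'w') as fp:
        json.dump(metadata, fp, indent=4)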

>
> And another question:
> Is it a good idea to mix ceph-disk and ceph-volume managed OSDs on
> one host?

I don't think it is a problem, but we don't test it so I can't say
with certainty.

> So I could only migrate newer OSDs to ceph-volume and deploy new
> ones (after disk replacements) with ceph-volume until hopefully there
> is a solution.

I would strongly suggest implementing some automation to get all those
OSDs to 100% ceph-volume. The ceph-volume tooling for handling
ceph-disk OSDs is robust and works well, but it shouldn't be a
long-term solution for OSDs that were deployed with ceph-disk.
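
For the OSDs that aren't hit by the key problem, that automation can
be as small as looping over the OSD directories with 'ceph-volume
simple' -- a sketch, assuming the default layout and cluster name:

    # Sketch: capture every ceph-disk OSD on this host with
    # 'ceph-volume simple scan', then re-enable them from the
    # generated JSON files. Assumes the default /var/lib/ceph/osd
    # layout and the 'ceph' cluster name.
    import glob
    import subprocess

    for osd_path in sorted(glob.glob('/var/lib/ceph/osd/ceph-*')):
        subprocess.check_call(
            ['ceph-volume', 'simple', 'scan', osd_path]
        )

    # Activates everything found in /etc/ceph/osd/*.json
    subprocess.check_call(
        ['ceph-volume', 'simple', 'activate', '--all']
    )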

>
> Regards
> Manuel
>
>
> On Tue, 22 Jan 2019 07:44:02 -0500
> Alfredo Deza <adeza@xxxxxxxxxx> wrote:
>
>
> > This is one case we didn't anticipate :/ We supported the wonky
> > lockbox setup and thought we wouldn't need to go further back,
> > although we did add support for both plain and luks keys.
> >
> > Looking through the code, it is very tightly coupled to
> > storing/retrieving keys from the monitors, and I don't know what
> > workarounds might be possible here other than throwing away the OSD
> > and deploying a new one (I take it this is not an option for you at
> > all)
> >
> >
> Manuel Lausch
>
> Systemadministrator
> Storage Services
>
> 1&1 Mail & Media Development & Technology GmbH | Brauerstraße 48 |
> 76135 Karlsruhe | Germany
> Phone: +49 721 91374-1847
> E-Mail: manuel.lausch@xxxxxxxx | Web: www.1und1.de
>
> Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 5452
>
> Geschäftsführer: Thomas Ludwig, Jan Oetjen, Sascha Vollmer
>
>
> Member of United Internet
>
> This e-mail may contain confidential and/or privileged information. If
> you are not the intended recipient of this e-mail, you are hereby
> notified that saving, distribution or use of the content of this e-mail
> in any way is prohibited. If you have received this e-mail in error,
> please notify the sender and delete the e-mail.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



