On Wed, Jan 23, 2019 at 8:25 AM Jan Fajerski <jfajerski@xxxxxxxx> wrote:
>
> On Wed, Jan 23, 2019 at 10:01:05AM +0100, Manuel Lausch wrote:
> >Hi,
> >
> >that's bad news.
> >
> >Around 5000 OSDs are affected by this issue. Redeploying these OSDs is
> >not really an option.
> >
> >Is it possible to migrate the local keys to the monitors?
> >I see that the OSDs created with the "lockbox feature" have only one key
> >for the data and journal partitions, while the older OSDs have individual
> >keys for journal and data. Might this be a problem?
> >
> >And another question:
> >Is it a good idea to mix ceph-disk and ceph-volume managed OSDs on one
> >host? That way I could migrate only the newer OSDs to ceph-volume and
> >deploy new ones (after disk replacements) with ceph-volume until,
> >hopefully, there is a solution.
> I might be wrong on this, since it's been a while since I played with that.
> But iirc you can't migrate a subset of ceph-disk OSDs to ceph-volume on one
> host. Once you run "ceph-volume simple activate", the ceph-disk systemd
> units and udev profiles are disabled. The remaining ceph-disk OSDs will
> continue to run, but they won't come up after a reboot.

This is correct: once you "activate" ceph-disk OSDs via ceph-volume you
disable all udev/systemd triggers for those OSDs, so you must migrate all
of them on that host.

I was assuming the question was more about keeping the existing ceph-disk
OSDs and creating new OSDs with ceph-volume. You can do that, as long as
this is not Nautilus or newer, where ceph-disk no longer exists.
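For what it's worth, here is a rough sketch of the whole-host migration
flow Jan describes, written as a small Python wrapper around the real
"ceph-volume simple" subcommands (you could just as well run the commands
by hand). The /var/lib/ceph/osd/ceph-* glob and the use of "--all" assume
a default data-directory layout, and this only helps for OSDs whose
dmcrypt keys ceph-volume can actually retrieve -- it will not rescue the
older OSDs with locally stored keys that this thread is about:

  #!/usr/bin/env python3
  # Sketch only: hand every ceph-disk OSD on this host over to
  # "ceph-volume simple". Assumes the default data directory layout
  # (/var/lib/ceph/osd/ceph-<id>).
  import glob
  import subprocess

  def migrate_host():
      # 1. Scan each ceph-disk OSD; this writes a JSON description under
      #    /etc/ceph/osd/ that "simple activate" consumes later.
      for osd_dir in sorted(glob.glob('/var/lib/ceph/osd/ceph-*')):
          subprocess.run(['ceph-volume', 'simple', 'scan', osd_dir],
                         check=True)

      # 2. Enable the new systemd units for everything that was scanned.
      #    This is the step that disables the ceph-disk udev/systemd
      #    triggers, which is why it is all-or-nothing per host.
      subprocess.run(['ceph-volume', 'simple', 'activate', '--all'],
                     check=True)

  if __name__ == '__main__':
      migrate_host()

Replacement disks on such a host would then be created with something
like "ceph-volume lvm create --data /dev/sdX --dmcrypt", which stores the
encryption key with the monitors rather than locally.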
> I'm sure there's a way to get them running again, but I imagine you'd
> rather not deal with that manually.
>
> >Regards
> >Manuel
> >
> >
> >On Tue, 22 Jan 2019 07:44:02 -0500
> >Alfredo Deza <adeza@xxxxxxxxxx> wrote:
> >
> >> This is one case we didn't anticipate :/ We supported the wonky
> >> lockbox setup and thought we wouldn't need to go further back,
> >> although we did add support for both plain and luks keys.
> >>
> >> Looking through the code, it is very tightly coupled to
> >> storing/retrieving keys from the monitors, and I don't know what
> >> workarounds might be possible here other than throwing away the OSD
> >> and deploying a new one (I take it this is not an option for you at
> >> all).
> >
> >Manuel Lausch
> >Systemadministrator, Storage Services
> >1&1 Mail & Media Development & Technology GmbH
>
> --
> Jan Fajerski
> Engineer Enterprise Storage
> SUSE Linux GmbH
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com