I am trying to deploy Ceph 0.94.5 (Hammer) across a few nodes using
ceph-deploy with the --dmcrypt flag. The first OSD:journal pair
succeeds, but every remaining OSD whose journal lives on the same
SSD seems to silently fail:
http://pastebin.com/2TGG4tq4
I end up with 5 OSDs running per host (1 OSD per SSD journal).
The partition on the SSD for the 2nd journal does get created, but
the LUKS device is never created, and as a result the
/var/lib/ceph/osd/ceph-1 OSD is never created either. My current
workaround is to manually encrypt the drives with LUKS and then
point ceph-disk at the encrypted devices instead of the raw block
devices, but RHCS says this is not supported. Is there an accepted
workaround for this?
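For reference, the manual workaround looks roughly like this (device
paths and mapper names below are just examples, not the exact ones on
my hosts):

```shell
# Hand-encrypt the data disk and the journal partition with LUKS
# (illustrative device names).
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb crypt-osd-sdb

cryptsetup luksFormat /dev/sdaa2
cryptsetup luksOpen /dev/sdaa2 crypt-journal-sdaa2

# Point ceph-disk at the opened dm-crypt mappings instead of the
# raw block devices.
ceph-disk prepare /dev/mapper/crypt-osd-sdb /dev/mapper/crypt-journal-sdaa2
```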
I have also tried specifying the partition (/dev/sdaa2) instead of the
raw block device (/dev/sdaa), but then ceph-deploy bombs out saying
the partition already exists. Is there a ceph.conf option I need to
add to be able to specify partitions as journals? Has anyone used
ceph-deploy in the past few weeks to deploy Hammer onto Ubuntu 14.04
with spinning disks as OSDs and SSDs as journals?
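In case it helps, the invocations I tried look roughly like this
(host and device names are examples):

```shell
# Raw SSD as the shared journal device: the first OSD:journal pair
# works, subsequent OSDs sharing the same SSD silently fail.
ceph-deploy osd create --dmcrypt node1:/dev/sdb:/dev/sdaa
ceph-deploy osd create --dmcrypt node1:/dev/sdc:/dev/sdaa

# Pre-created partition as the journal: ceph-deploy bombs out,
# complaining that the partition already exists.
ceph-deploy osd create --dmcrypt node1:/dev/sdd:/dev/sdaa2
```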
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com