Re: Ceph-deploy won't write journal if partition exists and using --dmcrypt

Here is the output I am getting when I run ceph-deploy:

lacadmin@kh10-9:~/GDC$ ceph-deploy osd --dmcrypt create kh10-7:sde:/dev/sdab1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/lacadmin/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/local/bin/ceph-deploy osd --dmcrypt create kh10-7:sde:/dev/sdab1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks kh10-7:/dev/sde:/dev/sdab1
[kh10-7][DEBUG ] connection detected need for sudo
[kh10-7][DEBUG ] connected to host: kh10-7
[kh10-7][DEBUG ] detect platform information from remote host
[kh10-7][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to kh10-7
[kh10-7][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[kh10-7][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host kh10-7 disk /dev/sde journal /dev/sdab1 activate True
[kh10-7][INFO  ] Running command: sudo ceph-disk -v prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sde /dev/sdab1
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[kh10-7][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[kh10-7][WARNIN] ceph-disk: Error: /dev/sdab1 partition already exists and --dmcrypt specified
[kh10-7][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sde /dev/sdab1
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

-----------------------------------------------------

On the host, however, we see that ceph-osd 630 was prepared and a mapped device was created:
sde
└─sde1 crypto_LUKS
  └─c0008f35-02b1-4ab9-b15b-27c7bc7e3f49 (dm-29) xfs /var/lib/ceph/osd/ceph-630
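
For reference, the mapping and its backing partition can be inspected with stock cryptsetup; nothing here is Ceph-specific:

sudo cryptsetup status c0008f35-02b1-4ab9-b15b-27c7bc7e3f49   # device, cipher, key size of the active mapping
sudo cryptsetup luksDump /dev/sde1                            # LUKS header: key slots and cipher of the partition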

I tried setting reuse_journal to true, but that did not fix it.



On 07/16/2015 03:16 PM, Sean Sullivan wrote:
Some context: I have a small cluster running Ubuntu 14.04 and Giant (now Hammer). I ran some updates and everything was fine. I rebooted a node, and a drive must have failed, as it no longer shows up.

I use --dmcrypt with ceph-deploy and 5 OSDs per SSD journal. To do this I created the SSD partitions ahead of time and pointed ceph-deploy at the partition for the journal.
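
To pre-create them I used sgdisk along the lines of the sketch below. The partition number and size are just examples, and the type GUID is the one I believe ceph-disk uses for plain (non-dmcrypt) journals, so treat it as an assumption; if Hammer expects a different type GUID for dmcrypt journals, that could be exactly what it is tripping on:

# example only: partition number and size are placeholders; 45b0969e-... is
# (as I understand it) ceph-disk's non-dmcrypt journal partition type GUID
sudo sgdisk --new=1:0:+10G --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdab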

This worked in Giant without issue (I was able to zap the OSD and redeploy using the same journal every time). Now it fails in Hammer, stating that the partition exists and I'm using --dmcrypt.

This raises a few questions.

1.) The Ceph OSD start scripts must have a list of dm-crypt keys and uuids somewhere, since the init mounts the drives. Is this accessible? Outside of Ceph I've normally used crypttab; how is Ceph doing it?
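
My working theory (unverified, so the key filename scheme and the udev hook are assumptions on my part) is that no crypttab is involved: a udev rule matches the Ceph partition type GUIDs and opens each partition with the key stored under /etc/ceph/dmcrypt-keys, named after the partition's unique GUID. Roughly this, done by hand:

# hypothetical manual equivalent of what the udev hook seems to do at boot
UUID=$(sudo sgdisk --info=1 /dev/sde | awk '/unique GUID/ {print tolower($4)}')
sudo cryptsetup --key-file /etc/ceph/dmcrypt-keys/${UUID}* luksOpen /dev/sde1 "$UUID"   # key file may carry a suffix
sudo ceph-disk activate /dev/mapper/"$UUID"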

2.) My ceph-deploy line is:
ceph-deploy osd --dmcrypt create ${host}:/dev/drive:/dev/journal_partition

I see that a variable for this exists in ceph-disk and is set to false. Is this what I would need to change to get this working again? Or is it set to false for a reason?
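
Failing that, the workaround I am about to try, implied by the error text, is to delete the stale journal partition so ceph-disk can create and type-code it itself. The partition number here is hypothetical; check it first:

sudo sgdisk --print /dev/sdab      # confirm which slot holds the old journal
sudo sgdisk --delete=1 /dev/sdab   # 1 is an example partition number
sudo partprobe /dev/sdab           # re-read the partition table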

3.) I see multiple references to journal_uuid in Sébastien Han's blog as well as on the mailing list when replacing a disk. I don't have this file, and I assume that's due to the --dmcrypt flag. I also see 60 dmcrypt keys in /etc/ceph/dmcrypt-keys but only 30 mapped devices. Are the journals not using these keys at all?
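
For what they are worth, these are the sanity checks I ran (the assumption that key filenames embed partition GUIDs is mine):

ls /etc/ceph/dmcrypt-keys | wc -l                          # 60 key files
sudo dmsetup ls --target crypt | wc -l                     # 30 active crypt mappings
sudo sgdisk --info=1 /dev/sdab | grep -i 'unique guid'     # journal partition GUID
ls /etc/ceph/dmcrypt-keys                                  # does any key filename match it?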





_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



