Re: Mounting with dmcrypt still fails

Hi again,

I used another host for the OSD (with the same name), this time running Debian 7.4.

ceph-deploy osd prepare ceph-node0:sdb --dmcrypt

[ceph_deploy.cli][INFO  ] Invoked (1.3.5): /usr/bin/ceph-deploy osd prepare ceph-node0:sdb --dmcrypt
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: debian 7.4 wheezy
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][WARNIN] osd keyring does not exist yet, creating one
[ceph-node0][DEBUG ] create a keyring file
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][WARNIN] ceph-disk: Error: partition 1 for /dev/sdb does not appear to exist
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] Warning: The kernel is still using the old partition table.
[ceph-node0][DEBUG ] The new table will be used at the next reboot.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


The subcommand that fails on the OSD host is ceph-disk-prepare.

I ran the command manually on ceph-node0, and got:

ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
ceph-disk: Error: Device /dev/sdb2 is in use by a device-mapper mapping (dm-crypt?): dm-0

I can reproduce this error every single time.
I tested on both Debian 7.4 and Ubuntu 13.04.
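For anyone hitting the same wall: the "Device /dev/sdb2 is in use by a device-mapper mapping (dm-crypt?): dm-0" error suggests a leftover dm-crypt mapping from the earlier failed prepare is still holding the partition. A quick way to check, and to tear down a stale mapping, is sketched below (the mapping name is a placeholder; only remove a mapping you are sure is a stale leftover and not in use):

```shell
# List active dm-crypt mappings; one of them should point at /dev/sdb2
sudo dmsetup ls --target crypt
# Show which block device backs a given mapping
sudo dmsetup deps -o devname /dev/mapper/<mapping-name>
# If it is a stale leftover from a failed prepare, remove it and retry
sudo cryptsetup remove /dev/mapper/<mapping-name>
```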

Best Regards,
Michael Lukzak


Hi,

I tried to use a whole new blank disk, creating two separate partitions (one for data, the second for the journal)
with dmcrypt, but there is a problem. It looks like the partitions are not being mounted or
formatted.

The OS is Ubuntu 13.04 with Ceph v0.72 (Emperor).

I used the command:

ceph-deploy osd prepare ceph-node0:sdb --dmcrypt --dmcrypt-key-dir=/root --fs-type=xfs

[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][DEBUG ] Creating new GPT entries.
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] meta-data="" isize=2048   agcount=4, agsize=720831 blks
[ceph-node0][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[ceph-node0][DEBUG ] data     =                       bsize=4096   blocks=2883323, imaxpct=25
[ceph-node0][DEBUG ]          =                       sunit=0      swidth=0 blks
[ceph-node0][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[ceph-node0][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
[ceph-node0][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[ceph-node0][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.

Here it looks like everything went fine, but there is a problem on the host ceph-node0 (where disk sdb lives).
Here is a dump from syslog (on ceph-node0):

Mar 17 14:03:02 ceph-node0 kernel: [   68.645938] sd 2:0:1:0: [sdb] Cache data unavailable
Mar 17 14:03:02 ceph-node0 kernel: [   68.645943] sd 2:0:1:0: [sdb] Assuming drive cache: write through
Mar 17 14:03:02 ceph-node0 kernel: [   68.708930]  sdb: sdb1 sdb2
Mar 17 14:03:02 ceph-node0 kernel: [   68.996013] bio: create slab <bio-1> at 1
Mar 17 14:03:03 ceph-node0 kernel: [   69.613407] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled
Mar 17 14:03:03 ceph-node0 kernel: [   69.619904] XFS (dm-0): Mounting Filesystem
Mar 17 14:03:03 ceph-node0 kernel: [   69.658693] XFS (dm-0): Ending clean mount
Mar 17 14:03:04 ceph-node0 kernel: [   70.745337] sd 2:0:1:0: [sdb] Cache data unavailable
Mar 17 14:03:04 ceph-node0 kernel: [   70.745342] sd 2:0:1:0: [sdb] Assuming drive cache: write through
Mar 17 14:03:04 ceph-node0 kernel: [   70.750667]  sdb: sdb1 sdb2
Mar 17 14:04:05 ceph-node0 udevd[515]: timeout: killing '/bin/bash -c 'while [ ! -e /dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' [1903]
Mar 17 14:04:05 ceph-node0 udevd[515]: '/bin/bash -c 'while [ ! -e /dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' [1903] terminated by signal 9 (Killed)
Mar 17 14:05:07 ceph-node0 udevd[515]: timeout: killing '/bin/bash -c 'while [ ! -e /dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' [2215]
Mar 17 14:05:07 ceph-node0 udevd[515]: '/bin/bash -c 'while [ ! -e /dev/mapper/d92421e6-27c3-498a-9754-b5c10a281500 ];do sleep 1; done'' [2215] terminated by signal 9 (Killed)

Two partitions (sdb1 and sdb2) are created, but there appears to be a problem mounting or formatting them; I can't figure out what.
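The udevd lines above are the key: ceph's udev hook waits for the dm-crypt mapping /dev/mapper/&lt;uuid&gt; to appear, and udevd kills the wait after its event timeout, so the mapping is never mounted. The loop being killed is essentially a bounded wait like this (a generic sketch; wait_for_path is my name for it, not ceph's):

```shell
# Generic version of the wait loop udevd is killing above: poll for a path,
# give up after max seconds (udevd additionally enforces its own event timeout).
wait_for_path() {
    path=$1
    max=$2
    i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -gt "$max" ]; then
            return 1    # timed out; in ceph's case the mapping never appeared
        fi
        sleep 1
    done
    return 0
}
```

In the failing run the mapping d92421e6-... never shows up, so the loop spins until udevd sends SIGKILL, which matches the "terminated by signal 9 (Killed)" lines.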

parted shows that sdb1 and sdb2 exist, but the filesystem column is empty:
 2     1049kB    5369MB  5368MB                  ceph journal
 1     5370MB    17,2GB  11,8GB                          ceph data

The dmcrypt keys are stored in /root.
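For context, what ceph-disk does with --dmcrypt is roughly the following (a sketch from observed behaviour, not the exact code; &lt;part-uuid&gt; is a placeholder): it writes a key file into the key dir, sets up a plain dm-crypt mapping named after the partition uuid, then formats and mounts /dev/mapper/&lt;part-uuid&gt; instead of the raw partition. That is why the udev wait looks for /dev/mapper/d92421e6-...:

```shell
# Rough sketch of the --dmcrypt path in ceph-disk (not the exact code):
# 1. create a random key for the partition in the key dir
dd if=/dev/urandom of=/root/<part-uuid> bs=256 count=1
# 2. map the partition as plain dm-crypt, named after its uuid
cryptsetup --key-file /root/<part-uuid> create <part-uuid> /dev/sdb1
# 3. format and use the mapping instead of the raw partition
mkfs -t xfs /dev/mapper/<part-uuid>
```

So if step 2 never completes (or udev never sees the mapping), the partition stays unformatted, which matches the empty filesystem column above.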

So let's try without the --dmcrypt switch:
ceph-deploy osd prepare ceph-node0:sdb --fs-type=xfs
[ceph_deploy.cli][INFO  ] Invoked (1.3.5): /usr/bin/ceph-deploy osd prepare ceph-node0:sdb --fs-type=xfs
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
[ceph-node0][DEBUG ] connected to host: ceph-node0
[ceph-node0][DEBUG ] detect platform information from remote host
[ceph-node0][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
[ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None activate False
[ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb
[ceph-node0][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
[ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] Information: Moved requested sector from 10485761 to 10487808 in
[ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph-node0][DEBUG ] meta-data=""              isize=2048   agcount=4, agsize=720831 blks
[ceph-node0][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[ceph-node0][DEBUG ] data     =                       bsize=4096   blocks=2883323, imaxpct=25
[ceph-node0][DEBUG ]          =                       sunit=0      swidth=0 blks
[ceph-node0][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[ceph-node0][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
[ceph-node0][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[ceph-node0][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph-node0][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.


Two partitions, sdb1 and sdb2, are created and mounted properly on host ceph-node0.

Here is a dump from syslog (on ceph-node0):
Mar 17 14:08:20 ceph-node0 kernel: [  385.330968] sd 2:0:1:0: [sdb] Cache data unavailable
Mar 17 14:08:20 ceph-node0 kernel: [  385.330973] sd 2:0:1:0: [sdb] Assuming drive cache: write through
Mar 17 14:08:20 ceph-node0 kernel: [  385.335410]  sdb: sdb2
Mar 17 14:08:21 ceph-node0 kernel: [  386.845878] sd 2:0:1:0: [sdb] Cache data unavailable
Mar 17 14:08:21 ceph-node0 kernel: [  386.845883] sd 2:0:1:0: [sdb] Assuming drive cache: write through
Mar 17 14:08:21 ceph-node0 kernel: [  386.851324]  sdb: sdb1 sdb2
Mar 17 14:08:22 ceph-node0 kernel: [  387.469774] XFS (sdb1): Mounting Filesystem
Mar 17 14:08:22 ceph-node0 kernel: [  387.492869] XFS (sdb1): Ending clean mount
Mar 17 14:08:23 ceph-node0 kernel: [  388.549737] sd 2:0:1:0: [sdb] Cache data unavailable
Mar 17 14:08:23 ceph-node0 kernel: [  388.549742] sd 2:0:1:0: [sdb] Assuming drive cache: write through
Mar 17 14:08:23 ceph-node0 kernel: [  388.564160]  sdb: sdb1 sdb2
Mar 17 14:08:23 ceph-node0 kernel: [  388.922841] XFS (sdb1): Mounting Filesystem
Mar 17 14:08:23 ceph-node0 kernel: [  388.974655] XFS (sdb1): Ending clean mount

And a dump from parted (now sdb1 shows that a filesystem (xfs) is present):
 2     1049kB    5369MB  5368MB                  ceph journal
 1     5370MB    17,2GB  11,8GB   xfs            ceph data


Ceph shows that the new OSD has arrived; there is no problem activating and using it.

OK, so my question is: what is the problem with dmcrypt?

-- 
Best regards,
 Michael Lukzak




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
