Re: Can't activate OSD with journal and data on the same disk

I made a ticket for this: http://tracker.ceph.com/issues/6740
Thanks for the bug report!
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Fri, Nov 8, 2013 at 1:51 AM, Michael Lukzak <miszko@xxxxx> wrote:
> Hi,
>
> News: I tried activating the disk without --dmcrypt and there is no problem. After
> activation there are two partitions on sdb (sdb2 for journal and sdb1 for data).
>
> In my opinion there is a bug with the --dmcrypt switch when the journal is
> colocated on the disk (the partitions are created, but the mounting done by ceph-disk fails).
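For anyone hitting this, the symptom (the GPT defines a partition the kernel has not yet created a device node for) can be checked with a small sketch like the one below. The function name and the sample partition lists are illustrative only, not part of ceph-disk; in practice you would feed it the names from `sgdisk -p` and from /proc/partitions.

```shell
# Report partitions that the partition table defines but the kernel does
# not (yet) know about. Pure string comparison, safe to run anywhere.
missing_parts() {
    # $1 = whitespace-separated names from the partition table
    # $2 = whitespace-separated names the kernel knows (e.g. /proc/partitions)
    for p in $1; do
        case " $2 " in
            *" $p "*) ;;        # kernel sees it, nothing to report
            *) echo "$p" ;;     # table has it, kernel does not
        esac
    done
}

# With the state from this report: the table has sdb1 and sdb2,
# but /dev only shows sdb2:
missing_parts "sdb1 sdb2" "sda sda1 sda2 sda5 sdb sdb2"
```

This prints `sdb1`, matching the missing device node described below.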
>
> Here are logs without --dmcrypt
>
> root@ceph-deploy:~/ceph# ceph-deploy osd prepare ceph-node0:/dev/sdb
> [ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy osd prepare ceph-node0:/dev/sdb
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
> [ceph-node0][DEBUG ] connected to host: ceph-node0
> [ceph-node0][DEBUG ] detect platform information from remote host
> [ceph-node0][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
> [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
> [ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [ceph-node0][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
> [ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None activate False
> [ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb
> [ceph-node0][ERROR ] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
> [ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
> [ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
> [ceph-node0][DEBUG ] The operation has completed successfully.
> [ceph-node0][DEBUG ] Information: Moved requested sector from 2097153 to 2099200 in
> [ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
> [ceph-node0][DEBUG ] The operation has completed successfully.
> [ceph-node0][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=917439 blks
> [ceph-node0][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
> [ceph-node0][DEBUG ] data     =                       bsize=4096   blocks=3669755, imaxpct=25
> [ceph-node0][DEBUG ]          =                       sunit=0      swidth=0 blks
> [ceph-node0][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
> [ceph-node0][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
> [ceph-node0][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
> [ceph-node0][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
> [ceph-node0][DEBUG ] The operation has completed successfully.
> [ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.
>
> The disk is properly activated. With --dmcrypt the journal partition is not
> properly mounted and ceph-disk cannot use it.
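As a workaround sketch (not verified against this ceph-disk version): the journal partition can in principle be mapped by hand with the key that ceph-disk stored. The per-partition-UUID key-file naming and the plain-mode cryptsetup options below are assumptions; check what your ceph-disk actually invokes before running anything. The snippet only composes and prints the command:

```shell
# Sketch only: compose (without executing) the cryptsetup call that would
# map the dm-crypt journal partition by hand. KEY_DIR matches the report;
# the key-file naming and the plain-mode options are assumptions.
KEY_DIR=/etc/ceph/dmcrypt-keys

map_cmd() {
    dev=$1    # partition device, e.g. /dev/sdb2
    uuid=$2   # partition GUID used to name the key file and the mapping
    echo "cryptsetup --key-file $KEY_DIR/$uuid --key-size 256 create $uuid $dev"
}

# Print the command for the journal partition (the UUID is a placeholder):
map_cmd /dev/sdb2 00000000-0000-0000-0000-000000000000
```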
>
> Best Regards,
> Michael
>
>
>>
>>  Hi!
>>
>>  I have a question about activating an OSD on a whole disk. I can't get past this issue.
>>  Conf spec: 8 VMs - ceph-deploy; ceph-admin; ceph-mon0-2 and ceph-node0-2;
>>
>>  I started by creating the MONs - all good.
>>  After that I wanted to prepare and activate 3 OSDs with dm-crypt.
>>
>>  So I put this in ceph.conf:
>>
>>  [osd.0]
>>          host = ceph-node0
>>          cluster addr = 10.0.0.75:6800
>>          public addr = 10.0.0.75:6801
>>          devs = /dev/sdb
>>
>>  Next I used ceph-deploy to prepare an OSD, and it shows:
>>
>>  root@ceph-deploy:~/ceph# ceph-deploy osd prepare ceph-node0:/dev/sdb --dmcrypt
>>  [ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy
>> osd prepare ceph-node0:/dev/sdb --dmcrypt
>>  [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:
>>  [ceph-node0][DEBUG ] connected to host: ceph-node0
>>  [ceph-node0][DEBUG ] detect platform information from remote host
>>  [ceph-node0][DEBUG ] detect machine type
>>  [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
>>  [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
>>  [ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>>  [ceph-node0][INFO  ] Running command: udevadm trigger
>> --subsystem-match=block --action=add
>>  [ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal None activate False
>>  [ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type
>> xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
>>  [ceph-node0][ERROR ] INFO:ceph-disk:Will colocate journal with data on /dev/sdb
>>  [ceph-node0][ERROR ] ceph-disk: Error: partition 1 for /dev/sdb does not appear to exist
>>  [ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
>>  [ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
>>  [ceph-node0][DEBUG ] The operation has completed successfully.
>>  [ceph-node0][DEBUG ] Information: Moved requested sector from 2097153 to 2099200 in
>>  [ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
>>  [ceph-node0][DEBUG ] Warning: The kernel is still using the old partition table.
>>  [ceph-node0][DEBUG ] The new table will be used at the next reboot.
>>  [ceph-node0][DEBUG ] The operation has completed successfully.
>>  [ceph-node0][ERROR ] Traceback (most recent call last):
>>  [ceph-node0][ERROR ]   File
>> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/process.py", line 68, in run
>>  [ceph-node0][ERROR ]     reporting(conn, result, timeout)
>>  [ceph-node0][ERROR ]   File
>> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/log.py", line 13, in reporting
>>  [ceph-node0][ERROR ]     received = result.receive(timeout)
>>  [ceph-node0][ERROR ]   File
>> "/usr/lib/python2.7/dist-packages/ceph_deploy/lib/remoto/lib/execnet/gateway_base.py", line 455, in receive
>>  [ceph-node0][ERROR ]     raise self._getremoteerror() or EOFError()
>>  [ceph-node0][ERROR ] RemoteError: Traceback (most recent call last):
>>  [ceph-node0][ERROR ]   File "<string>", line 806, in executetask
>>  [ceph-node0][ERROR ]   File "", line 35, in _remote_run
>>  [ceph-node0][ERROR ] RuntimeError: command returned non-zero exit status: 1
>>  [ceph-node0][ERROR ]
>>  [ceph-node0][ERROR ]
>>  [ceph_deploy.osd][ERROR ] Failed to execute command:
>> ceph-disk-prepare --fs-type xfs --dmcrypt --dmcrypt-key-dir
>> /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb
>>  [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
>>
>>
>>  It looks like ceph-disk-prepare can't mount (activate?) one of the partitions.
>>  So I went to ceph-node0 and listed the disks, which shows:
>>
>>  root@ceph-node0:~# ls /dev/sd
>>  sda   sda1  sda2  sda5  sdb   sdb2
>>
>>  Oops - there is no sdb1.
>>
>>  So I printed all partitions on /dev/sdb and there are two:
>>
>>  Number  Beg     End     Size    Filesystem  Name          Flags
>>  2       1049kB  1074MB  1073MB              ceph journal
>>  1       1075MB  16,1GB  15,0GB              ceph data
>>
>>  Where sdb1 should be for data and sdb2 for journal.
>>
>>  When I restart the VM, /dev/sdb1 starts showing up:
>>  root@ceph-node0:~# ls /dev/sd
>>  sda   sda1  sda2  sda5  sdb   sdb1   sdb2
>>  But I can't mount it.
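The "kernel is still using the old partition table" warning in the failed run suggests a reboot should not be needed: asking the kernel to re-read the table should make /dev/sdb1 appear. A hedged sketch follows; the DISK variable and the DRY_RUN guard are additions of mine so the snippet prints the commands instead of touching a disk by default (run as root with DRY_RUN=0 to actually execute):

```shell
# Ask the kernel to pick up the new GPT entries without rebooting.
# Prints the commands by default; set DRY_RUN=0 to really run them.
DISK=${DISK:-/dev/sdb}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"    # print instead of executing
    else
        "$@"
    fi
}

run partprobe "$DISK"    # re-read the partition table
run udevadm settle       # wait for udev to create /dev/sdb1, /dev/sdb2
```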
>>
>>  When I put the journal on a separate file/disk, there is no problem with activating
>>  (the journal is on a separate disk, and all the data is on sdb1).
>>  Here is the log from this action (I put the journal in a file at /mnt/sdb2):
>>
>>  root@ceph-deploy:~/ceph# ceph-deploy osd prepare
>> ceph-node0:/dev/sdb:/mnt/sdb2 --dmcrypt
>>  [ceph_deploy.cli][INFO  ] Invoked (1.3.1): /usr/bin/ceph-deploy
>> osd prepare ceph-node0:/dev/sdb:/mnt/sdb2 --dmcrypt
>>  [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph-node0:/dev/sdb:/mnt/sdb2
>>  [ceph-node0][DEBUG ] connected to host: ceph-node0
>>  [ceph-node0][DEBUG ] detect platform information from remote host
>>  [ceph-node0][DEBUG ] detect machine type
>>  [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 13.04 raring
>>  [ceph_deploy.osd][DEBUG ] Deploying osd to ceph-node0
>>  [ceph-node0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
>>  [ceph-node0][INFO  ] Running command: udevadm trigger
>> --subsystem-match=block --action=add
>>  [ceph_deploy.osd][DEBUG ] Preparing host ceph-node0 disk /dev/sdb journal /mnt/sdb2 activate False
>>  [ceph-node0][INFO  ] Running command: ceph-disk-prepare --fs-type
>> xfs --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys --cluster ceph -- /dev/sdb /mnt/sdb2
>>  [ceph-node0][ERROR ] WARNING:ceph-disk:OSD will not be
>> hot-swappable if journal is not the same device as the osd data
>>  [ceph-node0][DEBUG ] Information: Moved requested sector from 34 to 2048 in
>>  [ceph-node0][DEBUG ] order to align on 2048-sector boundaries.
>>  [ceph-node0][DEBUG ] The operation has completed successfully.
>>  [ceph-node0][DEBUG ]
>> meta-data=/dev/mapper/299e07ff-31bf-49c4-a8de-62a8e4203c04
>> isize=2048   agcount=4, agsize=982975 blks
>>  [ceph-node0][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
>>  [ceph-node0][DEBUG ] data     =                       bsize=4096   blocks=3931899, imaxpct=25
>>  [ceph-node0][DEBUG ]          =                       sunit=0      swidth=0 blks
>>  [ceph-node0][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
>>  [ceph-node0][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
>>  [ceph-node0][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
>>  [ceph-node0][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
>>  [ceph-node0][DEBUG ] The operation has completed successfully.
>>  [ceph_deploy.osd][DEBUG ] Host ceph-node0 is now ready for osd use.
>>
>>  Listed partitions (there is only one):
>>  Number  Beg     End     Size    Filesystem  Name       Flags
>>  1       1049kB  16,1GB  16,1GB              ceph data
>>
>>  The keys are properly stored in /etc/ceph/dmcrypt-keys/ and sdb1 is
>> mounted at /var/lib/ceph/osd/ceph-1.
>>  Ceph starts showing that this OSD is in the cluster. Yay! But this is no solution for me ;)
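Before retrying the colocated-journal prepare on the same disk, the leftovers from a failed run have to be wiped first. A sketch of the retry sequence, using the host and disk names from this report; the snippet only prints the commands (drop the echo to actually run them - zap is destructive):

```shell
# Retry sequence after a failed colocated-journal prepare.
# Printed rather than executed; 'disk zap' wipes the partition table.
retry_steps() {
    echo "ceph-deploy disk zap ceph-node0:/dev/sdb"
    echo "ceph-deploy osd prepare ceph-node0:/dev/sdb --dmcrypt"
}

retry_steps
```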
>>
>>  So the final question is - where am I wrong? Why can't I activate the journal on the same disk?
>>
>>  --
>>  Best Regards,
>>   Michael Lukzak
>
>
> --
> Best Regards,
>  Michael Lukzak
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



