On Tue, Aug 13, 2013 at 3:21 AM, Pavel Timoschenkov <Pavel@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Hi.
Yes, I zapped all the disks beforehand.
More about my situation:
sdaa - one of the data disks: 3 TB, with a GPT partition table.
sda - an SSD drive with manually created 10 GB partitions for journals, with an MBR partition table.
===================================
fdisk -l /dev/sda
Disk /dev/sda: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00033624
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048    19531775     9764864   83  Linux
/dev/sda2        19531776    39061503     9764864   83  Linux
/dev/sda3        39061504    58593279     9765888   83  Linux
/dev/sda4        78125056    97656831     9765888   83  Linux
===================================
If I execute ceph-deploy osd prepare without the journal option, everything works fine:
ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:
ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal None activate False
root@ceph001:~# gdisk -l /dev/sdaa
GPT fdisk (gdisk) version 0.8.1
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdaa: 5860533168 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 575ACF17-756D-47EC-828B-2E0A0B8ED757
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 5860533134
Partitions will be aligned on 2048-sector boundaries
Total free space is 4061 sectors (2.0 MiB)
Number  Start (sector)    End (sector)  Size        Code  Name
   1          2099200      5860533134   2.7 TiB     FFFF  ceph data
   2             2048         2097152   1023.0 MiB  FFFF  ceph journal
The problems start when I try to create the journal on a separate drive:
ceph@ceph-admin:~$ ceph-deploy disk zap ceph001:sdaa ceph001:sda1
[ceph_deploy.osd][DEBUG ] zapping /dev/sdaa on ceph001
[ceph_deploy.osd][DEBUG ] zapping /dev/sda1 on ceph001
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:/dev/sda1
ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph001
[ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal /dev/sda1 activate False
[ceph_deploy.osd][ERROR ] ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
Information: Moved requested sector from 34 to 2048 in order to align on 2048-sector boundaries.
The operation has completed successfully.
meta-data="" isize=2048 agcount=32, agsize=22892700 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=732566385, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=357698, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
mount: /dev/sdaa1: more filesystems detected. This should not happen,
use -t <type> to explicitly specify the filesystem type or
use wipefs(8) to clean up the device.
mount: you must specify the filesystem type
ceph-disk: Mounting filesystem failed: Command '['mount', '-o', 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.fZQxiz']' returned non-zero exit status 32
ceph-deploy: Failed to create 1 OSDs
It looks like at some point the filesystem type is not being passed to the mount options. Would you mind running the `ceph-disk-prepare` command again with the --verbose flag?
I think from the output above (correct me if I am mistaken) that would be something like:
ceph-disk-prepare --verbose -- /dev/sdaa /dev/sda1
And paste the results back so we can take a look?
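In the meantime, since mount is complaining about multiple filesystem signatures, it might also be worth checking what wipefs reports on those partitions and, if the signatures look stale, clearing them before retrying. A rough sketch, assuming the data partition shows up as /dev/sdaa1 after prepare:

# list any leftover filesystem/RAID signatures on the partitions
wipefs /dev/sdaa1
wipefs /dev/sda1

# if stale signatures are listed, erase them (destructive: only on
# devices you intend to reuse for Ceph) and run prepare again
wipefs -a /dev/sdaa1
wipefs -a /dev/sda1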
-----Original Message-----
From: Samuel Just [mailto:sam.just@xxxxxxxxxxx]
Sent: Monday, August 12, 2013 11:39 PM
To: Pavel Timoschenkov
Cc: ceph-users@xxxxxxxx
Subject: Re: ceph-deploy and journal on separate disk
Did you try using ceph-deploy disk zap ceph001:sdaa first?
-Sam
On Mon, Aug 12, 2013 at 6:21 AM, Pavel Timoschenkov <Pavel@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hi.
>
> I have some problems creating the journal on a separate disk using the
> ceph-deploy osd prepare command.
>
> When I try to execute the following command:
>
> ceph-deploy osd prepare ceph001:sdaa:sda1
>
> where:
>
> sdaa - disk for ceph data
>
> sda1 - partition on ssd drive for journal
>
> I get the following errors:
>
> ======================================================================
> ==================================
>
> ceph@ceph-admin:~$ ceph-deploy osd prepare ceph001:sdaa:sda1
>
> ceph-disk-prepare -- /dev/sdaa /dev/sda1 returned 1
>
> Information: Moved requested sector from 34 to 2048 in
>
> order to align on 2048-sector boundaries.
>
> The operation has completed successfully.
>
> meta-data="" isize=2048 agcount=32, agsize=22892700
> blks
>
> = sectsz=512 attr=2, projid32bit=0
>
> data = bsize=4096 blocks=732566385, imaxpct=5
>
> = sunit=0 swidth=0 blks
>
> naming =version 2 bsize=4096 ascii-ci=0
>
> log =internal log bsize=4096 blocks=357698, version=2
>
> = sectsz=512 sunit=0 blks, lazy-count=1
>
> realtime =none extsz=4096 blocks=0, rtextents=0
>
>
>
> WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the
> same device as the osd data
>
> mount: /dev/sdaa1: more filesystems detected. This should not happen,
>
> use -t <type> to explicitly specify the filesystem type or
>
> use wipefs(8) to clean up the device.
>
>
>
> mount: you must specify the filesystem type
>
> ceph-disk: Mounting filesystem failed: Command '['mount', '-o',
> 'noatime', '--', '/dev/sdaa1', '/var/lib/ceph/tmp/mnt.ek6mog']'
> returned non-zero exit status 32
>
>
>
> Has anyone had a similar problem?
>
> Thanks for the help
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com