German,
Flushing a GPT partition table with dd does not work, as the table is duplicated at the end of the disk as well.
Use the sgdisk -Z command instead.
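For example (a sketch only; double-check the device name before running it, since this wipes the primary GPT, the backup GPT at the end of the disk, and the protective MBR):

# sgdisk -Z /dev/sdf       # zap both GPT copies and the protective MBR
# partprobe /dev/sdf       # have the kernel re-read the now-empty table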
Paul
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Mykola <mykola.dvornik@xxxxxxxxx>
Date: Thursday, 19 November 2015 at 18:43
To: German Anders <ganders@xxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: ceph osd prepare cmd on infernalis 9.2.0
I believe the error message says that there is no space left on the device for the second partition to be created. Perhaps try to flush the GPT with good old dd.
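For what it's worth, the numbers line up with that reading: sgdisk was asked for a 5120 MiB journal, and 5120 MiB / 512 B = 10485760 sectors, exactly the 'to 10485760' end in the error, while the first free sector (10485761) already lies past it because the old journal partition still occupies that space. A sketch of the dd approach, using your /dev/sdf (destructive, so double-check the device first):

# dd if=/dev/zero of=/dev/sdf bs=512 count=34   # primary GPT: protective MBR (LBA 0), header (LBA 1), 32 sectors of entries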
From: German Anders
Sent: Thursday, November 19, 2015 7:25 PM
To: Mykola Dvornik
Cc: ceph-users
Subject: Re: ceph osd prepare cmd on infernalis 9.2.0
I've already tried that, with no luck at all.
On Thursday, 19 November 2015, Mykola Dvornik <mykola.dvornik@xxxxxxxxx> wrote:
'Could not create partition 2 from 10485761 to 10485760'.
Perhaps try to zap the disks first?
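For instance, with the host and device from your log (your output shows zap_disk : False, so the old table was left in place; both variants below are just a sketch):

$ ceph-deploy disk zap cibn05:sdf
$ ceph-deploy osd prepare --fs-type btrfs cibn05:sdf

or in one step:

$ ceph-deploy osd prepare --zap-disk --fs-type btrfs cibn05:sdf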
On 19 November 2015 at 16:22, German Anders <ganders@xxxxxxxxxxxx> wrote:
Hi cephers,
I ran into an issue while running the osd prepare command:
ceph version: infernalis 9.2.0
disk: /dev/sdf (745.2G)
/dev/sdf1 740.2G
/dev/sdf2 5G
# parted /dev/sdf
GNU Parted 2.3
Using /dev/sdf
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA INTEL SSDSC2BB80 (scsi)
Disk /dev/sdf: 800GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number  Start   End     Size    File system  Name          Flags
 2      1049kB  5369MB  5368MB               ceph journal
 1      5370MB  800GB   795GB   btrfs        ceph data
cibn05:
$ ceph-deploy osd prepare --fs-type btrfs cibn05:sdf
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.28): /usr/local/bin/ceph-deploy osd prepare --fs-type btrfs cibn05:sdf
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('cibn05', '/dev/sdf', None)]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : prepare
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fbb1df85830>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : btrfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fbb1e1d9050>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cibn05:/dev/sdf:
[cibn05][DEBUG ] connection detected need for sudo
[cibn05][DEBUG ] connected to host: cibn05
[cibn05][DEBUG ] detect platform information from remote host
[cibn05][DEBUG ] detect machine type
[cibn05][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to cibn05
[cibn05][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cibn05][INFO ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host cibn05 disk /dev/sdf journal None activate False
[cibn05][INFO ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdf
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is /sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is /sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is /sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf1 uuid path is /sys/dev/block/8:81/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf2 uuid path is /sys/dev/block/8:82/dm/uuid
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_btrfs
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[cibn05][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is /sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdf
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is /sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdf uuid path is /sys/dev/block/8:80/dm/uuid
[cibn05][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdf
[cibn05][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:6a9a83f1-2196-4833-a4c8-8f3a424de54f --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdf
[cibn05][WARNIN] Could not create partition 2 from 10485761 to 10485760
[cibn05][WARNIN] Error encountered; not saving changes.
[cibn05][WARNIN] Traceback (most recent call last):
[cibn05][WARNIN] File "/usr/sbin/ceph-disk", line 3576, in <module>
[cibn05][WARNIN] main(sys.argv[1:])
[cibn05][WARNIN] File "/usr/sbin/ceph-disk", line 3530, in main
[cibn05][WARNIN] args.func(args)
[cibn05][WARNIN] File "/usr/sbin/ceph-disk", line 1863, in main_prepare
[cibn05][WARNIN] luks=luks
[cibn05][WARNIN] File "/usr/sbin/ceph-disk", line 1465, in prepare_journal
[cibn05][WARNIN] return prepare_journal_dev(data, journal, journal_size, journal_uuid, journal_dm_keypath, cryptsetup_parameters, luks)
[cibn05][WARNIN] File "/usr/sbin/ceph-disk", line 1419, in prepare_journal_dev
[cibn05][WARNIN] raise Error(e)
[cibn05][WARNIN] __main__.Error: Error: Command '['/sbin/sgdisk', '--new=2:0:5120M', '--change-name=2:ceph journal', '--partition-guid=2:6a9a83f1-2196-4833-a4c8-8f3a424de54f', '--typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106', '--mbrtogpt', '--', '/dev/sdf']' returned non-zero exit status 4
[cibn05][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdf
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

Any ideas?
Thanks in advance,
German
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Mykola
--
German Anders
Storage Engineer Manager
Despegar | IT Team
office +54 11 4894 3500 x3408
mobile +54 911 3493 7262
mail ganders@xxxxxxxxxxxx