Re: ceph-deploy prepare btrfs osd error

Thanks a lot Simon, this helped me resolve the issue; it was the bug you mentioned.

Best regards,

German

2015-09-07 5:34 GMT-03:00 Simon Hallam <sha@xxxxxxxxx>:

Hi German,

 

This is what I’m running to redo an OSD as btrfs (not sure if this is the exact error you’re getting):

 

DISK_LETTER=( a b c d e f g h i j k l )

i=0

for OSD_NUM in {12..23}; do
    sudo /etc/init.d/ceph stop osd.${OSD_NUM}
    sudo umount /var/lib/ceph/osd/ceph-${OSD_NUM}
    sudo ceph auth del osd.${OSD_NUM}
    sudo ceph osd crush remove osd.${OSD_NUM}
    sudo ceph osd rm ${OSD_NUM}

    # recreate again
    sudo wipefs /dev/sd${DISK_LETTER[$i]}1
    sudo dd if=/dev/zero of=/dev/sd${DISK_LETTER[$i]}1 bs=4k count=10000
    sudo sgdisk --zap-all --clear -g /dev/sd${DISK_LETTER[$i]}
    sudo kpartx -dug /dev/sd${DISK_LETTER[$i]}
    sudo partprobe /dev/sd${DISK_LETTER[$i]}
    sudo dd if=/dev/zero of=/dev/sd${DISK_LETTER[$i]} bs=4k count=10000
    sudo ceph-disk zap /dev/sd${DISK_LETTER[$i]}

    echo ""
    echo "ceph-deploy --overwrite-conf disk prepare --fs-type btrfs ceph2:sd${DISK_LETTER[$i]}"
    echo ""
    read -p "Press [Enter] key to continue next disk after running the above command on ceph1"

    i=$((i + 1))
done
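
The loop itself runs on the OSD host (ceph2 in this example) and pauses on each disk so the echoed ceph-deploy line can be run from the admin node (ceph1 in the prompt) before moving on; adjust the OSD range and the DISK_LETTER mapping to match your own layout.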

 

There appears to be an issue with zap not wiping the partitions correctly. http://tracker.ceph.com/issues/6258
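
If zap does leave stale GPT data behind, a minimal manual wipe (a sketch only, assuming /dev/sdX is the OSD disk and that it holds nothing you need) would be along the lines of:

sudo sgdisk --zap-all /dev/sdX                    # destroy the GPT (primary and backup) and the protective MBR
sudo dd if=/dev/zero of=/dev/sdX bs=1M count=10   # zero the first few MB for good measure
sudo partprobe /dev/sdX                           # have the kernel re-read the now-empty partition table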

 

Yours seems slightly different though. Curious, what size disk are you trying to use?
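
A quick way to see what size the kernel actually reports for the device and its partitions (assuming /dev/sdc, as in the log below) is:

sudo blockdev --getsize64 /dev/sdc   # whole-device size in bytes
lsblk -b /dev/sdc                    # disk and partition sizes in bytes

The "SMALL VOLUME" line in the mkfs output suggests the partition it was handed looked tiny (btrfs only forces mixed metadata/data groups on volumes of roughly a gigabyte or less).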

 

Cheers,

 

Simon

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of German Anders
Sent: 04 September 2015 19:53
To: ceph-users
Subject: ceph-deploy prepare btrfs osd error

 

Any ideas?

ceph@cephdeploy01:~/ceph-ib$ ceph-deploy osd prepare --fs-type btrfs cibosd04:sdc
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.28): /usr/bin/ceph-deploy osd prepare --fs-type btrfs cibosd04:sdc
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('cibosd04', '/dev/sdc', None)]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : prepare
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7faf715a0bd8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : btrfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7faf71576938>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks cibosd04:/dev/sdc:
[cibosd04][DEBUG ] connection detected need for sudo
[cibosd04][DEBUG ] connected to host: cibosd04
[cibosd04][DEBUG ] detect platform information from remote host
[cibosd04][DEBUG ] detect machine type
[cibosd04][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to cibosd04
[cibosd04][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[cibosd04][INFO  ] Running command: sudo udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host cibosd04 disk /dev/sdc journal None activate False
[cibosd04][INFO  ] Running command: sudo ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_btrfs
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_btrfs
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_btrfs
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[cibosd04][WARNIN] INFO:ceph-disk:Will colocate journal with data on /dev/sdc
[cibosd04][WARNIN] DEBUG:ceph-disk:Creating journal partition num 2 size 5120 on /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --new=2:0:5120M --change-name=2:ceph journal --partition-guid=2:e4d02c3f-0fd4-4270-a33f-15191cd86f1b --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/sdc
[cibosd04][DEBUG ] Creating new GPT entries.
[cibosd04][DEBUG ] The operation has completed successfully.
[cibosd04][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[cibosd04][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/e4d02c3f-0fd4-4270-a33f-15191cd86f1b
[cibosd04][WARNIN] DEBUG:ceph-disk:Journal is GPT partition /dev/disk/by-partuuid/e4d02c3f-0fd4-4270-a33f-15191cd86f1b
[cibosd04][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:082e0de9-1d32-4502-ba78-4649cfaa0d83 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdc
[cibosd04][DEBUG ] The operation has completed successfully.
[cibosd04][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdc
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[cibosd04][WARNIN] DEBUG:ceph-disk:Creating btrfs fs on /dev/sdc1
[cibosd04][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t btrfs -m single -l 32768 -n 32768 -- /dev/sdc1
[cibosd04][DEBUG ] SMALL VOLUME: forcing mixed metadata/data groups
[cibosd04][WARNIN] Error: mixed metadata/data block groups require metadata blocksizes equal to the sectorsize
[cibosd04][WARNIN] ceph-disk: Error: Command '['/sbin/mkfs', '-t', 'btrfs', '-m', 'single', '-l', '32768', '-n', '32768', '--', '/dev/sdc1']' returned non-zero exit status 1
[cibosd04][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdc
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

I already tried running gdisk on /dev/sdc and deleting the partitions, then running the command again, but I get the same error.
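
As a side note, the -l 32768 -n 32768 flags in the failing mkfs call are the btrfs mkfs options that ceph-disk looks up (the osd_mkfs_options_btrfs lookup earlier in the log). If the partition genuinely were that small, a hypothetical workaround would be to override those options in ceph.conf so the leaf/node size matches the sector size (typically 4 KB), e.g.:

[osd]
osd_mkfs_options_btrfs = -m single -l 4096 -n 4096

In this case, though, the volume presumably only looked small because of leftover partition data (the zap issue referenced above), so wiping the disk thoroughly is the real fix rather than changing the mkfs options.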

 

Thanks in advance,

German


