Re: ceph-deploy prepare journal on software raid ( md device )

Hi to myself =)

Just in case others run into the same:

#1: You will have to update parted from version 3.1 to 3.2 (for example,
simply take the Fedora package, which is newer, and replace it with
that); parted is the package responsible for partprobe.

#2: Software RAID will still not work, because of the GUID of the
partition: ceph-deploy will recognize it as something different than
expected.
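As a hedged illustration of that GUID point (this snippet is mine, not
from the original mail): ceph-disk tags the journal partition with the
type GUID 45b0969e-9b03-4f30-b4c6-b4b80ceff106, which you can see in the
sgdisk --typecode call in the log below. On a live node you could read
the actual value with "sgdisk --info=1 /dev/md128" and compare; the
sketch below only shows the comparison itself, against a made-up sample
line:

```shell
# Expected Ceph journal type GUID, taken verbatim from the sgdisk
# --typecode argument in the ceph-disk log below.
CEPH_JOURNAL_GUID="45b0969e-9b03-4f30-b4c6-b4b80ceff106"

# On a real node you would read this line from e.g.:
#   sgdisk --info=1 /dev/md128
# Here we use an illustrative sample line instead of touching a device:
sample="Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)"

# Extract the GUID and lowercase it for comparison.
guid=$(printf '%s' "$sample" | sed -n 's/.*code: \([0-9A-Fa-f-]*\).*/\1/p' | tr 'A-F' 'a-f')

if [ "$guid" = "$CEPH_JOURNAL_GUID" ]; then
    echo "journal type GUID matches what ceph-disk expects"
else
    echo "journal type GUID mismatch: got '$guid'"
fi
```

If the GUID on the md device does not match, ceph-disk will not treat
the partition as a usable journal, which matches the behavior described
above.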

So ceph-deploy + software RAID will not work.

Maybe it would work with manual OSD creation; I did not test it.

In any case: updating the parted package so that partprobe complains
less is a very good idea if you work with any kind of RAID device.
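For the parted check in #1, here is a minimal sketch (the version_ge
helper is my own convenience function, not part of any tool) of deciding
whether an update is needed; on a real CentOS 7 node you would feed it
the actual version from "parted --version" instead of the hard-coded
example value:

```shell
# version_ge A B: true if version A >= version B in dotted-version order.
# sort -V (natural version sort) does the comparison for us.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example value; CentOS 7 ships parted 3.1. On a live node use e.g.:
#   installed=$(parted --version | head -n1 | awk '{print $NF}')
installed="3.1"

if version_ge "$installed" "3.2"; then
    echo "parted $installed is new enough"
else
    echo "parted $installed needs an update (e.g. the Fedora 3.2 package)"
fi
```

With the example value above this prints the "needs an update" branch,
mirroring the situation described in this thread.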

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG ( haftungsbeschraenkt )
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, district court (Amtsgericht) Hanau
Managing director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


On 08.06.2016 at 19:55, Oliver Dzombic wrote:
> Hi,
> 
> I read that ceph-deploy does not support software RAID devices:
> 
> http://tracker.ceph.com/issues/13084
> 
> But that's already nearly a year old, and the problem described there is different.
> 
> As it seems to me, the "only" major problem is that the newly created
> journal partition remains in the "Device or resource busy" state, so
> ceph-deploy gives up after some time.
> 
> Does anyone know a workaround?
> 
> 
> [root@cephmon1 ceph-cluster-gen2]# ceph-deploy osd prepare
> cephosd1:/dev/sdf:/dev/md128
> [ceph_deploy.conf][DEBUG ] found configuration file at:
> /root/.cephdeploy.conf
> [ceph_deploy.cli][INFO  ] Invoked (1.5.33): /usr/bin/ceph-deploy osd
> prepare cephosd1:/dev/sdf:/dev/md128
> [ceph_deploy.cli][INFO  ] ceph-deploy options:
> [ceph_deploy.cli][INFO  ]  username                      : None
> [ceph_deploy.cli][INFO  ]  disk                          : [('cephosd1',
> '/dev/sdf', '/dev/md128')]
> [ceph_deploy.cli][INFO  ]  dmcrypt                       : False
> [ceph_deploy.cli][INFO  ]  verbose                       : False
> [ceph_deploy.cli][INFO  ]  bluestore                     : None
> [ceph_deploy.cli][INFO  ]  overwrite_conf                : False
> [ceph_deploy.cli][INFO  ]  subcommand                    : prepare
> [ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               :
> /etc/ceph/dmcrypt-keys
> [ceph_deploy.cli][INFO  ]  quiet                         : False
> [ceph_deploy.cli][INFO  ]  cd_conf                       :
> <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f57ac006518>
> [ceph_deploy.cli][INFO  ]  cluster                       : ceph
> [ceph_deploy.cli][INFO  ]  fs_type                       : xfs
> [ceph_deploy.cli][INFO  ]  func                          : <function osd
> at 0x7f57abff9c08>
> [ceph_deploy.cli][INFO  ]  ceph_conf                     : None
> [ceph_deploy.cli][INFO  ]  default_release               : False
> [ceph_deploy.cli][INFO  ]  zap_disk                      : False
> [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks
> cephosd1:/dev/sdf:/dev/md128
> [cephosd1][DEBUG ] connected to host: cephosd1
> [cephosd1][DEBUG ] detect platform information from remote host
> [cephosd1][DEBUG ] detect machine type
> [cephosd1][DEBUG ] find the location of an executable
> [ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.2.1511 Core
> [ceph_deploy.osd][DEBUG ] Deploying osd to cephosd1
> [cephosd1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
> [cephosd1][WARNIN] osd keyring does not exist yet, creating one
> [cephosd1][DEBUG ] create a keyring file
> [ceph_deploy.osd][DEBUG ] Preparing host cephosd1 disk /dev/sdf journal
> /dev/md128 activate False
> [cephosd1][DEBUG ] find the location of an executable
> [cephosd1][INFO  ] Running command: /usr/sbin/ceph-disk -v prepare
> --cluster ceph --fs-type xfs -- /dev/sdf /dev/md128
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=fsid
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-allows-journal -i 0 --cluster ceph
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-wants-journal -i 0 --cluster ceph
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --check-needs-journal -i 0 --cluster ceph
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-osd
> --cluster=ceph --show-config-value=osd_journal_size
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf uuid path is
> /sys/dev/block/8:80/dm/uuid
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdf1 uuid path is
> /sys/dev/block/8:81/dm/uuid
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
> [cephosd1][WARNIN] command: Running command: /usr/bin/ceph-conf
> --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/md128 uuid path is
> /sys/dev/block/9:128/dm/uuid
> [cephosd1][WARNIN] prepare_device: OSD will not be hot-swappable if
> journal is not the same device as the osd data
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/md128 uuid path is
> /sys/dev/block/9:128/dm/uuid
> [cephosd1][WARNIN] ptype_tobe_for_name: name = journal
> [cephosd1][WARNIN] get_dm_uuid: get_dm_uuid /dev/md128 uuid path is
> /sys/dev/block/9:128/dm/uuid
> [cephosd1][WARNIN] command: Running command: /usr/sbin/parted --machine
> -- /dev/md128 print
> BYT;
> [cephosd1][WARNIN] /dev/md128:240GB:md:512:512:unknown:Linux Software
> RAID Array:;
> [cephosd1][WARNIN]
> [cephosd1][WARNIN] create_partition: Creating journal partition num 1
> size 20000 on /dev/md128
> [cephosd1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk
> --new=1:0:+20000M --change-name=1:ceph journal
> --partition-guid=1:449fc1e0-ae4b-40ea-b214-02659682d0bd
> --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 --mbrtogpt -- /dev/md128
> [cephosd1][DEBUG ] Creating new GPT entries.
> [cephosd1][DEBUG ] The operation has completed successfully.
> [cephosd1][WARNIN] update_partition: Calling partprobe on created device
> /dev/md128
> [cephosd1][WARNIN] command_check_call: Running command: /usr/bin/udevadm
> settle --timeout=600
> [cephosd1][WARNIN] calling: settle
> 
> 
> 
> [cephosd1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/md128
> [cephosd1][WARNIN] update_partition: partprobe /dev/md128 failed :
> Error: Error informing the kernel about modifications to partition
> /dev/md128p1 -- Device or resource busy.  This means Linux won't know
> about any changes you made to /dev/md128p1 until you reboot -- so you
> shouldn't mount it or use it in any way before rebooting.
> [cephosd1][WARNIN] Error: Failed to add partition 1 (Device or resource
> busy)
> [cephosd1][WARNIN]  (ignored, waiting 60s)
> [cephosd1][WARNIN] command_check_call: Running command: /usr/bin/udevadm
> settle --timeout=600
> [cephosd1][WARNIN] calling: settle
> 
> 
> 
> [cephosd1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/md128
> [cephosd1][WARNIN] update_partition: partprobe /dev/md128 failed :
> Error: Error informing the kernel about modifications to partition
> /dev/md128p1 -- Device or resource busy.  This means Linux won't know
> about any changes you made to /dev/md128p1 until you reboot -- so you
> shouldn't mount it or use it in any way before rebooting.
> [cephosd1][WARNIN] Error: Failed to add partition 1 (Device or resource
> busy)
> [cephosd1][WARNIN]  (ignored, waiting 60s)
> [cephosd1][WARNIN] command_check_call: Running command: /usr/bin/udevadm
> settle --timeout=600
> [cephosd1][WARNIN] calling: settle
> 
> 
> [cephosd1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/md128
> [cephosd1][WARNIN] update_partition: partprobe /dev/md128 failed :
> Error: Error informing the kernel about modifications to partition
> /dev/md128p1 -- Device or resource busy.  This means Linux won't know
> about any changes you made to /dev/md128p1 until you reboot -- so you
> shouldn't mount it or use it in any way before rebooting.
> [cephosd1][WARNIN] Error: Failed to add partition 1 (Device or resource
> busy)
> [cephosd1][WARNIN]  (ignored, waiting 60s)
> [cephosd1][WARNIN] command_check_call: Running command: /usr/bin/udevadm
> settle --timeout=600
> [cephosd1][WARNIN] calling: settle
> 
> 
> [cephosd1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/md128
> [cephosd1][WARNIN] update_partition: partprobe /dev/md128 failed :
> Error: Error informing the kernel about modifications to partition
> /dev/md128p1 -- Device or resource busy.  This means Linux won't know
> about any changes you made to /dev/md128p1 until you reboot -- so you
> shouldn't mount it or use it in any way before rebooting.
> [cephosd1][WARNIN] Error: Failed to add partition 1 (Device or resource
> busy)
> [cephosd1][WARNIN]  (ignored, waiting 60s)
> [cephosd1][WARNIN] command_check_call: Running command: /usr/bin/udevadm
> settle --timeout=600
> [cephosd1][WARNIN] calling: settle
> [cephosd1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/md128
> [cephosd1][WARNIN] update_partition: partprobe /dev/md128 failed :
> Error: Error informing the kernel about modifications to partition
> /dev/md128p1 -- Device or resource busy.  This means Linux won't know
> about any changes you made to /dev/md128p1 until you reboot -- so you
> shouldn't mount it or use it in any way before rebooting.
> [cephosd1][WARNIN] Error: Failed to add partition 1 (Device or resource
> busy)
> [cephosd1][WARNIN]  (ignored, waiting 60s)
> [cephosd1][WARNIN] Traceback (most recent call last):
> [cephosd1][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
> [cephosd1][WARNIN]     load_entry_point('ceph-disk==1.0.0',
> 'console_scripts', 'ceph-disk')()
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4964, in run
> [cephosd1][WARNIN]     main(sys.argv[1:])
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 4915, in main
> [cephosd1][WARNIN]     args.func(args)
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1774, in main
> [cephosd1][WARNIN]     Prepare.factory(args).prepare()
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1762, in prepare
> [cephosd1][WARNIN]     self.prepare_locked()
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1794, in
> prepare_locked
> [cephosd1][WARNIN]     self.data.prepare(self.journal)
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2446, in prepare
> [cephosd1][WARNIN]     self.prepare_device(*to_prepare_list)
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2622, in
> prepare_device
> [cephosd1][WARNIN]     to_prepare.prepare()
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1964, in prepare
> [cephosd1][WARNIN]     self.prepare_device()
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 2054, in
> prepare_device
> [cephosd1][WARNIN]     num=num)
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1525, in
> create_partition
> [cephosd1][WARNIN]     update_partition(self.path, 'created')
> [cephosd1][WARNIN]   File
> "/usr/lib/python2.7/site-packages/ceph_disk/main.py", line 1413, in
> update_partition
> [cephosd1][WARNIN]     raise Error('partprobe %s failed : %s' % (dev,
> error))
> [cephosd1][WARNIN] ceph_disk.main.Error: Error: partprobe /dev/md128
> failed : Error: Error informing the kernel about modifications to
> partition /dev/md128p1 -- Device or resource busy.  This means Linux
> won't know about any changes you made to /dev/md128p1 until you reboot
> -- so you shouldn't mount it or use it in any way before rebooting.
> [cephosd1][WARNIN] Error: Failed to add partition 1 (Device or resource
> busy)
> [cephosd1][WARNIN]
> [cephosd1][ERROR ] RuntimeError: command returned non-zero exit status: 1
> [ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-disk
> -v prepare --cluster ceph --fs-type xfs -- /dev/sdf /dev/md128
> [ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs
> 
> 
> 
> Thank you !
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



