Failing to activate new OSD with ceph-deploy

Hey,
This is the first time I have had a problem with ceph-deploy.

I have attached the log, but I can't seem to activate the OSD.

I am running
ceph version 10.2.0 (3a9fba20ec743699b69bd0181dd6c54dc01c64b9)

I did upgrade from Infernalis to Jewel.
I haven't changed the ceph ownership, but I do have the config option
setuser_match_path = /var/lib/ceph/$type/$cluster-$id
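
(A quick sanity check that could be run on tCeph203 before activating — this is only a sketch, and it assumes the usual metavariable expansion where $type/$cluster-$id becomes osd/ceph-<id> for an OSD:)

# Hedged check: do the expanded paths exist on the OSD host, and who owns them?
ls -ld /var/lib/ceph /var/lib/ceph/osd
stat -c '%U:%G %n' /var/lib/ceph/osd/ceph-* 2>/dev/null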

Any help would be appreciated,
Scott
Stat200:~/t-cluster$ ceph-deploy --overwrite-conf osd create tCeph203:/dev/sdl:/dev/sdc4
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/t/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy --overwrite-conf osd create tCeph203:/dev/sdl:/dev/sdc4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('tCeph203', '/dev/sdl', '/dev/sdc4')]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f61acc094d0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f61acbd8848>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks tCeph203:/dev/sdl:/dev/sdc4
t@tceph203's password: 
[tCeph203][DEBUG ] connection detected need for sudo
t@tceph203's password: 
[tCeph203][DEBUG ] connected to host: tCeph203 
[tCeph203][DEBUG ] detect platform information from remote host
[tCeph203][DEBUG ] detect machine type
[tCeph203][DEBUG ] find the location of an executable
[tCeph203][INFO  ] Running command: sudo /sbin/initctl version
[tCeph203][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to tCeph203
[tCeph203][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host tCeph203 disk /dev/sdl journal /dev/sdc4 activate True
[tCeph203][DEBUG ] find the location of an executable
[tCeph203][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/sdl /dev/sdc4
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc4 uuid path is /sys/dev/block/8:36/dm/uuid
[tCeph203][WARNIN] prepare_device: Journal /dev/sdc4 is a partition
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdc4 uuid path is /sys/dev/block/8:36/dm/uuid
[tCeph203][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[tCeph203][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdc4
[tCeph203][WARNIN] prepare_device: Journal /dev/sdc4 was not prepared with ceph-disk. Symlinking directly.
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] set_data_partition: Creating osd partition on /dev/sdl
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] ptype_tobe_for_name: name = data
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/sdl
[tCeph203][WARNIN] command_check_call: Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:4bb7cf62-35a1-43af-806e-6bfdff0eea40 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/sdl
[tCeph203][DEBUG ] The operation has completed successfully.
[tCeph203][WARNIN] update_partition: Calling partprobe on created device /dev/sdl
[tCeph203][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[tCeph203][WARNIN] command: Running command: /sbin/partprobe /dev/sdl
[tCeph203][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl1 uuid path is /sys/dev/block/8:177/dm/uuid
[tCeph203][WARNIN] populate_data_path_device: Creating xfs fs on /dev/sdl1
[tCeph203][WARNIN] command_check_call: Running command: /sbin/mkfs -t xfs -f -i size=2048 -- /dev/sdl1
[tCeph203][DEBUG ] meta-data=/dev/sdl1              isize=2048   agcount=32, agsize=61047075 blks
[tCeph203][DEBUG ]          =                       sectsz=4096  attr=2, projid32bit=0
[tCeph203][DEBUG ] data     =                       bsize=4096   blocks=1953506385, imaxpct=5
[tCeph203][DEBUG ]          =                       sunit=0      swidth=0 blks
[tCeph203][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[tCeph203][DEBUG ] log      =internal log           bsize=4096   blocks=521728, version=2
[tCeph203][DEBUG ]          =                       sectsz=4096  sunit=1 blks, lazy-count=1
[tCeph203][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[tCeph203][WARNIN] mount: Mounting /dev/sdl1 on /var/lib/ceph/tmp/mnt.5ajmfk with options noatime,inode64
[tCeph203][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdl1 /var/lib/ceph/tmp/mnt.5ajmfk
[tCeph203][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.5ajmfk
[tCeph203][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5ajmfk/ceph_fsid.7489.tmp
[tCeph203][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5ajmfk/fsid.7489.tmp
[tCeph203][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5ajmfk/magic.7489.tmp
[tCeph203][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5ajmfk/journal_uuid.7489.tmp
[tCeph203][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.5ajmfk/journal -> /dev/sdc4
[tCeph203][WARNIN] command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.5ajmfk
[tCeph203][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.5ajmfk
[tCeph203][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.5ajmfk
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl uuid path is /sys/dev/block/8:176/dm/uuid
[tCeph203][WARNIN] command_check_call: Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdl
[tCeph203][DEBUG ] The operation has completed successfully.
[tCeph203][WARNIN] update_partition: Calling partprobe on prepared device /dev/sdl
[tCeph203][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[tCeph203][WARNIN] command: Running command: /sbin/partprobe /dev/sdl
[tCeph203][WARNIN] command_check_call: Running command: /sbin/udevadm settle --timeout=600
[tCeph203][WARNIN] command_check_call: Running command: /sbin/udevadm trigger --action=add --sysname-match sdl1
[tCeph203][INFO  ] checking OSD status...
[tCeph203][DEBUG ] find the location of an executable
[tCeph203][INFO  ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json
[tCeph203][WARNIN] there is 1 OSD down
[ceph_deploy.osd][DEBUG ] Host tCeph203 is now ready for osd use.
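
(Before the activate attempt below, a couple of hedged sanity checks that could be run on tCeph203; they assume sgdisk and blkid are available, which the prepare log above already shows.)

# Confirm what ceph-disk prepare wrote to the data disk and journal partition
sudo /sbin/sgdisk --info=1 /dev/sdl       # partition 1 name/GUID/typecode set during prepare
sudo /sbin/blkid -p /dev/sdl1             # XFS filesystem created on the data partition
sudo /sbin/blkid -o udev -p /dev/sdc4     # journal partition that was symlinked directly
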
t@tStat200:~/t-cluster$ ceph-deploy osd activate tCeph203:/dev/sdl1:/dev/sdc4
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/t/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.34): /usr/bin/ceph-deploy osd activate tCeph203:/dev/sdl1:/dev/sdc4
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : activate
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f6af8c2f4d0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f6af8bfe848>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  disk                          : [('tCeph203', '/dev/sdl1', '/dev/sdc4')]
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks tCeph203:/dev/sdl1:/dev/sdc4
t@tceph203's password: 
[tCeph203][DEBUG ] connection detected need for sudo
t@tceph203's password: 
[tCeph203][DEBUG ] connected to host: tCeph203 
[tCeph203][DEBUG ] detect platform information from remote host
[tCeph203][DEBUG ] detect machine type
[tCeph203][DEBUG ] find the location of an executable
[tCeph203][INFO  ] Running command: sudo /sbin/initctl version
[tCeph203][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host tCeph203 disk /dev/sdl1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[tCeph203][DEBUG ] find the location of an executable
[tCeph203][INFO  ] Running command: sudo /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /dev/sdl1
[tCeph203][WARNIN] main_activate: path = /dev/sdl1
[tCeph203][WARNIN] get_dm_uuid: get_dm_uuid /dev/sdl1 uuid path is /sys/dev/block/8:177/dm/uuid
[tCeph203][WARNIN] command: Running command: /sbin/blkid -o udev -p /dev/sdl1
[tCeph203][WARNIN] command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdl1
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[tCeph203][WARNIN] mount: Mounting /dev/sdl1 on /var/lib/ceph/tmp/mnt.m4bkWP with options noatime,inode64
[tCeph203][WARNIN] command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdl1 /var/lib/ceph/tmp/mnt.m4bkWP
[tCeph203][WARNIN] activate: Cluster uuid is b6a0b5b6-ac22-4669-9d59-85c673640c18
[tCeph203][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[tCeph203][WARNIN] activate: Cluster name is ceph
[tCeph203][WARNIN] activate: OSD uuid is 4bb7cf62-35a1-43af-806e-6bfdff0eea40
[tCeph203][WARNIN] activate: OSD id is 22
[tCeph203][WARNIN] activate: Initializing OSD...
[tCeph203][WARNIN] command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.m4bkWP/activate.monmap
[tCeph203][WARNIN] got monmap epoch 7
[tCeph203][WARNIN] command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph --mkfs --mkkey -i 22 --monmap /var/lib/ceph/tmp/mnt.m4bkWP/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.m4bkWP --osd-journal /var/lib/ceph/tmp/mnt.m4bkWP/journal --osd-uuid 4bb7cf62-35a1-43af-806e-6bfdff0eea40 --keyring /var/lib/ceph/tmp/mnt.m4bkWP/keyring --setuser ceph --setgroup ceph
[tCeph203][WARNIN] unable to stat setuser_match_path /var/lib/ceph/$type/$cluster-$id: (2) No such file or directory
[tCeph203][WARNIN] mount_activate: Failed to activate
[tCeph203][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.m4bkWP
[tCeph203][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.m4bkWP
[tCeph203][WARNIN] Traceback (most recent call last):
[tCeph203][WARNIN]   File "/usr/sbin/ceph-disk", line 9, in <module>
[tCeph203][WARNIN]     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
[tCeph203][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4964, in run
[tCeph203][WARNIN]     main(sys.argv[1:])
[tCeph203][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4915, in main
[tCeph203][WARNIN]     args.func(args)
[tCeph203][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3269, in main_activate
[tCeph203][WARNIN]     reactivate=args.reactivate,
[tCeph203][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3026, in mount_activate
[tCeph203][WARNIN]     (osd_id, cluster) = activate(path, activate_key_template, init)
[tCeph203][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3202, in activate
[tCeph203][WARNIN]     keyring=keyring,
[tCeph203][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 2695, in mkfs
[tCeph203][WARNIN]     '--setgroup', get_ceph_group(),
[tCeph203][WARNIN]   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 439, in command_check_call
[tCeph203][WARNIN]     return subprocess.check_call(arguments)
[tCeph203][WARNIN]   File "/usr/lib/python2.7/subprocess.py", line 540, in check_call
[tCeph203][WARNIN]     raise CalledProcessError(retcode, cmd)
[tCeph203][WARNIN] subprocess.CalledProcessError: Command '['/usr/bin/ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', '22', '--monmap', '/var/lib/ceph/tmp/mnt.m4bkWP/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.m4bkWP', '--osd-journal', '/var/lib/ceph/tmp/mnt.m4bkWP/journal', '--osd-uuid', '4bb7cf62-35a1-43af-806e-6bfdff0eea40', '--keyring', '/var/lib/ceph/tmp/mnt.m4bkWP/keyring', '--setuser', 'ceph', '--setgroup', 'ceph']' returned non-zero exit status 1
[tCeph203][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init upstart --mount /dev/sdl1
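
(Reading the traceback: the ceph-osd --mkfs run with --setuser ceph fails because it cannot stat the configured setuser_match_path, so ceph-disk unmounts the temporary mount and aborts. Below is a hedged sketch of two workarounds that could be tried next; both are assumptions on my part, not something the log confirms.)

# Option A (assumption): hand the OSD tree on tCeph203 to the ceph user, so the
# --setuser ceph mkfs no longer depends on setuser_match_path matching
sudo chown -R ceph:ceph /var/lib/ceph

# Option B (assumption): comment out setuser_match_path in ceph.conf on the admin node,
# push the config to the OSD host, and retry the activation
ceph-deploy --overwrite-conf config push tCeph203
ceph-deploy osd activate tCeph203:/dev/sdl1:/dev/sdc4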

