Re: Broken Ceph Cluster when adding new one - Proxmox 5.0 & Ceph Luminous

On 15/07/2017 at 23:09, Udo Lembke wrote:
Hi,

On 15.07.2017 16:01, Phil Schwarz wrote:
Hi,
...

While investigating, I wondered about my config.
A question about the /etc/hosts file:
should I use the private replication LAN IPs or the public ones?
The private replication LAN! And the pve-cluster should use another
network (separate NICs) if possible.

Udo

OK, thanks Udo.
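
For reference, my understanding of that split, written as a ceph.conf fragment (the subnets and the hostname below are only examples from my setup, not authoritative values):

  # /etc/pve/ceph.conf (example only)
  [global]
      public network  = 192.168.1.0/24   # monitor/client traffic
      cluster network = 10.10.10.0/24    # OSD replication traffic

  # /etc/hosts then resolves the node names to the replication-LAN
  # addresses, e.g. "10.10.10.1  varys", while corosync/pve-cluster
  # gets its own dedicated network and NICs.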

After investigating, I did the following:
- set the noout flag on the OSDs (commands below)
- stopped the CPU-pegging LXC containers
- checked the cabling
- restarted the whole cluster
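
For the record, "set noout" is just the standard flag pair:

  ceph osd set noout     # before the maintenance
  ceph osd unset noout   # once everything is back up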

Everything went fine!

But then, when I tried to add a new OSD:

fdisk /dev/sdc --> Deleted the partition table
parted /dev/sdc --> mklabel msdos (the disk came from a FreeBSD ZFS system)
dd if=/dev/null of=/dev/sdc
ceph-disk zap /dev/sdc
dd if=/dev/zero  of=/dev/sdc bs=10M count=1000

Then I recreated the OSD via the web GUI.
Same result: the OSD is known by the node, but not by the cluster.
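
As an aside, the "dd if=/dev/null of=/dev/sdc" step writes nothing at all (/dev/null returns EOF immediately); only the later dd from /dev/zero actually overwrote the first ~10 GB. If I have to wipe the disk again, a more thorough sequence would be something like this (assuming /dev/sdc really is the disk to destroy; double-check before running):

  ceph-disk zap /dev/sdc                                    # wipe the partition table
  wipefs --all /dev/sdc                                     # remove leftover FS/ZFS signatures
  dd if=/dev/zero of=/dev/sdc bs=1M count=200 oflag=direct  # zero the start of the disk
  partprobe /dev/sdc                                        # force a partition-table re-read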

The logs seem to show an issue with this BlueStore OSD (ceph-osd crashes with an 'Illegal instruction' inside RocksDB during mkfs); have a look at the excerpt below.

I'm going to try recreating the OSD with Filestore.
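
If the GUI keeps defaulting to BlueStore, my understanding is that ceph-disk can be told explicitly to build a Filestore OSD, roughly like this (Luminous ceph-disk syntax, to be double-checked):

  ceph-disk prepare --filestore --fs-type xfs /dev/sdc
  ceph-disk activate /dev/sdc1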

Thanks

pvedaemon[3077]: <root@pam> starting task UPID:varys:00007E7D:0004F489:596B5FCE:cephcreateosd:sdc:root@pam:
kernel: [ 3267.263313]  sdc:
systemd[1]: Created slice system-ceph\x2ddisk.slice.
systemd[1]: Starting Ceph disk activation: /dev/sdc2...
sh[1074]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fa15c8dd938>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True,
sh[1074]: command: Running command: /sbin/init --version
sh[1074]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdc2
sh[1074]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1074]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1074]: main_trigger: trigger /dev/sdc2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff106 uuid 7a6d7546-b93a-452b-9bbc-f660f9a8416c
sh[1074]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdc2
systemd[1]: Stopped Ceph disk activation: /dev/sdc2.
systemd[1]: Starting Ceph disk activation: /dev/sdc2...
sh[1074]: main_trigger:
sh[1074]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdc2 uuid path is /sys/dev/block/8:34/dm/uuid
sh[1074]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1074]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdc2
sh[1074]: get_space_osd_uuid: Block /dev/sdc2 has OSD UUID 00000000-0000-0000-0000-000000000000
sh[1074]: main_activate_space: activate: OSD device not present, not starting, yet
systemd[1]: Stopped Ceph disk activation: /dev/sdc2.
systemd[1]: Starting Ceph disk activation: /dev/sdc2...
sh[1475]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f56c3236938>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True,
sh[1475]: command: Running command: /sbin/init --version
sh[1475]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdc2
sh[1475]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1475]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1475]: main_trigger: trigger /dev/sdc2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff664 uuid 7a6d7546-b93a-452b-9bbc-f660f9a84664
sh[1475]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdc2
kernel: [ 3291.171474]  sdc: sdc1 sdc2
sh[1475]: main_trigger:
sh[1475]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdc2 uuid path is /sys/dev/block/8:34/dm/uuid
sh[1475]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1475]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdc2
sh[1475]: get_space_osd_uuid: Block /dev/sdc2 has OSD UUID 00000000-0000-0000-0000-000000000000
sh[1475]: main_activate_space: activate: OSD device not present, not starting, yet
systemd[1]: Stopped Ceph disk activation: /dev/sdc2.
systemd[1]: Starting Ceph disk activation: /dev/sdc2...
sh[1492]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7faf79c66938>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True,
sh[1492]: command: Running command: /sbin/init --version
sh[1492]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdc2
sh[1492]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1492]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1492]: main_trigger: trigger /dev/sdc2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff664 uuid 7a6d7546-b93a-452b-9bbc-f660f9a84664
sh[1492]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdc2
sh[1492]: main_trigger:
sh[1492]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdc2 uuid path is /sys/dev/block/8:34/dm/uuid
sh[1492]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1492]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdc2
sh[1492]: get_space_osd_uuid: Block /dev/sdc2 has OSD UUID 00000000-0000-0000-0000-000000000000
sh[1492]: main_activate_space: activate: OSD device not present, not starting, yet
kernel: [ 3294.468494] SGI XFS with ACLs, security attributes, realtime, no debug enabled
kernel: [ 3294.517598] XFS (sdc1): Mounting V5 Filesystem
kernel: [ 3294.724195] XFS (sdc1): Unmounting Filesystem
sh[1539]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fa691766938>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True,
sh[1539]: command: Running command: /sbin/init --version
kernel: [ 3294.773692]  sdc: sdc1 sdc2
sh[1539]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdc2
sh[1539]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1539]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1539]: main_trigger: trigger /dev/sdc2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff664 uuid 7a6d7546-b93a-452b-9bbc-f660f9a84664
sh[1539]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdc2
sh[1539]: main_trigger:
sh[1539]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdc2 uuid path is /sys/dev/block/8:34/dm/uuid
sh[1539]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1539]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdc2
sh[1539]: get_space_osd_uuid: Block /dev/sdc2 has OSD UUID 00000000-0000-0000-0000-000000000000
sh[1539]: main_activate_space: activate: OSD device not present, not starting, yet
systemd[1]: Started Ceph disk activation: /dev/sdc2.
systemd[1]: Starting Ceph disk activation: /dev/sdc1...
systemd[1]: Starting Ceph disk activation: /dev/sdc2...
sh[1694]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc2', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fece4d05938>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True,
sh[1694]: command: Running command: /sbin/init --version
sh[1694]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdc2
sh[1694]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1694]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1694]: main_trigger: trigger /dev/sdc2 parttype cafecafe-9b03-4f30-b4c6-b4b80ceff664 uuid 7a6d7546-b93a-452b-9bbc-f660f9a84664
sh[1694]: command: Running command: /usr/sbin/ceph-disk --verbose activate-block /dev/sdc2
sh[1689]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc1', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7f3c2c548938>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True,
sh[1689]: command: Running command: /sbin/init --version
sh[1689]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdc1
sh[1689]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
sh[1689]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
sh[1689]: main_trigger: trigger /dev/sdc1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff664 uuid e661b3e9-eb69-4bff-9db2-7c9d802c9664
sh[1689]: command: Running command: /usr/sbin/ceph-disk --verbose activate /dev/sdc1
sh[1694]: main_trigger:
sh[1694]: main_trigger: get_dm_uuid: get_dm_uuid /dev/sdc2 uuid path is /sys/dev/block/8:34/dm/uuid
sh[1694]: command: Running command: /sbin/blkid -o udev -p /dev/sdc2
sh[1694]: command: Running command: /usr/bin/ceph-osd --get-device-fsid /dev/sdc2
sh[1694]: get_space_osd_uuid: Block /dev/sdc2 has OSD UUID 00000000-0000-0000-0000-000000000000
sh[1694]: main_activate_space: activate: OSD device not present, not starting, yet
systemd[1]: Started Ceph disk activation: /dev/sdc2.
kernel: [ 3299.147770] XFS (sdc1): Mounting V5 Filesystem
kernel: [ 3301.974401] XFS (sdc1): Unmounting Filesystem
sh[1689]: main_trigger:
sh[1689]: main_trigger: main_activate: path = /dev/sdc1
sh[1689]: get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
sh[1689]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
sh[1689]: command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdc1
sh[1689]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
sh[1689]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
sh[1689]: mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.14kiXz with options noatime,inode64
sh[1689]: command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.14kiXz
sh[1689]: activate: Cluster uuid is a5c0cfed-31c1-41db-b567-184bf966664
sh[1689]: command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
sh[1689]: activate: Cluster name is ceph
sh[1689]: activate: OSD uuid is e661b3e9-eb69-4bff-9db2-7c9d802c9664
sh[1689]: allocate_osd_id: Allocating OSD id...
sh[1689]: command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise e661b3e9-eb69-4bff-9db2-7c9d802c9664
sh[1689]: command: Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.14kiXz/whoami.1718.tmp
sh[1689]: activate: OSD id is 6
sh[1689]: activate: Initializing OSD...
sh[1689]: command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.14kiXz/activate.monmap
sh[1689]: got monmap epoch 4
sh[1689]: command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 6 --monmap /var/lib/ceph/tmp/mnt.14kiXz/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.14kiXz --osd-uuid
sh[1689]: mount_activate: Failed to activate
sh[1689]: unmount: Unmounting /var/lib/ceph/tmp/mnt.14kiXz
sh[1689]: command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.14kiXz
sh[1689]: Traceback (most recent call last):
sh[1689]:   File "/usr/sbin/ceph-disk", line 11, in <module>
sh[1689]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5687, in run
sh[1689]:     main(sys.argv[1:])
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5638, in main
sh[1689]:     args.func(args)
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3710, in main_activate
sh[1689]:     reactivate=args.reactivate,
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3467, in mount_activate
sh[1689]:     (osd_id, cluster) = activate(path, activate_key_template, init)
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3643, in activate
sh[1689]:     keyring=keyring,
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3055, in mkfs
sh[1689]:     '--setgroup', get_ceph_group(),
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3019, in ceph_osd_mkfs
sh[1689]:     raise Error('%s failed : %s' % (str(arguments), error))
sh[1689]: ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'6', '--monmap', '/var/lib/ceph/tmp/mnt.14kiXz/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.14kiXz', '--osd-uuid', u'e661b3e9-eb69-4bff-9db2-7c9d802c9664', '--keyring', '/var/lib/ceph/tmp/mnt.14kiXz/keyring',
sh[1689]:  in thread 7fb59b0e9c80 thread_name:ceph-osd
sh[1689]:  ceph version 12.1.0 (330b5d17d66c6c05b08ebc129d3e6e8f92f73c60) luminous (dev)
sh[1689]:  1: (()+0x9bc562) [0x55d59446d562]
sh[1689]:  2: (()+0x110c0) [0x7fb5989010c0]
sh[1689]:  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871) [0x55d59487c8b1]
sh[1689]:  4: (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc) [0x55d594760a4c]
sh[1689]:  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x11f) [0x55d594727e6f]
sh[1689]:  6: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std:
sh[1689]:  7: (rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb:
sh[1689]:  8: (RocksDBStore::do_open(std::ostream&, bool)+0x68e) [0x55d5943b376e]
sh[1689]:  9: (RocksDBStore::create_and_open(std::ostream&)+0xd7) [0x55d5943b4d27]
sh[1689]:  10: (BlueStore::_open_db(bool)+0x326) [0x55d5943406d6]
sh[1689]:  11: (BlueStore::mkfs()+0x856) [0x55d594371406]
sh[1689]:  12: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x348) [0x55d593ecd8f8]
sh[1689]:  13: (main()+0xe58) [0x55d593e21a78]
sh[1689]:  14: (__libc_start_main()+0xf1) [0x7fb5978b62b1]
sh[1689]:  15: (_start()+0x2a) [0x55d593ea8dfa]
sh[1689]: 2017-07-16 14:45:57.082999 7fb59b0e9c80 -1 *** Caught signal (Illegal instruction) **
sh[1689]:  in thread 7fb59b0e9c80 thread_name:ceph-osd
sh[1689]:  ceph version 12.1.0 (330b5d17d66c6c05b08ebc129d3e6e8f92f73c60) luminous (dev)
sh[1689]:  1: (()+0x9bc562) [0x55d59446d562]
sh[1689]:  2: (()+0x110c0) [0x7fb5989010c0]
sh[1689]:  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871) [0x55d59487c8b1]
sh[1689]:  4: (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc) [0x55d594760a4c]
sh[1689]:  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x11f) [0x55d594727e6f]
sh[1689]:  6: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std:
sh[1689]:  7: (rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb:
sh[1689]:  8: (RocksDBStore::do_open(std::ostream&, bool)+0x68e) [0x55d5943b376e]
sh[1689]:  9: (RocksDBStore::create_and_open(std::ostream&)+0xd7) [0x55d5943b4d27]
sh[1689]:  10: (BlueStore::_open_db(bool)+0x326) [0x55d5943406d6]
sh[1689]:  11: (BlueStore::mkfs()+0x856) [0x55d594371406]
sh[1689]:  12: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x348) [0x55d593ecd8f8]
sh[1689]:  13: (main()+0xe58) [0x55d593e21a78]
sh[1689]:  14: (__libc_start_main()+0xf1) [0x7fb5978b62b1]
sh[1689]:  15: (_start()+0x2a) [0x55d593ea8dfa]
sh[1689]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
sh[1689]:      0> 2017-07-16 14:45:57.082999 7fb59b0e9c80 -1 *** Caught signal (Illegal instruction) **
sh[1689]:  in thread 7fb59b0e9c80 thread_name:ceph-osd
sh[1689]:  ceph version 12.1.0 (330b5d17d66c6c05b08ebc129d3e6e8f92f73c60) luminous (dev)
sh[1689]:  1: (()+0x9bc562) [0x55d59446d562]
sh[1689]:  2: (()+0x110c0) [0x7fb5989010c0]
sh[1689]:  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871) [0x55d59487c8b1]
sh[1689]:  4: (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc) [0x55d594760a4c]
sh[1689]:  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x11f) [0x55d594727e6f]
sh[1689]:  6: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std:
sh[1689]:  7: (rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb:
sh[1689]:  8: (RocksDBStore::do_open(std::ostream&, bool)+0x68e) [0x55d5943b376e]
sh[1689]:  9: (RocksDBStore::create_and_open(std::ostream&)+0xd7) [0x55d5943b4d27]
sh[1689]:  10: (BlueStore::_open_db(bool)+0x326) [0x55d5943406d6]
sh[1689]:  11: (BlueStore::mkfs()+0x856) [0x55d594371406]
sh[1689]:  12: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x348) [0x55d593ecd8f8]
sh[1689]:  13: (main()+0xe58) [0x55d593e21a78]
sh[1689]:  14: (__libc_start_main()+0xf1) [0x7fb5978b62b1]
sh[1689]:  15: (_start()+0x2a) [0x55d593ea8dfa]
sh[1689]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
sh[1689]: Traceback (most recent call last):
sh[1689]:   File "/usr/sbin/ceph-disk", line 11, in <module>
sh[1689]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5687, in run
sh[1689]:     main(sys.argv[1:])
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5638, in main
sh[1689]:     args.func(args)
sh[1689]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4847, in main_trigger
sh[1689]:     raise Error('return code ' + str(ret))
sh[1689]: ceph_disk.main.Error: Error: return code 1
systemd[1]: ceph-disk@dev-sdc1.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Ceph disk activation: /dev/sdc1.
systemd[1]: ceph-disk@dev-sdc1.service: Unit entered failed state.
systemd[1]: ceph-disk@dev-sdc1.service: Failed with result 'exit-code'.
pvedaemon[3077]: <root@pam> end task UPID:varys:00007E7D:0004F489:596B5FCE:cephcreateosd:sdc:root@pam: OK
systemd[1]: Starting Ceph disk activation: /dev/sdc1...
sh[1869]: main_trigger: main_trigger: Namespace(cluster='ceph', dev='/dev/sdc1', dmcrypt=None, dmcrypt_key_dir='/etc/ceph/dmcrypt-keys', func=<function main_trigger at 0x7fefa4b11938>, log_stdout=True, prepend_to_path='/usr/bin', prog='ceph-disk', setgroup=None, setuser=None, statedir='/var/lib/ceph', sync=True,
sh[1869]: command: Running command: /sbin/init --version
sh[1869]: command_check_call: Running command: /bin/chown ceph:ceph /dev/sdc1
sh[1869]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
sh[1869]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
sh[1869]: main_trigger: trigger /dev/sdc1 parttype 4fbd7e29-9d25-41b8-afd0-062c0ceff664 uuid e661b3e9-eb69-4bff-9db2-7c9d802c9664
sh[1869]: command: Running command: /usr/sbin/ceph-disk --verbose activate /dev/sdc1
sh[1869]: main_trigger:
sh[1869]: main_trigger: main_activate: path = /dev/sdc1
sh[1869]: get_dm_uuid: get_dm_uuid /dev/sdc1 uuid path is /sys/dev/block/8:33/dm/uuid
sh[1869]: command: Running command: /sbin/blkid -o udev -p /dev/sdc1
sh[1869]: command: Running command: /sbin/blkid -p -s TYPE -o value -- /dev/sdc1
sh[1869]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
sh[1869]: command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
sh[1869]: mount: Mounting /dev/sdc1 on /var/lib/ceph/tmp/mnt.AfKpC5 with options noatime,inode64
sh[1869]: command_check_call: Running command: /bin/mount -t xfs -o noatime,inode64 -- /dev/sdc1 /var/lib/ceph/tmp/mnt.AfKpC5
sh[1869]: activate: Cluster uuid is a5c0cfed-31c1-41db-b567-184bf936664
sh[1869]: command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
sh[1869]: activate: Cluster name is ceph
sh[1869]: activate: OSD uuid is e661b3e9-eb69-4bff-9db2-7c9d802c9664
sh[1869]: activate: OSD id is 6
sh[1869]: activate: Initializing OSD...
sh[1869]: command_check_call: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/tmp/mnt.AfKpC5/activate.monmap
sh[1869]: got monmap epoch 4
sh[1869]: command: Running command: /usr/bin/timeout 300 ceph-osd --cluster ceph --mkfs --mkkey -i 6 --monmap /var/lib/ceph/tmp/mnt.AfKpC5/activate.monmap --osd-data /var/lib/ceph/tmp/mnt.AfKpC5 --osd-uuid
sh[1869]: mount_activate: Failed to activate
sh[1869]: unmount: Unmounting /var/lib/ceph/tmp/mnt.AfKpC5
sh[1869]: command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.AfKpC5
sh[1869]: Traceback (most recent call last):
sh[1869]:   File "/usr/sbin/ceph-disk", line 11, in <module>
sh[1869]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5687, in run
sh[1869]:     main(sys.argv[1:])
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5638, in main
sh[1869]:     args.func(args)
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3710, in main_activate
sh[1869]:     reactivate=args.reactivate,
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3467, in mount_activate
sh[1869]:     (osd_id, cluster) = activate(path, activate_key_template, init)
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3643, in activate
sh[1869]:     keyring=keyring,
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3055, in mkfs
sh[1869]:     '--setgroup', get_ceph_group(),
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 3019, in ceph_osd_mkfs
sh[1869]:     raise Error('%s failed : %s' % (str(arguments), error))
sh[1869]: ceph_disk.main.Error: Error: ['ceph-osd', '--cluster', 'ceph', '--mkfs', '--mkkey', '-i', u'6', '--monmap', '/var/lib/ceph/tmp/mnt.AfKpC5/activate.monmap', '--osd-data', '/var/lib/ceph/tmp/mnt.AfKpC5', '--osd-uuid', u'e661b3e9-eb69-4bff-9db2-7c9d802c9664', '--keyring', '/var/lib/ceph/tmp/mnt.AfKpC5/keyring',
sh[1869]:  in thread 7f6d85db3c80 thread_name:ceph-osd
sh[1869]:  ceph version 12.1.0 (330b5d17d66c6c05b08ebc129d3e6e8f92f73c60) luminous (dev)
sh[1869]:  1: (()+0x9bc562) [0x558561169562]
sh[1869]:  2: (()+0x110c0) [0x7f6d835cb0c0]
sh[1869]:  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871) [0x5585615788b1]
sh[1869]:  4: (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc) [0x55856145ca4c]
sh[1869]:  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x11f) [0x558561423e6f]
sh[1869]:  6: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std:
sh[1869]:  7: (rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb:
sh[1869]:  8: (RocksDBStore::do_open(std::ostream&, bool)+0x68e) [0x5585610af76e]
sh[1869]:  9: (RocksDBStore::create_and_open(std::ostream&)+0xd7) [0x5585610b0d27]
sh[1869]:  10: (BlueStore::_open_db(bool)+0x326) [0x55856103c6d6]
sh[1869]:  11: (BlueStore::mkfs()+0x856) [0x55856106d406]
sh[1869]:  12: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x348) [0x558560bc98f8]
sh[1869]:  13: (main()+0xe58) [0x558560b1da78]
sh[1869]:  14: (__libc_start_main()+0xf1) [0x7f6d825802b1]
sh[1869]:  15: (_start()+0x2a) [0x558560ba4dfa]
sh[1869]: 2017-07-16 14:46:00.763521 7f6d85db3c80 -1 *** Caught signal (Illegal instruction) **
sh[1869]:  in thread 7f6d85db3c80 thread_name:ceph-osd
sh[1869]:  ceph version 12.1.0 (330b5d17d66c6c05b08ebc129d3e6e8f92f73c60) luminous (dev)
sh[1869]:  1: (()+0x9bc562) [0x558561169562]
sh[1869]:  2: (()+0x110c0) [0x7f6d835cb0c0]
sh[1869]:  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871) [0x5585615788b1]
sh[1869]:  4: (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc) [0x55856145ca4c]
sh[1869]:  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x11f) [0x558561423e6f]
sh[1869]:  6: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std:
sh[1869]:  7: (rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb:
sh[1869]:  8: (RocksDBStore::do_open(std::ostream&, bool)+0x68e) [0x5585610af76e]
sh[1869]:  9: (RocksDBStore::create_and_open(std::ostream&)+0xd7) [0x5585610b0d27]
sh[1869]:  10: (BlueStore::_open_db(bool)+0x326) [0x55856103c6d6]
sh[1869]:  11: (BlueStore::mkfs()+0x856) [0x55856106d406]
sh[1869]:  12: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x348) [0x558560bc98f8]
sh[1869]:  13: (main()+0xe58) [0x558560b1da78]
sh[1869]:  14: (__libc_start_main()+0xf1) [0x7f6d825802b1]
sh[1869]:  15: (_start()+0x2a) [0x558560ba4dfa]
sh[1869]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
sh[1869]:      0> 2017-07-16 14:46:00.763521 7f6d85db3c80 -1 *** Caught signal (Illegal instruction) **
sh[1869]:  in thread 7f6d85db3c80 thread_name:ceph-osd
sh[1869]:  ceph version 12.1.0 (330b5d17d66c6c05b08ebc129d3e6e8f92f73c60) luminous (dev)
sh[1869]:  1: (()+0x9bc562) [0x558561169562]
sh[1869]:  2: (()+0x110c0) [0x7f6d835cb0c0]
sh[1869]:  3: (rocksdb::VersionBuilder::SaveTo(rocksdb::VersionStorageInfo*)+0x871) [0x5585615788b1]
sh[1869]:  4: (rocksdb::VersionSet::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool)+0x26bc) [0x55856145ca4c]
sh[1869]:  5: (rocksdb::DBImpl::Recover(std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, bool, bool, bool)+0x11f) [0x558561423e6f]
sh[1869]:  6: (rocksdb::DB::Open(rocksdb::DBOptions const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std:
sh[1869]:  7: (rocksdb::DB::Open(rocksdb::Options const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, rocksdb:
sh[1869]:  8: (RocksDBStore::do_open(std::ostream&, bool)+0x68e) [0x5585610af76e]
sh[1869]:  9: (RocksDBStore::create_and_open(std::ostream&)+0xd7) [0x5585610b0d27]
sh[1869]:  10: (BlueStore::_open_db(bool)+0x326) [0x55856103c6d6]
sh[1869]:  11: (BlueStore::mkfs()+0x856) [0x55856106d406]
sh[1869]:  12: (OSD::mkfs(CephContext*, ObjectStore*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, uuid_d, int)+0x348) [0x558560bc98f8]
sh[1869]:  13: (main()+0xe58) [0x558560b1da78]
sh[1869]:  14: (__libc_start_main()+0xf1) [0x7f6d825802b1]
sh[1869]:  15: (_start()+0x2a) [0x558560ba4dfa]
sh[1869]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
sh[1869]: Traceback (most recent call last):
sh[1869]:   File "/usr/sbin/ceph-disk", line 11, in <module>
sh[1869]:     load_entry_point('ceph-disk==1.0.0', 'console_scripts', 'ceph-disk')()
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5687, in run
sh[1869]:     main(sys.argv[1:])
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 5638, in main
sh[1869]:     args.func(args)
sh[1869]:   File "/usr/lib/python2.7/dist-packages/ceph_disk/main.py", line 4847, in main_trigger
sh[1869]:     raise Error('return code ' + str(ret))
sh[1869]: ceph_disk.main.Error: Error: return code 1
systemd[1]: ceph-disk@dev-sdc1.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Ceph disk activation: /dev/sdc1.
systemd[1]: ceph-disk@dev-sdc1.service: Unit entered failed state.
systemd[1]: ceph-disk@dev-sdc1.service: Failed with result 'exit-code'.


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
