Re: One OSD flapping

SOLVED

Short: I followed the procedure to replace an OSD.

Long: I reweighted the flapping OSD to 0 and waited for the data migration to finish, then marked it out, stopped and unmounted it, followed the replacement procedure[1], and finally restored the weight. I have tried to reconstruct my commands from the bash history of several terminals, so please excuse any mistakes:
admin:$ X=0  # ID of OSD
admin:$ ceph osd crush reweight osd.$X 0.0
admin:$ ceph -w  # observe data migration
admin:$ ceph osd out $X
osd:$ systemctl stop ceph.target
admin:$ ceph osd destroy $X --yes-i-really-mean-it
osd:$ umount /dev/sdb1
osd:$ ceph-disk zap /dev/sdb
osd:$ ceph-disk prepare --bluestore /dev/sdb  --osd-id 0 --osd-uuid `uuidgen`
osd:$ ceph-disk activate /dev/sdb1
admin:$ ceph osd crush reweight osd.0 9.09560
admin:$ ceph -w    # observe data migration
# I also found it helpful to set 'nodown' for several minutes because two of the OSDs kept marking each other down.
admin:$ ceph osd set nodown
# Once everything was 'active+clean', I removed the 'nodown' flag (a polling sketch follows after this listing).
admin:$ ceph osd unset nodown
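
For what it's worth, the "wait until everything is 'active+clean'" step could be scripted rather than watched by hand in ceph -w. This is only an untested sketch that polls 'ceph status' for unhealthy PG states (adjust the grep pattern to taste):
admin:$ while ceph status | grep -Eq 'degraded|undersized|peering|recover'; do sleep 30; done
admin:$ ceph osd unset nodown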

And for the first time since upgrading from Kraken through the Luminous release candidates to 12.1.2, my cluster is finally HEALTH_OK. Yay!
roger@desktop:~/ceph-cluster$ ceph -w
  cluster:
    id:     eea7b78c-b138-40fc-9f3e-3d77afb770f0
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum nuc1,nuc2,nuc3
    mgr: nuc3(active), standbys: nuc1, nuc2
    osd: 3 osds: 3 up, 3 in
    rgw: 1 daemon active
 
  data:
    pools:   19 pools, 372 pgs
    objects: 54278 objects, 71724 MB
    usage:   121 GB used, 27820 GB / 27941 GB avail
    pgs:     372 active+clean
 
1. http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#replacing-an-osd



On Wed, Aug 2, 2017 at 11:08 AM Roger Brown <rogerpbrown@xxxxxxxxx> wrote:
Hi,

My OSDs were continuously crashing in cephx_verify_authorizer() while on Luminous v12.1.0 and v12.1.1, but those crashes stopped once I upgraded to v12.1.2.

Now, however, one of my OSDs keeps crashing. Looking closer, the crash has a different cause and started with v12.1.1.

I've been troubleshooting with the aid of http://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/.
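
If more context around the abort would help, the debug levels on the crashing OSD can be raised through its admin socket; this is just a sketch, and settings injected this way do not survive a daemon restart:
osd:$ sudo ceph daemon osd.0 config set debug_osd 20
osd:$ sudo ceph daemon osd.0 config set debug_bluestore 20
osd:$ sudo ceph daemon osd.0 config set debug_rocksdb 20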

I'm considering reweighting it to 0 and then redeploying that OSD from scratch, unless there is some way to do a filesystem repair on BlueStore/RocksDB. Please advise.
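
One avenue for the latter might be the consistency check in ceph-bluestore-tool, assuming it is present and functional in 12.1.2; untested sketch, to be run with the OSD stopped:
osd:$ sudo systemctl stop ceph-osd@0
osd:$ sudo ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0
# (a deeper pass that also reads and verifies object data is available via the --deep option)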


Data follows...

Log:
roger@osd3:~$ sudo journalctl -u ceph-osd@0 --no-pager
...
Aug 02 10:38:47 osd3 systemd[1]: ceph-osd@0.service: Failed with result 'signal'.
Aug 02 10:39:07 osd3 systemd[1]: ceph-osd@0.service: Service hold-off time over, scheduling restart.
Aug 02 10:39:07 osd3 systemd[1]: Stopped Ceph object storage daemon osd.0.
Aug 02 10:39:07 osd3 systemd[1]: Starting Ceph object storage daemon osd.0...
Aug 02 10:39:07 osd3 systemd[1]: Started Ceph object storage daemon osd.0.
Aug 02 10:39:07 osd3 ceph-osd[7413]: starting osd.0 at - osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Aug 02 10:40:48 osd3 ceph-osd[7413]: 2017-08-02 10:40:48.583063 7f5262cc3e00 -1 osd.0 25924 log_to_monitors {default=true}
Aug 02 10:43:32 osd3 ceph-osd[7413]: *** Caught signal (Aborted) **
Aug 02 10:43:32 osd3 ceph-osd[7413]:  in thread 7f524861b700 thread_name:tp_osd_tp
Aug 02 10:43:32 osd3 ceph-osd[7413]:  ceph version 12.1.2 (b661348f156f148d764b998b65b90451f096cb27) luminous (rc)
Aug 02 10:43:32 osd3 ceph-osd[7413]:  1: (()+0xa9a964) [0x5623f0a9c964]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  2: (()+0x11390) [0x7f52611a6390]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  3: (pread64()+0x33) [0x7f52611a5d43]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  4: (KernelDevice::direct_read_unaligned(unsigned long, unsigned long, char*)+0x81) [0x5623f0a7cfc1]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  5: (KernelDevice::read_random(unsigned long, unsigned long, char*, bool)+0x4f3) [0x5623f0a7da43]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  6: (BlueFS::_read_random(BlueFS::FileReader*, unsigned long, unsigned long, char*)+0x4fa) [0x5623f0a4d9ca]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  7: (BlueRocksRandomAccessFile::Read(unsigned long, unsigned long, rocksdb::Slice*, char*) const+0x20) [0x5623f0a77e10]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  8: (rocksdb::RandomAccessFileReader::Read(unsigned long, unsigned long, rocksdb::Slice*, char*) const+0xf8f) [0x5623f0e50acf]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  9: (rocksdb::ReadBlockContents(rocksdb::RandomAccessFileReader*, rocksdb::Footer const&, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockContents*, rocksdb::ImmutableCFOptions const&, bool, rocksdb::Slice const&, rocksdb::PersistentCacheOptions const&)+0x5f3) [0x5623f0e21383]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  10: (()+0xe0f7c6) [0x5623f0e117c6]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  11: (rocksdb::BlockBasedTable::MaybeLoadDataBlockToCache(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::Slice, rocksdb::BlockBasedTable::CachableEntry<rocksdb::Block>*, bool)+0x2f8) [0x5623f0e13928]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  12: (rocksdb::BlockBasedTable::NewDataBlockIterator(rocksdb::BlockBasedTable::Rep*, rocksdb::ReadOptions const&, rocksdb::BlockHandle const&, rocksdb::BlockIter*, bool, rocksdb::Status)+0x2ac) [0x5623f0e13d2c]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  13: (rocksdb::BlockBasedTable::BlockEntryIteratorState::NewSecondaryIterator(rocksdb::Slice const&)+0x97) [0x5623f0e1c4a7]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  14: (()+0xe4576e) [0x5623f0e4776e]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  15: (()+0xe45836) [0x5623f0e47836]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  16: (()+0xe459b1) [0x5623f0e479b1]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  17: (rocksdb::MergingIterator::Next()+0x449) [0x5623f0e2ab09]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  18: (rocksdb::DBIter::FindNextUserEntryInternal(bool, bool)+0x182) [0x5623f0ec7ed2]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  19: (rocksdb::DBIter::Next()+0x1eb) [0x5623f0ec8c8b]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  20: (RocksDBStore::RocksDBWholeSpaceIteratorImpl::next()+0x9a) [0x5623f09dd58a]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  21: (BlueStore::_collection_list(BlueStore::Collection*, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x1170) [0x5623f093d250]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  22: (BlueStore::collection_list(boost::intrusive_ptr<ObjectStore::CollectionImpl>&, ghobject_t const&, ghobject_t const&, int, std::vector<ghobject_t, std::allocator<ghobject_t> >*, ghobject_t*)+0x25a) [0x5623f093e6ea]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  23: (PGBackend::objects_list_range(hobject_t const&, hobject_t const&, snapid_t, std::vector<hobject_t, std::allocator<hobject_t> >*, std::vector<ghobject_t, std::allocator<ghobject_t> >*)+0x192) [0x5623f0700ef2]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  24: (PG::build_scrub_map_chunk(ScrubMap&, hobject_t, hobject_t, bool, unsigned int, ThreadPool::TPHandle&)+0x200) [0x5623f05a8b30]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  25: (PG::chunky_scrub(ThreadPool::TPHandle&)+0x3ea) [0x5623f05d61ca]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  26: (PG::scrub(unsigned int, ThreadPool::TPHandle&)+0x45c) [0x5623f05d7cec]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  27: (OSD::ShardedOpWQ::_process(unsigned int, ceph::heartbeat_handle_d*)+0x12d0) [0x5623f05179e0]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  28: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0x884) [0x5623f0ae44e4]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  29: (ShardedThreadPool::WorkThreadSharded::entry()+0x10) [0x5623f0ae7520]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  30: (()+0x76ba) [0x7f526119c6ba]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  31: (clone()+0x6d) [0x7f52602133dd]
Aug 02 10:43:32 osd3 ceph-osd[7413]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Aug 02 10:43:32 osd3 systemd[1]: ceph-osd@0.service: Main process exited, code=killed, status=6/ABRT
Aug 02 10:43:32 osd3 systemd[1]: ceph-osd@0.service: Unit entered failed state.
Aug 02 10:43:32 osd3 systemd[1]: ceph-osd@0.service: Failed with result 'signal'.
Aug 02 10:43:52 osd3 systemd[1]: ceph-osd@0.service: Service hold-off time over, scheduling restart.
Aug 02 10:43:52 osd3 systemd[1]: Stopped Ceph object storage daemon osd.0.
Aug 02 10:43:52 osd3 systemd[1]: Starting Ceph object storage daemon osd.0...
Aug 02 10:43:52 osd3 systemd[1]: Started Ceph object storage daemon osd.0.
Aug 02 10:43:52 osd3 ceph-osd[8322]: starting osd.0 at - osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal

roger@desktop:~$ ceph -s
  cluster:
    id:     eea7b78c-b138-40fc-9f3e-3d77afb770f0
    health: HEALTH_WARN
            1 osds down
            1 host (1 osds) down
            Degraded data redundancy: 43922/162834 objects degraded (26.973%), 300 pgs unclean, 305 pgs degraded
            114 pgs not deep-scrubbed for 86400
            155 pgs not scrubbed for 86400
            10 slow requests are blocked > 32 sec
 
  services:
    mon: 3 daemons, quorum nuc1,nuc2,nuc3
    mgr: nuc3(active), standbys: nuc2, nuc1
    osd: 3 osds: 2 up, 3 in
    rgw: 1 daemon active
 
  data:
    pools:   19 pools, 372 pgs
    objects: 54278 objects, 71724 MB
    usage:   122 GB used, 27819 GB / 27941 GB avail
    pgs:     43922/162834 objects degraded (26.973%)
             303 active+undersized+degraded
             67  active+clean
             2   active+recovery_wait+degraded

roger@desktop:~$ ceph osd tree
ID CLASS WEIGHT   TYPE NAME     STATUS REWEIGHT PRI-AFF 
-1       27.28679 root default                          
-5        9.09560     host osd1                         
 3   hdd  9.09560         osd.3     up  1.00000 1.00000 
-6        9.09560     host osd2                         
 4   hdd  9.09560         osd.4     up  1.00000 1.00000 
-2        9.09560     host osd3                         
 0   hdd  9.09560         osd.0   down  1.00000 1.00000 

roger@desktop:~$ ceph mon versions
{
    "ceph version 12.1.2 (b661348f156f148d764b998b65b90451f096cb27) luminous (rc)": 3
}
roger@desktop:~$ ceph osd versions
{
    "ceph version 12.1.2 (b661348f156f148d764b998b65b90451f096cb27) luminous (rc)": 2
}

roger@osd3:~$ sudo ceph daemon osd.0 status
{
    "cluster_fsid": "eea7b78c-b138-40fc-9f3e-3d77afb770f0",
    "osd_fsid": "bdb31a03-e381-4bf8-82e3-18916c838308",
    "whoami": 0,
    "state": "waiting_for_healthy",
    "oldest_map": 25389,
    "newest_map": 25938,
    "num_pgs": 372
}

roger@desktop:~$ ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED 
    27941G     27819G         122G          0.44 
POOLS:
    NAME                           ID      USED       %USED     MAX AVAIL     OBJECTS 
    default.rgw.rgw.gc             70           0         0         8807G           0 
    default.rgw.buckets.non-ec     83           0         0         8807G          43 
    default.rgw.control            85           0         0         8807G           8 
    default.rgw.data.root          86       15601         0         8807G          49 
    default.rgw.gc                 87           0         0         8807G          32 
    default.rgw.lc                 88           0         0         8807G          32 
    default.rgw.log                89           0         0         8807G         144 
    default.rgw.users.uid          90        3346         0         8807G          14 
    default.rgw.users.email        91         100         0         8807G           7 
    default.rgw.users.keys         92         100         0         8807G           7 
    default.rgw.buckets.index      93           0         0         8807G          39 
    default.rgw.intent-log         95           0         0         8807G           0 
    default.rgw.meta               96           0         0         8807G           0 
    default.rgw.usage              97           0         0         8807G           0 
    default.rgw.users.swift        98          39         0         8807G           4 
    default.rgw.buckets.extra      99           0         0         8807G           0 
    .rgw.root                      100       1681         0         8807G           4 
    default.rgw.reshard            101          0         0         8807G          17 
    default.rgw.buckets.data       103     71724M      0.40        17614G       53878 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
