Re: Re: Re: OSDs continuously restarting under load

badblocks has found over 50 bad sectors so far and is still running.
xfs_repair stopped twice with just the message "Killed", likely
indicating that it hit a bus error similar to the one ceph-osd is
running into. This seems like a fairly simple case of failing disks. I
just hope I can get through it without data loss.
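
While badblocks grinds away, I'm also pulling the SMART data to decide
whether these disks are even worth salvaging. Roughly what I'm running
(the device name is just the one from my earlier mail quoted below):

smartctl -H /dev/sdn
smartctl -l error /dev/sdn
smartctl -t long /dev/sdn   # long self-test, runs in the background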

On Thu, Apr 30, 2020 at 10:14 PM David Turner <drakonstein@xxxxxxxxx> wrote:

> I have 2 filestore OSDs in a cluster hitting "Caught signal (Bus error)"
> as well and can't find anything about it. Ceph 12.2.12. The disks are
> less than 50% full and basic writes to them succeed. The two disks are
> on different nodes; the other 14 disks on each node are unaffected.
>
> Restarting the node doesn't change the behavior. The affected OSD still
> crashes and the other 14 start fine (which likely rules out the controller
> and other shared components along those lines).
>
> I've attempted these commands [1] on the OSDs to see how much of the
> disk I could access cleanly. The first just flushes the journal to
> disk, and it crashed out with the same error. The second compacts the
> DB, and it also crashed with the same error. On one of the OSDs I made
> it a fair bit into compacting the DB before the first crash; now it
> crashes instantly.
>
> That leads me to think it may have reached a specific part of the disk
> and/or filesystem that is having problems. I'm currently running
> xfs_repair [2] on one of the disks to check the filesystem, and
> badblocks [3] on the other to check for problems with the underlying
> sectors.
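>
> Before trusting the filesystem-level tools too much, I may also just
> read the raw device end to end while watching dmesg, to see whether the
> I/O errors are localized. Something along these lines:
>
> dd if=/dev/sdn of=/dev/null bs=1M conv=noerror status=progress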
>
> I'm assuming that if a bad block on the disk is preventing the OSD from
> starting, there's really nothing I can do to recover it, and I'll just
> need to export any PGs on these disks that aren't active. Here's hoping
> I make it through this without data loss. Since I started this data
> migration I've already lost a couple of disks (completely unreadable by
> the OS, so I couldn't get copies of the PGs off of them). Luckily, on
> these ones it looks like I can at least still access that part of the
> data. As well, I only have some unfound objects at the moment, but all
> of my PGs are active, which is an improvement.
>
>
> [1] sudo -u ceph ceph-osd -i 285 --flush-journal
> sudo -u ceph ceph-kvstore-tool leveldb
> /var/lib/ceph/osd/ceph-285/current/omap compact
>
> [2] xfs_repair -n /dev/sdi1
> [3] badblocks -b 4096 -v /dev/sdn
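>
> For anyone finding this thread later: my rough plan for pulling PGs off
> these disks, assuming ceph-objectstore-tool can still read them, is an
> export/import along these lines (OSD stopped first; the pgid is a
> placeholder):
>
> systemctl stop ceph-osd@285
> ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-285 \
>     --journal-path /var/lib/ceph/osd/ceph-285/journal \
>     --op export --pgid <pgid> --file /root/<pgid>.export
>
> followed by an --op import of that file on a healthy (also stopped) OSD.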
>
> On Thu, Mar 19, 2020 at 9:04 AM huxiaoyu@xxxxxxxxxxxx <
> huxiaoyu@xxxxxxxxxxxx> wrote:
>
>> Hi, Igor,
>>
>> thanks for the tip. dmesg does not show anything suspicious.
>>
>> I will investigate whether the hardware has any problems.
>>
>> best regards,
>>
>> samuel
>>
>>
>>
>>
>>
>> huxiaoyu@xxxxxxxxxxxx
>>
>> From: Igor Fedotov
>> Sent: 2020-03-19 12:07
>> To: huxiaoyu@xxxxxxxxxxxx; ceph-users; ceph-users
>> Subject: Re:  OSDs continuously restarting under load
>> Hi, Samuel,
>>
>> I've never seen that sort of signal in real life:
>>
>> 2020-03-18 18:39:26.426584 201e35fdb40 -1 *** Caught signal (Bus error) **
>>
>>
>> I suppose this has hardware roots. Have you checked the dmesg output?
>>
>>
>> Just in case, here is some info on the "Bus error" signal; maybe it
>> will provide some insight: https://en.wikipedia.org/wiki/Bus_error
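>>
>> If nothing obvious shows up in the raw dmesg, a quick grep for media
>> errors and a look at the SMART counters may be worth it too, something
>> like (sdX stands for the OSD's data disk):
>>
>> dmesg -T | grep -iE 'i/o error|medium error'
>> smartctl -A /dev/sdX | grep -iE 'reallocated|pending|uncorrect'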
>>
>>
>> Thanks,
>>
>> Igor
>>
>>
>> On 3/18/2020 5:06 PM, huxiaoyu@xxxxxxxxxxxx wrote:
>> > Hello, folks,
>> >
>> > I am trying to add a ceph node to an existing ceph cluster. Once the
>> reweight of a newly-added OSD on the new node exceeds roughly 0.4, the
>> OSD becomes unresponsive and keeps restarting, eventually going down.
>> >
>> > What could be the problem?  Any suggestion would be highly appreciated.
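>> >
>> > For reference, I have been raising the crush weight in small steps
>> > with commands like the following (the osd id and step size here are
>> > just examples), checking ceph -s between steps:
>> >
>> > ceph osd crush reweight osd.16 0.2
>> > ceph osd crush reweight osd.16 0.4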
>> >
>> > best regards,
>> >
>> > samuel
>> >
>> > ****************************************************
>> > root@node81:/var/log/ceph#
>> > root@node81:/var/log/ceph#
>> > root@node81:/var/log/ceph#
>> > root@node81:/var/log/ceph# ceph osd df
>> > ID CLASS  WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE VAR  PGS
>> > 12 hybrid 1.00000  1.00000 3.81TiB 38.3GiB 3.77TiB 0.98 1.32 316
>> > 13 hybrid 1.00000  1.00000 3.81TiB 37.6GiB 3.77TiB 0.96 1.29 308
>> > 14 hybrid 1.00000  1.00000 3.81TiB 36.9GiB 3.77TiB 0.95 1.27 301
>> > 15 hybrid 1.00000  1.00000 3.81TiB 37.1GiB 3.77TiB 0.95 1.28 297
>> >   0 hybrid 1.00000  1.00000 3.81TiB 37.6GiB 3.77TiB 0.96 1.29 305
>> >   1 hybrid 1.00000  1.00000 3.81TiB 38.2GiB 3.77TiB 0.98 1.31 309
>> >   2 hybrid 1.00000  1.00000 3.81TiB 37.4GiB 3.77TiB 0.96 1.29 296
>> >   3 hybrid 1.00000  1.00000 3.81TiB 37.9GiB 3.77TiB 0.97 1.30 303
>> >   4    hdd 0.20000  1.00000 3.42TiB 10.5GiB 3.41TiB 0.30 0.40   0
>> >   5    hdd 0.20000  1.00000 3.42TiB 9.63GiB 3.41TiB 0.28 0.37  87
>> >   6    hdd 0.20000  1.00000 3.42TiB 1.91GiB 3.42TiB 0.05 0.07   0
>> >   7    hdd 0.20000  1.00000 3.42TiB 11.3GiB 3.41TiB 0.32 0.43  83
>> > 16    hdd 0.39999  1.00000 1.79TiB 16.3GiB 1.78TiB 0.89 1.19 142
>> >                       TOTAL 45.9TiB  351GiB 45.6TiB 0.75
>> >
>> >
>> ------------------------------------------------------------------------------------
>> Logs
>> >
>> > root@node81:/var/log/ceph# cat ceph-osd.6.log | grep load_pgs
>> > 2020-03-18 18:33:57.808747 2000b556000  0 osd.6 0 load_pgs
>> > 2020-03-18 18:33:57.808763 2000b556000  0 osd.6 0 load_pgs opened 0 pgs
>> >   -1324> 2020-03-18 18:33:57.808747 2000b556000  0 osd.6 0 load_pgs
>> >   -1323> 2020-03-18 18:33:57.808763 2000b556000  0 osd.6 0 load_pgs
>> opened 0 pgs
>> > 2020-03-18 18:35:04.363341 20003270000  0 osd.6 5222 load_pgs
>> > 2020-03-18 18:36:15.318489 20003270000  0 osd.6 5222 load_pgs opened
>> 202 pgs
>> >    -466> 2020-03-18 18:35:04.363341 20003270000  0 osd.6 5222 load_pgs
>> >    -465> 2020-03-18 18:36:15.318489 20003270000  0 osd.6 5222 load_pgs
>> opened 202 pgs
>> > 2020-03-18 18:36:32.367450 2000326e000  0 osd.6 5236 load_pgs
>> > 2020-03-18 18:37:40.747347 2000326e000  0 osd.6 5236 load_pgs opened
>> 177 pgs
>> >    -422> 2020-03-18 18:36:32.367450 2000326e000  0 osd.6 5236 load_pgs
>> >    -421> 2020-03-18 18:37:40.747347 2000326e000  0 osd.6 5236 load_pgs
>> opened 177 pgs
>> > 2020-03-18 18:37:56.579371 2000f374000  0 osd.6 5247 load_pgs
>> > 2020-03-18 18:39:03.376838 2000f374000  0 osd.6 5247 load_pgs opened
>> 170 pgs
>> >     -67> 2020-03-18 18:37:56.579371 2000f374000  0 osd.6 5247 load_pgs
>> >     -66> 2020-03-18 18:39:03.376838 2000f374000  0 osd.6 5247 load_pgs
>> opened 170 pgs
>> >
>> >
>> > 2020-03-18 18:39:09.483868 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47f2043:::rbd_data.8a738625558ec.00000000000056a3:head have 3291'557
>> flags = none tried to add 3291'557 flags = none
>> > 2020-03-18 18:39:09.483882 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47f2a18:::rbd_data.9177446e87ccd.00000000000010f8:head have 4738'731
>> flags = none tried to add 4738'731 flags = none
>> > 2020-03-18 18:39:09.483896 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47fc7a4:::rbd_data.58f426b8b4567.0000000000000221:head have 1789'169
>> flags = delete tried to add 1789'169 flags = delete
>> > 2020-03-18 18:39:20.985370 2000fc61b40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.11:0/3129700933
>> conn(0x200140cb3f0 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> > 2020-03-18 18:39:21.495101 2000ec1fb40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.12:0/4111063261
>> conn(0x200140c55a0 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> > 2020-03-18 18:39:21.495101 2000fc61b40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.13:0/464497787
>> conn(0x200140fd4b0 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> > 2020-03-18 18:39:21.629021 2000ec1fb40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.201:0/4088469422
>> conn(0x20014100b10 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> > 2020-03-18 18:39:26.426584 201e35fdb40 -1 *** Caught signal (Bus error)
>> **
>> >   in thread 201e35fdb40 thread_name:tp_osd_tp
>> >
>> >   ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5)
>> luminous (stable)
>> >   1: (()+0x145882c) [0x2000245882c]
>> >   2: (()+0x19890) [0x2000c54b890]
>> >   3: (BlueStore::ExtentMap::reshard(KeyValueDB*,
>> std::shared_ptr<KeyValueDB::TransactionImpl>)+0x2df0) [0x2000229da60]
>> >   4: (BlueStore::_txc_write_nodes(BlueStore::TransContext*,
>> std::shared_ptr<KeyValueDB::TransactionImpl>)+0x218) [0x2000229f888]
>> >   5: (BlueStore::queue_transactions(ObjectStore::Sequencer*,
>> std::vector<ObjectStore::Transaction,
>> std::allocator<ObjectStore::Transaction> >&,
>> boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x71c)
>> [0x200022c7a6c]
>> >   6: (ObjectStore::queue_transaction(ObjectStore::Sequencer*,
>> ObjectStore::Transaction&&, Context*, Context*, Context*,
>> boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x240)
>> [0x20001c19ee0]
>> >   7: (PrimaryLogPG::queue_transaction(ObjectStore::Transaction&&,
>> boost::intrusive_ptr<OpRequest>)+0x90) [0x20001e871b0]
>> >   8:
>> (ReplicatedBackend::_do_push(boost::intrusive_ptr<OpRequest>)+0x730)
>> [0x2000202e970]
>> >   9:
>> (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x59c)
>> [0x200020442bc]
>> >   10: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x94)
>> [0x20001ecea74]
>> >   11: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&,
>> ThreadPool::TPHandle&)+0x814) [0x20001de1384]
>> >   12: (OSD::dequeue_op(boost::intrusive_ptr<PG>,
>> boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x614)
>> [0x20001b817d4]
>> >   13: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest>
>> const&)+0xb8) [0x20001f98968]
>> >   14: (OSD::ShardedOpWQ::_process(unsigned int,
>> ceph::heartbeat_handle_d*)+0x1c24) [0x20001bb5fd4]
>> >   15: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0xab4)
>> [0x200024d60a4]
>> >   16: (ShardedThreadPool::WorkThreadSharded::entry()+0x28)
>> [0x200024da278]
>> >   17: (Thread::entry_wrapper()+0xec) [0x20002769b4c]
>> >   18: (Thread::_entry_func(void*)+0x20) [0x20002769ba0]
>> >   19: (()+0x80fc) [0x2000c53a0fc]
>> >   20: (()+0x119854) [0x2000f2ad854]
>> >   NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>> needed to interpret this.
>> >
>> > --- begin dump of recent events ---
>> >    -147> 2020-03-18 18:37:51.039443 2000f374000  5 asok(0x2000cd7f230)
>> register_command perfcounters_dump hook 0x2000ce09e40
>> >    -146> 2020-03-18 18:37:51.039716 2000f374000  5 asok(0x2000cd7f230)
>> register_command 1 hook 0x2000ce09e40
>> >    -145> 2020-03-18 18:37:51.039736 2000f374000  5 asok(0x2000cd7f230)
>> register_command perf dump hook 0x2000ce09e40
>> >    -144> 2020-03-18 18:37:51.039769 2000f374000  5 asok(0x2000cd7f230)
>> register_command perfcounters_schema hook 0x2000ce09e40
>> >    -143> 2020-03-18 18:37:51.039789 2000f374000  5 asok(0x2000cd7f230)
>> register_command perf histogram dump hook 0x2000ce09e40
>> >    -142> 2020-03-18 18:37:51.039807 2000f374000  5 asok(0x2000cd7f230)
>> register_command 2 hook 0x2000ce09e40
>> >    -141> 2020-03-18 18:37:51.039823 2000f374000  5 asok(0x2000cd7f230)
>> register_command perf schema hook 0x2000ce09e40
>> >    -140> 2020-03-18 18:37:51.039843 2000f374000  5 asok(0x2000cd7f230)
>> register_command perf histogram schema hook 0x2000ce09e40
>> >    -139> 2020-03-18 18:37:51.039863 2000f374000  5 asok(0x2000cd7f230)
>> register_command perf reset hook 0x2000ce09e40
>> >    -138> 2020-03-18 18:37:51.039881 2000f374000  5 asok(0x2000cd7f230)
>> register_command config show hook 0x2000ce09e40
>> >    -137> 2020-03-18 18:37:51.039899 2000f374000  5 asok(0x2000cd7f230)
>> register_command config help hook 0x2000ce09e40
>> >    -136> 2020-03-18 18:37:51.039928 2000f374000  5 asok(0x2000cd7f230)
>> register_command config set hook 0x2000ce09e40
>> >    -135> 2020-03-18 18:37:51.039949 2000f374000  5 asok(0x2000cd7f230)
>> register_command config get hook 0x2000ce09e40
>> >    -134> 2020-03-18 18:37:51.039967 2000f374000  5 asok(0x2000cd7f230)
>> register_command config diff hook 0x2000ce09e40
>> >    -133> 2020-03-18 18:37:51.039985 2000f374000  5 asok(0x2000cd7f230)
>> register_command config diff get hook 0x2000ce09e40
>> >    -132> 2020-03-18 18:37:51.040005 2000f374000  5 asok(0x2000cd7f230)
>> register_command log flush hook 0x2000ce09e40
>> >    -131> 2020-03-18 18:37:51.040021 2000f374000  5 asok(0x2000cd7f230)
>> register_command log dump hook 0x2000ce09e40
>> >    -130> 2020-03-18 18:37:51.040038 2000f374000  5 asok(0x2000cd7f230)
>> register_command log reopen hook 0x2000ce09e40
>> >    -129> 2020-03-18 18:37:51.040189 2000f374000  5 asok(0x2000cd7f230)
>> register_command dump_mempools hook 0x2000ce0d038
>> >    -128> 2020-03-18 18:37:51.099580 2000f374000 -1 WARNING: the
>> following dangerous and experimental features are enabled: bluestore,rocksdb
>> >    -127> 2020-03-18 18:37:51.102046 2000f374000 -1 WARNING: the
>> following dangerous and experimental features are enabled: bluestore,rocksdb
>> >    -126> 2020-03-18 18:37:51.102148 2000f374000  0 ceph version 12.2.7
>> (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable), process
>> ceph-osd, pid 1159687
>> >    -125> 2020-03-18 18:37:51.109860 2000f374000  0 pidfile_write:
>> ignore empty --pid-file
>> >    -124> 2020-03-18 18:37:51.130256 2000f374000 -1 WARNING: the
>> following dangerous and experimental features are enabled: bluestore,rocksdb
>> >    -123> 2020-03-18 18:37:51.183798 2000f374000  0 load: jerasure load:
>> lrc
>> >    -122> 2020-03-18 18:37:51.184330 2000f374000  1 bdev create path
>> /var/lib/ceph/osd/ceph-6/block type kernel
>> >    -121> 2020-03-18 18:37:51.184492 2000f374000  1 bdev(0x2000cf49800
>> /var/lib/ceph/osd/ceph-6/block) open path /var/lib/ceph/osd/ceph-6/block
>> >    -120> 2020-03-18 18:37:51.184741 2000f374000  1 bdev(0x2000cf49800
>> /var/lib/ceph/osd/ceph-6/block) open backing device/file reports st_blksize
>> 8192, using bdev_block_size 4096 anyway
>> >    -119> 2020-03-18 18:37:51.185841 2000f374000  1 bdev(0x2000cf49800
>> /var/lib/ceph/osd/ceph-6/block) open size 3758096384000 (0x36b00000000,
>> 3500 GB) block_size 4096 (4096 B) rotational
>> >    -118> 2020-03-18 18:37:51.186232 2000f374000  1 bdev(0x2000cf49800
>> /var/lib/ceph/osd/ceph-6/block) close
>> >    -117> 2020-03-18 18:37:51.498718 2000f374000  1 bdev create path
>> /var/lib/ceph/osd/ceph-6/block type kernel
>> >    -116> 2020-03-18 18:37:51.498748 2000f374000  1 bdev(0x2000cf5d560
>> /var/lib/ceph/osd/ceph-6/block) open path /var/lib/ceph/osd/ceph-6/block
>> >    -115> 2020-03-18 18:37:51.498811 2000f374000  1 bdev(0x2000cf5d560
>> /var/lib/ceph/osd/ceph-6/block) open backing device/file reports st_blksize
>> 8192, using bdev_block_size 4096 anyway
>> >    -114> 2020-03-18 18:37:51.499234 2000f374000  1 bdev(0x2000cf5d560
>> /var/lib/ceph/osd/ceph-6/block) open size 3758096384000 (0x36b00000000,
>> 3500 GB) block_size 4096 (4096 B) rotational
>> >    -113> 2020-03-18 18:37:51.500074 2000f374000  1 bdev create path
>> /var/lib/ceph/osd/ceph-6/block.db type kernel
>> >    -112> 2020-03-18 18:37:51.500096 2000f374000  1 bdev(0x2000cf5e6a0
>> /var/lib/ceph/osd/ceph-6/block.db) open path
>> /var/lib/ceph/osd/ceph-6/block.db
>> >    -111> 2020-03-18 18:37:51.500170 2000f374000  1 bdev(0x2000cf5e6a0
>> /var/lib/ceph/osd/ceph-6/block.db) open backing device/file reports
>> st_blksize 8192, using bdev_block_size 4096 anyway
>> >    -110> 2020-03-18 18:37:51.500815 2000f374000  1 bdev(0x2000cf5e6a0
>> /var/lib/ceph/osd/ceph-6/block.db) open size 39998980096 (0x950200000,
>> 38146 MB) block_size 4096 (4096 B) rotational
>> >    -109> 2020-03-18 18:37:51.502625 2000f374000  1 bdev create path
>> /var/lib/ceph/osd/ceph-6/block type kernel
>> >    -108> 2020-03-18 18:37:51.502651 2000f374000  1 bdev(0x2000cf5ed80
>> /var/lib/ceph/osd/ceph-6/block) open path /var/lib/ceph/osd/ceph-6/block
>> >    -107> 2020-03-18 18:37:51.502718 2000f374000  1 bdev(0x2000cf5ed80
>> /var/lib/ceph/osd/ceph-6/block) open backing device/file reports st_blksize
>> 8192, using bdev_block_size 4096 anyway
>> >    -106> 2020-03-18 18:37:51.503137 2000f374000  1 bdev(0x2000cf5ed80
>> /var/lib/ceph/osd/ceph-6/block) open size 3758096384000 (0x36b00000000,
>> 3500 GB) block_size 4096 (4096 B) rotational
>> >    -105> 2020-03-18 18:37:51.549269 2000f374000  0  set rocksdb option
>> compaction_readahead_size = 2MB
>> >    -104> 2020-03-18 18:37:51.549349 2000f374000  0  set rocksdb option
>> compaction_style = kCompactionStyleLevel
>> >    -103> 2020-03-18 18:37:51.552610 2000f374000  0  set rocksdb option
>> compaction_threads = 32
>> >    -102> 2020-03-18 18:37:51.552652 2000f374000  0  set rocksdb option
>> compression = kNoCompression
>> >    -101> 2020-03-18 18:37:51.553442 2000f374000  0  set rocksdb option
>> flusher_threads = 8
>> >    -100> 2020-03-18 18:37:51.553508 2000f374000  0  set rocksdb option
>> level0_file_num_compaction_trigger = 64
>> >     -99> 2020-03-18 18:37:51.553536 2000f374000  0  set rocksdb option
>> level0_slowdown_writes_trigger = 128
>> >     -98> 2020-03-18 18:37:51.553559 2000f374000  0  set rocksdb option
>> level0_stop_writes_trigger = 256
>> >     -97> 2020-03-18 18:37:51.553579 2000f374000  0  set rocksdb option
>> max_background_compactions = 64
>> >     -96> 2020-03-18 18:37:51.553601 2000f374000  0  set rocksdb option
>> max_bytes_for_level_base = 2GB
>> >     -95> 2020-03-18 18:37:51.553624 2000f374000  0  set rocksdb option
>> max_write_buffer_number = 64
>> >     -94> 2020-03-18 18:37:51.553646 2000f374000  0  set rocksdb option
>> min_write_buffer_number_to_merge = 32
>> >     -93> 2020-03-18 18:37:51.553665 2000f374000  0  set rocksdb option
>> recycle_log_file_num = 64
>> >     -92> 2020-03-18 18:37:51.553687 2000f374000  0  set rocksdb option
>> target_file_size_base = 4MB
>> >     -91> 2020-03-18 18:37:51.553708 2000f374000  0  set rocksdb option
>> write_buffer_size = 4MB
>> >     -90> 2020-03-18 18:37:51.553892 2000f374000  0  set rocksdb option
>> compaction_readahead_size = 2MB
>> >     -89> 2020-03-18 18:37:51.553923 2000f374000  0  set rocksdb option
>> compaction_style = kCompactionStyleLevel
>> >     -88> 2020-03-18 18:37:51.553948 2000f374000  0  set rocksdb option
>> compaction_threads = 32
>> >     -87> 2020-03-18 18:37:51.553973 2000f374000  0  set rocksdb option
>> compression = kNoCompression
>> >     -86> 2020-03-18 18:37:51.553994 2000f374000  0  set rocksdb option
>> flusher_threads = 8
>> >     -85> 2020-03-18 18:37:51.554016 2000f374000  0  set rocksdb option
>> level0_file_num_compaction_trigger = 64
>> >     -84> 2020-03-18 18:37:51.554043 2000f374000  0  set rocksdb option
>> level0_slowdown_writes_trigger = 128
>> >     -83> 2020-03-18 18:37:51.554065 2000f374000  0  set rocksdb option
>> level0_stop_writes_trigger = 256
>> >     -82> 2020-03-18 18:37:51.554084 2000f374000  0  set rocksdb option
>> max_background_compactions = 64
>> >     -81> 2020-03-18 18:37:51.554106 2000f374000  0  set rocksdb option
>> max_bytes_for_level_base = 2GB
>> >     -80> 2020-03-18 18:37:51.554133 2000f374000  0  set rocksdb option
>> max_write_buffer_number = 64
>> >     -79> 2020-03-18 18:37:51.554154 2000f374000  0  set rocksdb option
>> min_write_buffer_number_to_merge = 32
>> >     -78> 2020-03-18 18:37:51.554174 2000f374000  0  set rocksdb option
>> recycle_log_file_num = 64
>> >     -77> 2020-03-18 18:37:51.554196 2000f374000  0  set rocksdb option
>> target_file_size_base = 4MB
>> >     -76> 2020-03-18 18:37:51.554232 2000f374000  0  set rocksdb option
>> write_buffer_size = 4MB
>> >     -75> 2020-03-18 18:37:56.382110 2000f374000  0 <cls>
>> /home/deepin/hhao/srccode/ceph-12.2.7/src/cls/hello/cls_hello.cc:296:
>> loading cls_hello
>> >     -74> 2020-03-18 18:37:56.383845 2000f374000  0 _get_class not
>> permitted to load lua
>> >     -73> 2020-03-18 18:37:56.386594 2000f374000  0 _get_class not
>> permitted to load sdk
>> >     -72> 2020-03-18 18:37:56.395800 2000f374000  0 _get_class not
>> permitted to load kvs
>> >     -71> 2020-03-18 18:37:56.398226 2000f374000  0 <cls>
>> /home/deepin/hhao/srccode/ceph-12.2.7/src/cls/cephfs/cls_cephfs.cc:197:
>> loading cephfs
>> >     -70> 2020-03-18 18:37:56.433293 2000f374000  0 osd.6 5247 crush map
>> has features 432629239337189376, adjusting msgr requires for clients
>> >     -69> 2020-03-18 18:37:56.433330 2000f374000  0 osd.6 5247 crush map
>> has features 432629239337189376 was 8705, adjusting msgr requires for mons
>> >     -68> 2020-03-18 18:37:56.433357 2000f374000  0 osd.6 5247 crush map
>> has features 1009089991640629248, adjusting msgr requires for osds
>> >     -67> 2020-03-18 18:37:56.579371 2000f374000  0 osd.6 5247 load_pgs
>> >     -66> 2020-03-18 18:39:03.376838 2000f374000  0 osd.6 5247 load_pgs
>> opened 170 pgs
>> >     -65> 2020-03-18 18:39:03.377040 2000f374000  0 osd.6 5247 using
>> weightedpriority op queue with priority op cut off at 196.
>> >     -64> 2020-03-18 18:39:03.413901 2000f374000 -1 osd.6 5247
>> log_to_monitors {default=true}
>> >     -63> 2020-03-18 18:39:03.663128 2000f374000  0 osd.6 5247 done with
>> init, starting boot process
>> >     -62> 2020-03-18 18:39:03.663856 201d65fdb40  4 mgrc handle_mgr_map
>> Got map version 34
>> >     -61> 2020-03-18 18:39:03.663947 201d65fdb40  4 mgrc handle_mgr_map
>> Active mgr is now 192.168.230.120:6808/44007
>> >     -60> 2020-03-18 18:39:03.663972 201d65fdb40  4 mgrc reconnect
>> Starting new session with 192.168.230.120:6808/44007
>> >     -59> 2020-03-18 18:39:03.667814 201d65fdb40  4 mgrc
>> handle_mgr_configure stats_period=5
>> >     -58> 2020-03-18 18:39:03.667831 201d65fdb40  4 mgrc
>> handle_mgr_configure updated stats threshold: 5
>> >     -57> 2020-03-18 18:39:03.752253 2000dae5b40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.121:6802/33592
>> conn(0x20014018040 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -56> 2020-03-18 18:39:03.754040 2000fc61b40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.121:6800/33864
>> conn(0x2001403ee60 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -55> 2020-03-18 18:39:03.762441 2000ec1fb40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.120:6804/13410
>> conn(0x20014079940 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -54> 2020-03-18 18:39:03.762860 2000ec1fb40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.120:6806/13743
>> conn(0x20014083980 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -53> 2020-03-18 18:39:03.765775 2000fc61b40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.121:6806/12423
>> conn(0x2001407e030 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -52> 2020-03-18 18:39:03.767284 2000ec1fb40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.121:6804/11599
>> conn(0x2001408c660 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -51> 2020-03-18 18:39:03.769434 2000dae5b40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.120:6800/12205
>> conn(0x20014087ff0 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -50> 2020-03-18 18:39:03.775190 2000ec1fb40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.120:6802/12991
>> conn(0x2001409d300 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -49> 2020-03-18 18:39:04.009755 2000dae5b40  0 --
>> 192.168.240.122:6806/1159687 >> 192.168.240.122:6804/1159466
>> conn(0x200140b3420 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=0).handle_connect_msg: challenging authorizer
>> >     -48> 2020-03-18 18:39:05.177544 2000ec1fb40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.202:0/3091162658
>> conn(0x200140cd900 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> >     -47> 2020-03-18 18:39:05.402465 2000dae5b40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.201:0/4289863819
>> conn(0x200140d8500 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> >     -46> 2020-03-18 18:39:09.483237 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b442b93f:::rbd_data.911772ae8944a.0000000000002aa7:head have 3097'452
>> flags = none tried to add 3097'452 flags = none
>> >     -45> 2020-03-18 18:39:09.483318 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b445877b:::rbd_data.2fa7e6b8b4567.000000000000002d:head have 1915'212
>> flags = none tried to add 1915'212 flags = none
>> >     -44> 2020-03-18 18:39:09.483336 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44674a4:::rbd_data.110706b8b4567.0000000000000659:head have 1915'213
>> flags = none tried to add 1915'213 flags = none
>> >     -43> 2020-03-18 18:39:09.483351 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44757d1:::rbd_data.cd282238e1f29.0000000000009ea2:head have 5165'734
>> flags = none tried to add 5165'734 flags = none
>> >     -42> 2020-03-18 18:39:09.483366 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4490029:::rbd_data.8d8146b8b4567.00000000000080a0:head have 2855'272
>> flags = none tried to add 2855'272 flags = none
>> >     -41> 2020-03-18 18:39:09.483381 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44950ae:::rbd_data.5f15f625558ec.0000000000009fd2:head have 1915'214
>> flags = none tried to add 1915'214 flags = none
>> >     -40> 2020-03-18 18:39:09.483395 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b449ce0d:::rbd_data.9117a327b23c6.00000000000074a6:head have 3798'560
>> flags = none tried to add 3798'560 flags = none
>> >     -39> 2020-03-18 18:39:09.483409 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44a69ad:::rbd_data.8b4a76b8b4567.000000000000017a:head have 2197'242
>> flags = none tried to add 2197'242 flags = none
>> >     -38> 2020-03-18 18:39:09.483423 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44bbb34:::rbd_data.8922074b0dc51.00000000000098a5:head have 3099'543
>> flags = delete tried to add 3099'543 flags = delete
>> >     -37> 2020-03-18 18:39:09.483438 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44be196:::rbd_data.cd8e219495cff.00000000000192a0:head have 5169'1101
>> flags = delete tried to add 5169'1101 flags = delete
>> >     -36> 2020-03-18 18:39:09.483454 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44cd1c7:::rbd_data.5f15f625558ec.000000000000820b:head have 1915'215
>> flags = none tried to add 1915'215 flags = none
>> >     -35> 2020-03-18 18:39:09.483469 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44e04c0:::rbd_data.cd66c46e87ccd.0000000000015aa4:head have 5175'1280
>> flags = none tried to add 5175'1280 flags = none
>> >     -34> 2020-03-18 18:39:09.483483 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b44fa767:::rbd_data.ccf5e625558ec.00000000000150a0:head have 5166'844
>> flags = delete tried to add 5166'844 flags = delete
>> >     -33> 2020-03-18 18:39:09.483497 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4518cb0:::rbd_data.589572ae8944a.0000000000000433:head have 1805'174
>> flags = delete tried to add 1805'174 flags = delete
>> >     -32> 2020-03-18 18:39:09.483511 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4539e4a:::rbd_data.5f15f625558ec.000000000000031a:head have 1915'216
>> flags = none tried to add 1915'216 flags = none
>> >     -31> 2020-03-18 18:39:09.483525 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4540dfd:::rbd_data.cd66c46e87ccd.000000000001d6a6:head have 5175'1283
>> flags = none tried to add 5175'1283 flags = none
>> >     -30> 2020-03-18 18:39:09.483539 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b45474ee:::rbd_data.2fa7e6b8b4567.000000000000003a:head have 1915'217
>> flags = none tried to add 1915'217 flags = none
>> >     -29> 2020-03-18 18:39:09.483553 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b45506f1:::rbd_data.9177446e87ccd.00000000000036c3:head have 5175'1284
>> flags = none tried to add 5175'1284 flags = none
>> >     -28> 2020-03-18 18:39:09.483567 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4576e29:::rbd_data.589572ae8944a.0000000000001207:head have 1805'179
>> flags = delete tried to add 1805'179 flags = delete
>> >     -27> 2020-03-18 18:39:09.483582 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4578057:::rbd_data.ccf5e625558ec.00000000000096a5:head have 5166'785
>> flags = delete tried to add 5166'785 flags = delete
>> >     -26> 2020-03-18 18:39:09.483595 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b457fe94:::rbd_data.589572ae8944a.0000000000004e27:head have 1805'200
>> flags = delete tried to add 1805'200 flags = delete
>> >     -25> 2020-03-18 18:39:09.483610 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4592563:::rbd_data.9117a327b23c6.000000000000b2a0:head have 3894'562
>> flags = none tried to add 3894'562 flags = none
>> >     -24> 2020-03-18 18:39:09.483625 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b45bb3ff:::rbd_data.c6c99507ed7ab.000000000000aea0:head have 5172'1141
>> flags = none tried to add 5172'1141 flags = none
>> >     -23> 2020-03-18 18:39:09.483639 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b45d7c66:::rbd_data.110706b8b4567.0000000000000e07:head have 1915'218
>> flags = none tried to add 1915'218 flags = none
>> >     -22> 2020-03-18 18:39:09.483654 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b45e1a62:::rbd_data.589572ae8944a.0000000000004733:head have 1805'198
>> flags = delete tried to add 1805'198 flags = delete
>> >     -21> 2020-03-18 18:39:09.483667 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b45f03a3:::rbd_data.914c22ae8944a.00000000000058a8:head have 3908'593
>> flags = delete tried to add 3908'593 flags = delete
>> >     -20> 2020-03-18 18:39:09.483681 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4604fdd:::rbd_data.c6c99507ed7ab.00000000000004ca:head have 5176'1523
>> flags = none tried to add 5176'1523 flags = none
>> >     -19> 2020-03-18 18:39:09.483695 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4636d5b:::rbd_data.cd282238e1f29.000000000000b6a1:head have 5165'735
>> flags = none tried to add 5165'735 flags = none
>> >     -18> 2020-03-18 18:39:09.483710 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b463cd0a:::rbd_data.9117a327b23c6.000000000000c0a8:head have 3908'563
>> flags = none tried to add 3908'563 flags = none
>> >     -17> 2020-03-18 18:39:09.483724 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b468ca37:::rbd_data.9177446e87ccd.000000000000040a:head have 4427'730
>> flags = none tried to add 4427'730 flags = none
>> >     -16> 2020-03-18 18:39:09.483738 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4696438:::rbd_data.9177446e87ccd.0000000000000c53:head have 4743'732
>> flags = none tried to add 4743'732 flags = none
>> >     -15> 2020-03-18 18:39:09.483752 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b46be061:::rbd_data.c70aa2eb141f2.0000000000001ea6:head have 5175'1287
>> flags = none tried to add 5175'1287 flags = none
>> >     -14> 2020-03-18 18:39:09.483766 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b46bfc4e:::rbd_data.cda772ae8944a.00000000000020a4:head have 5172'1144
>> flags = none tried to add 5172'1144 flags = none
>> >     -13> 2020-03-18 18:39:09.483782 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b46e7efd:::rbd_data.110706b8b4567.0000000000001c01:head have 1915'219
>> flags = none tried to add 1915'219 flags = none
>> >     -12> 2020-03-18 18:39:09.483796 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b46ece7b:::rbd_data.8ff1766334873.00000000000000ae:head have 3092'344
>> flags = delete tried to add 3092'344 flags = delete
>> >     -11> 2020-03-18 18:39:09.483811 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b4786571:::rbd_data.8f6b3643c9869.00000000000040a0:head have 3092'381
>> flags = delete tried to add 3092'381 flags = delete
>> >     -10> 2020-03-18 18:39:09.483825 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47888b8:::rbd_data.589572ae8944a.0000000000004534:head have 1805'197
>> flags = delete tried to add 1805'197 flags = delete
>> >      -9> 2020-03-18 18:39:09.483840 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b478b833:::rbd_data.589572ae8944a.00000000000048ff:head have 1805'199
>> flags = delete tried to add 1805'199 flags = delete
>> >      -8> 2020-03-18 18:39:09.483854 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47ec645:::rbd_data.8b4a76b8b4567.00000000000001d2:head have 2197'271
>> flags = none tried to add 2197'271 flags = none
>> >      -7> 2020-03-18 18:39:09.483868 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47f2043:::rbd_data.8a738625558ec.00000000000056a3:head have 3291'557
>> flags = none tried to add 3291'557 flags = none
>> >      -6> 2020-03-18 18:39:09.483882 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47f2a18:::rbd_data.9177446e87ccd.00000000000010f8:head have 4738'731
>> flags = none tried to add 4738'731 flags = none
>> >      -5> 2020-03-18 18:39:09.483896 201df5fdb40  0 0x201c4c90c90 4.22d
>> unexpected need for
>> 4:b47fc7a4:::rbd_data.58f426b8b4567.0000000000000221:head have 1789'169
>> flags = delete tried to add 1789'169 flags = delete
>> >      -4> 2020-03-18 18:39:20.985370 2000fc61b40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.11:0/3129700933
>> conn(0x200140cb3f0 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> >      -3> 2020-03-18 18:39:21.495101 2000ec1fb40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.12:0/4111063261
>> conn(0x200140c55a0 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> >      -2> 2020-03-18 18:39:21.495101 2000fc61b40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.13:0/464497787
>> conn(0x200140fd4b0 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> >      -1> 2020-03-18 18:39:21.629021 2000ec1fb40  0 --
>> 192.168.230.122:6806/1159687 >> 192.168.230.201:0/4088469422
>> conn(0x20014100b10 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0
>> l=1).handle_connect_msg: challenging authorizer
>> >       0> 2020-03-18 18:39:26.426584 201e35fdb40 -1 *** Caught signal
>> (Bus error) **
>> >   in thread 201e35fdb40 thread_name:tp_osd_tp
>> >
>> >   ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5)
>> luminous (stable)
>> >   1: (()+0x145882c) [0x2000245882c]
>> >   2: (()+0x19890) [0x2000c54b890]
>> >   3: (BlueStore::ExtentMap::reshard(KeyValueDB*,
>> std::shared_ptr<KeyValueDB::TransactionImpl>)+0x2df0) [0x2000229da60]
>> >   4: (BlueStore::_txc_write_nodes(BlueStore::TransContext*,
>> std::shared_ptr<KeyValueDB::TransactionImpl>)+0x218) [0x2000229f888]
>> >   5: (BlueStore::queue_transactions(ObjectStore::Sequencer*,
>> std::vector<ObjectStore::Transaction,
>> std::allocator<ObjectStore::Transaction> >&,
>> boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x71c)
>> [0x200022c7a6c]
>> >   6: (ObjectStore::queue_transaction(ObjectStore::Sequencer*,
>> ObjectStore::Transaction&&, Context*, Context*, Context*,
>> boost::intrusive_ptr<TrackedOp>, ThreadPool::TPHandle*)+0x240)
>> [0x20001c19ee0]
>> >   7: (PrimaryLogPG::queue_transaction(ObjectStore::Transaction&&,
>> boost::intrusive_ptr<OpRequest>)+0x90) [0x20001e871b0]
>> >   8:
>> (ReplicatedBackend::_do_push(boost::intrusive_ptr<OpRequest>)+0x730)
>> [0x2000202e970]
>> >   9:
>> (ReplicatedBackend::_handle_message(boost::intrusive_ptr<OpRequest>)+0x59c)
>> [0x200020442bc]
>> >   10: (PGBackend::handle_message(boost::intrusive_ptr<OpRequest>)+0x94)
>> [0x20001ecea74]
>> >   11: (PrimaryLogPG::do_request(boost::intrusive_ptr<OpRequest>&,
>> ThreadPool::TPHandle&)+0x814) [0x20001de1384]
>> >   12: (OSD::dequeue_op(boost::intrusive_ptr<PG>,
>> boost::intrusive_ptr<OpRequest>, ThreadPool::TPHandle&)+0x614)
>> [0x20001b817d4]
>> >   13: (PGQueueable::RunVis::operator()(boost::intrusive_ptr<OpRequest>
>> const&)+0xb8) [0x20001f98968]
>> >   14: (OSD::ShardedOpWQ::_process(unsigned int,
>> ceph::heartbeat_handle_d*)+0x1c24) [0x20001bb5fd4]
>> >   15: (ShardedThreadPool::shardedthreadpool_worker(unsigned int)+0xab4)
>> [0x200024d60a4]
>> >   16: (ShardedThreadPool::WorkThreadSharded::entry()+0x28)
>> [0x200024da278]
>> >   17: (Thread::entry_wrapper()+0xec) [0x20002769b4c]
>> >   18: (Thread::_entry_func(void*)+0x20) [0x20002769ba0]
>> >   19: (()+0x80fc) [0x2000c53a0fc]
>> >   20: (()+0x119854) [0x2000f2ad854]
>> >   NOTE: a copy of the executable, or `objdump -rdS <executable>` is
>> needed to interpret this.
>> >
>> >
>> >
>> > huxiaoyu@xxxxxxxxxxxx
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users@xxxxxxx
>> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>> _______________________________________________
>> ceph-users mailing list -- ceph-users@xxxxxxx
>> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




