osd down when running fio randwrite 4k using bluestore

Hi,
  I set up a Luminous Ceph cluster whose OSDs use BlueStore, and created a CephFS whose metadata pool is replicated and whose data pool is erasure-coded:
   pool 1 'metadata' replicated size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 52 flags hashpspool stripe_width 0
   pool 2 'EC_2_1_8' erasure size 3 min_size 2 crush_ruleset 2 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 56 flags hashpspool,ec_overwrites stripe_width 8192 expected_num_objects 27000000
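
  For reference, pools like these can be created on Luminous roughly as follows. This is only a sketch: the profile name ec21 and the k=2/m=1 values are inferred from the pool name EC_2_1_8 and stripe_width 8192 (stripe_width = k * stripe_unit), so adjust to your environment:

   ceph osd pool create metadata 1024 1024 replicated
   ceph osd pool set metadata size 2
   ceph osd pool set metadata min_size 1
   ceph osd erasure-code-profile set ec21 k=2 m=1 stripe_unit=4096
   ceph osd pool create EC_2_1_8 1024 1024 erasure ec21
   ceph osd pool set EC_2_1_8 allow_ec_overwrites true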
  When I run fio to test the cluster, the command is
   fio --numjobs=16 --iodepth=16 --ioengine=libaio --runtime=600 --direct=1 --group_reporting --rw=randwrite --bs=4k --name=aa --filename=/ec/1.txt --size=500G

  After running for a while, some OSDs are reported down.
  The BlueStore log shows:

2017-07-08 11:29:33.948698 7fb8e7682700 20 bluefs _flush_and_sync_log cleaned file file(ino 30 size 0x2f44a1 mtime 2017-07-08 11:29:33.925066 bdev 1 extents [1:0x2300000+100000,1:0x2900000+200000])
2017-07-08 11:29:33.948722 7fb8e7682700 20 bluestore(/var/lib/ceph/osd/ceph-3) _kv_sync_thread committed 1 cleaned 1 in 0.023813 (0.000012 flush + 0.023800 kv commit)
2017-07-08 11:29:33.948726 7fb8e7682700 10 bluestore(/var/lib/ceph/osd/ceph-3) _txc_state_proc txc 0x56545091e680 kv_submitted
2017-07-08 11:29:33.948728 7fb8e7682700 20 bluestore(/var/lib/ceph/osd/ceph-3) _txc_committed_kv txc 0x56545091e680
2017-07-08 11:29:33.948741 7fb8e7682700 20 bluestore(/var/lib/ceph/osd/ceph-3) _deferred_queue txc 0x56545091e680 osr 0x56544e271800
2017-07-08 11:29:33.948758 7fb8e7682700 20 bluestore.DeferredBatch(0x56544e2fc880) prepare_write seq 155 0xdc5f0000~4000 crc 9d8983a0
2017-07-08 11:29:33.948784 7fb8e7682700 10 bluestore(/var/lib/ceph/osd/ceph-3) _txc_state_proc txc 0x56545091eec0 deferred_cleanup
2017-07-08 11:29:33.948787 7fb8e7682700 20 bluestore(/var/lib/ceph/osd/ceph-3) _txc_finish 0x56545091eec0 onodes 0x5654516618c0
2017-07-08 11:29:33.948790 7fb8e7682700 20 bluestore.BufferSpace(0x565450608698 in 0x56544dc495e0) finish_write discard buffer(0x56545177e010 space 0x565450608698 0x0~80000 writing nocache)
2017-07-08 11:29:33.948804 7fb8e7682700 20 bluestore.BufferSpace(0x565450608858 in 0x56544dc495e0) finish_write discard buffer(0x56545177def0 space 0x565450608858 0x0~30000 writing nocache)
2017-07-08 11:29:33.948812 7fb8e7682700 20 bluestore.BufferSpace(0x565450609738 in 0x56544dc495e0) finish_write discard buffer(0x56545177df80 space 0x565450609738 0x0~80000 writing nocache)
2017-07-08 11:29:33.948822 7fb8e7682700 20 bluestore.BufferSpace(0x565450609c08 in 0x56544dc495e0) finish_write discard buffer(0x56545177de60 space 0x565450609c08 0x30000~7000 writing nocache)
2017-07-08 11:29:33.948836 7fb8e7682700 20 bluestore(/var/lib/ceph/osd/ceph-3) _txc_finish  txc 0x56545091eec0 done
2017-07-08 11:29:33.948838 7fb8e7682700 20 bluestore(/var/lib/ceph/osd/ceph-3) _txc_finish  txc 0x56544ddbd440 done
2017-07-08 11:29:33.948840 7fb8e7682700 20 bluestore(/var/lib/ceph/osd/ceph-3) _txc_finish  txc 0x56545091e680 deferred_queued
2017-07-08 11:29:33.948842 7fb8e7682700 10 bluestore(/var/lib/ceph/osd/ceph-3) _txc_release_alloc 0x56545091eec0 []
2017-07-08 11:29:33.948864 7fb8e7682700 10 bluestore(/var/lib/ceph/osd/ceph-3) _txc_release_alloc 0x56544ddbd440 []

  Has anyone else run into this problem?
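
  To confirm which OSDs were marked down and to search an OSD log for the reason, something like the following can be used (assuming the default /var/log/ceph log path; osd.3 is just the OSD from the log excerpt above):

   ceph -s
   ceph health detail
   ceph osd tree
   grep -iE 'heartbeat|suicide|timed out' /var/log/ceph/ceph-osd.3.log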

-------------------------------------------------------------------------------------------------------------------------------------
This e-mail and its attachments contain confidential information from New H3C, which is
intended only for the person or entity whose address is listed above. Any use of the
information contained herein in any way (including, but not limited to, total or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender
by phone or email immediately and delete it!



