Re: osd crash because rocksdb report ‘Compaction error: Corruption: block checksum mismatch’


 



On Fri, 15 Sep 2017, wei.qiaomiao@xxxxxxxxxx wrote:
> 
> Hi, all,
> 
>    My cluster is running 12.2.0 with BlueStore. We ran an IO test yesterday
> with fio using the librbd ioengine, and several OSDs crashed one after
> another.
> 
>    3 nodes, 30 OSDs; 1 TB SATA HDD for OSD data, 1 GB SATA SSD partition for
> db, 576 MB SATA SSD partition for WAL.
> 
>    ceph options:
> 
>    bluestore_shard_finishers = true
>    mon_osd_prime_pg_temp = false
>    mon_allow_pool_delete = true
>    mgr_op_latency_sample_interval = 300
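
For context, options like those quoted above would normally live in ceph.conf. A
minimal sketch, assuming they were set cluster-wide under [global]; the section
placement is an assumption, not stated in the report:

   [global]
   # non-default options as quoted in the report above
   bluestore_shard_finishers = true
   mon_osd_prime_pg_temp = false
   mon_allow_pool_delete = true
   mgr_op_latency_sample_interval = 300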
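
The report does not include the fio job file itself; the following is only a sketch
of what a librbd-engine fio job of the kind described typically looks like. Pool,
image, client name, job name, and I/O pattern are placeholders, not values taken
from the report:

   [global]
   # fio's librbd engine
   ioengine=rbd
   # placeholder Ceph client id and pool/image (the image must already exist)
   clientname=admin
   pool=rbd
   rbdname=fio_test
   rw=randwrite
   bs=4k
   iodepth=32
   runtime=300
   time_based

   # placeholder job name
   [rbd-write-test]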

Did all of the crashed OSDs have the same rocksdb corruption error?  What kind 
of hardware (or VMs) are you using?

Also,

rocksdb:[/clove/vm/clove/ceph/rpmbuild/BUILD/ceph-12.2.0/src/rocksdb/db/compaction_job.cc:1403]  
...

It looks like this is a custom build. Are there any changes to the 
source code?

Thanks!
sage
