Re: Bluestore OSD support in ceph-disk

If you are not running the latest master, could you please retry with it?
https://github.com/ceph/ceph/pull/11095 should solve the problem.

If you are still hitting the problem with the latest master, please post the
logs somewhere shared, like Google Drive or a pastebin.

Varada

On Monday 19 September 2016 05:58 AM, Kamble, Nitin A wrote:
> I find that the ceph-osd processes taking 100% CPU all have the same last line in their logs.
>
>
> It means log rotation has been triggered, and it is taking forever to finish (the signal handling behind this is sketched after the quoted message).
> host5:~ # ls -lh /var/log/ceph/ceph-osd.24*
> -rw-r----- 1 ceph ceph    0 Sep 18 17:00 /var/log/ceph/ceph-osd.24.log
> -rw-r----- 1 ceph ceph 1.4G Sep 18 17:00 /var/log/ceph/ceph-osd.24.log-20160918
>
> host5:~ # tail /var/log/ceph/ceph-osd.24.log-20160918
> 2016-09-18 11:36:18.292275 7fab858dc700 10 bluefs get_usage bdev 2 free 160031571968 (149 GB) / 160032612352 (149 GB), used 0%
> 2016-09-18 11:36:18.292279 7fab858dc700 10 bluefs _flush 0x7fac47a5dd00 ignoring, length 3310 < min_flush_size 65536
> 2016-09-18 11:36:18.292280 7fab858dc700 10 bluefs _flush 0x7fac47a5dd00 ignoring, length 3310 < min_flush_size 65536
> 2016-09-18 11:36:18.292281 7fab858dc700 10 bluefs _fsync 0x7fac47a5dd00 file(ino 24 size 0x3d7cdc5 mtime 2016-09-18 11:36:04.164949 bdev 0 extents [0:0xe100000+d00000,0:0xf200000+e00000,1:0x10000000+2100000,0:0x100000+200000])
> 2016-09-18 11:36:18.292286 7fab858dc700 10 bluefs _flush 0x7fac47a5dd00 0x1b10000~cee to file(ino 24 size 0x3d7cdc5 mtime 2016-09-18 11:36:04.164949 bdev 0 extents [0:0xe100000+d00000,0:0xf200000+e00000,1:0x10000000+2100000,0:0x100000+200000])
> 2016-09-18 11:36:18.292289 7fab858dc700 10 bluefs _flush_range 0x7fac47a5dd00 pos 0x1b10000 0x1b10000~cee to file(ino 24 size 0x3d7cdc5 mtime 2016-09-18 11:36:04.164949 bdev 0 extents [0:0xe100000+d00000,0:0xf200000+e00000,1:0x10000000+2100000,0:0x100000+200000])
> 2016-09-18 11:36:18.292292 7fab858dc700 20 bluefs _flush_range file now file(ino 24 size 0x3d7cdc5 mtime 2016-09-18 11:36:04.164949 bdev 0 extents [0:0xe100000+d00000,0:0xf200000+e00000,1:0x10000000+2100000,0:0x100000+200000])
> 2016-09-18 11:36:18.292296 7fab858dc700 20 bluefs _flush_range in 1:0x10000000+2100000 x_off 0x10000
> 2016-09-18 11:36:18.292297 7fab858dc700 20 bluefs _flush_range caching tail of 0xcee and padding block with zeros
> 2016-09-18 17:00:01.276990 7fab738b8700 -1 received  signal: Hangup from  PID: 89063 task name: killall -q -1 ceph-mon ceph-mds ceph-osd ceph-fuse radosgw  UID: 0
>
> Further, one of the OSD processes has crashed with this in the log (the AIO submit pattern behind the assert is also sketched after the message):
>
> 2016-09-18 13:30:11.274012 7fdf399b8700 -1 /build/nitin/nightly_builds/20160914_125459-master/ceph.git/rpmbuild/BUILD/ceph-v11.0.0-2309.g9096ad3/src/os/bluestore/KernelDevice.cc: In function 'virtual void KernelDevice::aio_submit(IOContext*)' thread 7fdf399b8700 time 2016-09-18 13:30:11.270019
> /build/nitin/nightly_builds/20160914_125459-master/ceph.git/rpmbuild/BUILD/ceph-v11.0.0-2309.g9096ad3/src/os/bluestore/KernelDevice.cc: 370: FAILED assert(r == 0)
>
>  ceph version v11.0.0-2309-g9096ad3 (9096ad37f2c0798c26d7784fb4e7a781feb72cb8)
>  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) [0x7fdf4f73811b]
>  2: (KernelDevice::aio_submit(IOContext*)+0x76d) [0x7fdf4f597dbd]
>  3: (BlueFS::_flush_range(BlueFS::FileWriter*, unsigned long, unsigned long)+0xcbd) [0x7fdf4f575b6d]
>  4: (BlueFS::_flush(BlueFS::FileWriter*, bool)+0xe9) [0x7fdf4f576c79]
>  5: (BlueFS::_fsync(BlueFS::FileWriter*, std::unique_lock<std::mutex>&)+0x6d) [0x7fdf4f579a6d]
>  6: (BlueRocksWritableFile::Sync()+0x4e) [0x7fdf4f58f25e]
>  7: (rocksdb::WritableFileWriter::SyncInternal(bool)+0x139) [0x7fdf4f686699]
>  8: (rocksdb::WritableFileWriter::Sync(bool)+0x88) [0x7fdf4f687238]
>  9: (rocksdb::DBImpl::WriteImpl(rocksdb::WriteOptions const&, rocksdb::WriteBatch*, rocksdb::WriteCallback*, unsigned long*, unsigned long, bool)+0x13cf) [0x7fdf4f5dea2f]
>  10: (rocksdb::DBImpl::Write(rocksdb::WriteOptions const&, rocksdb::WriteBatch*)+0x27) [0x7fdf4f5df637]
>  11: (RocksDBStore::submit_transaction_sync(std::shared_ptr<KeyValueDB::TransactionImpl>)+0x5b) [0x7fdf4f51814b]
>  12: (BlueStore::_kv_sync_thread()+0xf5a) [0x7fdf4f4e5ffa]
>  13: (BlueStore::KVSyncThread::entry()+0xd) [0x7fdf4f4f3a6d]
>  14: (()+0x80a4) [0x7fdf4b7a50a4]
>  15: (clone()+0x6d) [0x7fdf4a61e04d]
>  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
>
> This time I have captured the log with debug bluefs = 20/20.
>
> Is there a good place where I can upload the tail of the log for sharing?
>
> Thanks,
> Nitin
>
>
>
>
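For anyone reading the trace above: the "received signal: Hangup" line is just log rotation at work. The logrotate postrotate script runs killall -1 against the Ceph daemons, and each daemon reopens its log file when it catches SIGHUP. Below is a minimal sketch of that reopen-on-SIGHUP pattern, assuming a standard logrotate setup; it is an illustration only, not Ceph's actual handler, which goes through its own logging subsystem.

// Minimal sketch of reopening a log file on SIGHUP -- the mechanism
// behind the "received signal: Hangup" line above. Illustration only;
// Ceph's real handler lives in its own logging subsystem.
#include <atomic>
#include <chrono>
#include <csignal>
#include <cstdio>
#include <thread>

static std::atomic<bool> reopen_log{false};

extern "C" void handle_hup(int) {
  reopen_log = true;  // fopen() is not async-signal-safe, so only set a flag
}

int main() {
  std::signal(SIGHUP, handle_hup);
  FILE* log = std::fopen("/var/log/ceph/ceph-osd.24.log", "a");
  while (true) {
    if (reopen_log.exchange(false)) {
      std::fclose(log);  // the old file was already renamed by logrotate
      log = std::fopen("/var/log/ceph/ceph-osd.24.log", "a");  // new inode
    }
    std::fputs("tick\n", log);  // stand-in for real daemon work
    std::fflush(log);
    std::this_thread::sleep_for(std::chrono::seconds(1));
  }
}

The reopen itself is cheap, so a daemon still spinning at 100% CPU after the HUP is most likely stuck somewhere else.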
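As for the crash: assert(r == 0) in KernelDevice::aio_submit() fires when the kernel AIO submission path returns an error instead of queuing the I/O. Here is a rough sketch of the underlying Linux libaio pattern that such an assert guards; this is a hypothetical illustration with made-up file names, not the KernelDevice code, which submits through Ceph's own aio queue.

// Rough illustration of the Linux AIO submit pattern that the
// KernelDevice assert guards. Hypothetical sketch, not Ceph code.
// Build with: g++ aio_sketch.cc -laio
#include <libaio.h>
#include <fcntl.h>
#include <cassert>
#include <cstring>

int main() {
  io_context_t ctx = 0;
  int r = io_setup(128, &ctx);  // create a kernel AIO context
  assert(r == 0);

  // Needs a filesystem that supports O_DIRECT (not tmpfs).
  int fd = open("aio_test.bin", O_RDWR | O_CREAT | O_DIRECT, 0600);
  assert(fd >= 0);

  alignas(4096) static char buf[4096];  // O_DIRECT requires aligned I/O
  memset(buf, 0, sizeof(buf));

  struct iocb cb;
  struct iocb* cbs[1] = { &cb };
  io_prep_pwrite(&cb, fd, buf, sizeof(buf), 0);

  // io_submit() returns the number of iocbs queued, or a negative
  // errno (e.g. -EAGAIN when the context's event queue is full).
  // A wrapper that expects full submission asserts on anything else --
  // the analogue of the FAILED assert(r == 0) in the trace above.
  r = io_submit(ctx, 1, cbs);
  assert(r == 1);

  struct io_event ev;
  r = io_getevents(ctx, 1, 1, &ev, nullptr);  // wait for completion
  assert(r == 1);

  io_destroy(ctx);
  return 0;
}

With debug bluefs = 20/20, the _flush_range lines just before the assert should show which extents were being submitted, which is why the tail of that log is the useful part to share.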
