ceph-fuse Crashed When Reading, and How to Back Up the Data

When I read a file through ceph-fuse, the process crashed.

Here is the log:
====================
terminate called after throwing an instance of 'ceph::buffer::end_of_buffer'
  what():  buffer::end_of_buffer
*** Caught signal (Aborted) **
 in thread 7fe0814d3700
 ceph version 0.94.1 (e4bfad3a3c51054df7e537a724c8d0bf9be972ff)
 1: (()+0x249805) [0x7fe08670b805]
 2: (()+0x10d10) [0x7fe085c39d10]
 3: (gsignal()+0x37) [0x7fe0844d3267]
 4: (abort()+0x16a) [0x7fe0844d4eca]
 5: (__gnu_cxx::__verbose_terminate_handler()+0x16d) [0x7fe084de706d]
 6: (()+0x5eee6) [0x7fe084de4ee6]
 7: (()+0x5ef31) [0x7fe084de4f31]
 8: (()+0x5f149) [0x7fe084de5149]
 9: (ceph::buffer::list::substr_of(ceph::buffer::list const&, unsigned int, unsigned int)+0x24b) [0x7fe08688993b]
 10: (ObjectCacher::_readx(ObjectCacher::OSDRead*, ObjectCacher::ObjectSet*, Context*, bool)+0x1423) [0x7fe0866c6b73]
 11: (ObjectCacher::C_RetryRead::finish(int)+0x20) [0x7fe0866cd870]
 12: (Context::complete(int)+0x9) [0x7fe086687eb9]
 13: (void finish_contexts<Context>(CephContext*, std::list<Context*, std::allocator<Context*> >&, int)+0xac) [0x7fe0866ca73c]
 14: (ObjectCacher::bh_read_finish(long, sobject_t, unsigned long, long, unsigned long, ceph::buffer::list&, int, bool)+0x29e) [0x7fe0866bfd2e]
 15: (ObjectCacher::C_ReadFinish::finish(int)+0x7f) [0x7fe0866cc85f]
 16: (Context::complete(int)+0x9) [0x7fe086687eb9]
 17: (C_Lock::finish(int)+0x29) [0x7fe086688269]
 18: (Context::complete(int)+0x9) [0x7fe086687eb9]
 19: (Finisher::finisher_thread_entry()+0x1b4) [0x7fe08671f184]
 20: (()+0x76aa) [0x7fe085c306aa]
 21: (clone()+0x6d) [0x7fe0845a4eed]
=============================
This part of the debug log may be interesting:
-11> 2015-04-30 15:55:59.063828 7fd6a816c700 10 -- 172.30.11.188:0/10443 >> 172.16.3.153:6820/1532355 pipe(0x7fd6740344c0 sd=8 :58596 s=2 pgs=3721 cs=1 l=1 c=0x7fd674038760).reader got message 1 0x7fd65c001940 osd_op_reply(1 10000000019.00000000 [read 0~4390] v0'0 uv0 ack = -1 ((1) Operation not permitted)) v6
-10> 2015-04-30 15:55:59.063848 7fd6a816c700 1 -- 172.30.11.188:0/10443 <== osd.9 172.16.3.153:6820/1532355 1 ==== osd_op_reply(1 10000000019.00000000 [read 0~4390] v0'0 uv0 ack = -1 ((1) Operation not permitted)) v6 ==== 187+0+0 (689339676 0 0) 0x7fd65c001940 con 0x7fd674038760
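Both lines show osd.9 rejecting the read with -1 ((1) Operation not permitted), so I wonder whether the client key's OSD caps are to blame. For reference, this is roughly the check I have in mind, assuming the mount uses a key named client.cephfs (substitute whatever key the mount actually uses) and that the filesystem's data pool is named "data":

  # show the caps currently attached to the key
  ceph auth get client.cephfs
  # grant read/write on the data pool if it is missing
  ceph auth caps client.cephfs mon 'allow r' mds 'allow' osd 'allow rw pool=data'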


And the CephFS journal seems okay.
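(By "seems okay" I mean an integrity check along these lines, assuming the cephfs-journal-tool shipped with this release:

  cephfs-journal-tool journal inspect

which should report something like "Overall journal integrity: OK".)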

Could anyone tell me why this is happening?

More importantly: does Ceph offer any tool to export CephFS data directly from the underlying pools?
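If there is no dedicated tool, I assume the raw objects could be pulled out by hand with rados. A rough sketch of what I mean, assuming the data pool is named "data" and using the inode from the log above (CephFS stores each file as objects named <inode-hex-id>.<stripe-index>):

  # list the stripe objects belonging to inode 0x10000000019
  rados -p data ls | grep '^10000000019\.'
  # fetch one stripe object; repeat for each index and concatenate in order
  rados -p data get 10000000019.00000000 10000000019.00000000.bin

Is there anything less manual than this?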

Thanks very much!




