Re: operate one file in multi clients with libceph

Hi,

    Could anyone give me an answer?

   My application needs to open one file from two clients on different
hosts: one client for read/write and one for read-only. The client could
be developed on top of libceph or librbd.
   I tried librbd, and the exception appears there too.
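For reference, this is roughly the access pattern I mean, as a minimal
libcephfs sketch (the path /test_file is a placeholder, error handling
is abbreviated, and a reachable cluster with a default ceph.conf is
assumed):

```c
/* Writer side (client A) -- minimal sketch of the pattern, not a
 * complete program. Client B runs the read-only variant shown in
 * the comment below, on a different host. */
#include <fcntl.h>
#include <string.h>
#include <cephfs/libcephfs.h>

int main(void) {
    struct ceph_mount_info *cmount;
    const char *test_path = "/test_file";   /* placeholder path */
    char buf[] = "hello from client A";

    if (ceph_create(&cmount, NULL) < 0)
        return 1;
    ceph_conf_read_file(cmount, NULL);      /* default ceph.conf */
    if (ceph_mount(cmount, "/") < 0)
        return 1;

    /* Client A: read/write open, as described above */
    int fd = ceph_open(cmount, test_path, O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return 1;
    ceph_write(cmount, fd, buf, strlen(buf), 0);

    /* Client B (other host) does the read-only side:
     *   int fd = ceph_open(cmount, test_path, O_RDONLY, 0);
     *   ceph_read(cmount, fd, buf, sizeof(buf), 0);
     */

    ceph_close(cmount, fd);
    ceph_shutdown(cmount);
    return 0;
}
```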


Thanks very much! I really need your help...

Simon



2011/5/17 Simon Tian <aixt2006@xxxxxxxxx>:
> Hi folks,
>
>     When I write and read a file from client A, opened with
> ceph_open(test_path, O_RDWR|O_CREAT, 0), and read the same file from
> client B, opened with ceph_open(test_path, O_RDONLY, 0). Client A and
> B are running on different hosts.
>     After a while, client A throws an exception. The exception
> appears every time.
>
>     The backtrace in Ceph is:
>
> (gdb) bt
> #0  0x000000367fa30265 in raise () from /lib64/libc.so.6
> #1  0x000000367fa31d10 in abort () from /lib64/libc.so.6
> #2  0x0000003682ebec44 in __gnu_cxx::__verbose_terminate_handler() ()
> from /usr/lib64/libstdc++.so.6
> #3  0x0000003682ebcdb6 in ?? () from /usr/lib64/libstdc++.so.6
> #4  0x0000003682ebcde3 in std::terminate() () from /usr/lib64/libstdc++.so.6
> #5  0x0000003682ebceca in __cxa_throw () from /usr/lib64/libstdc++.so.6
> #6  0x00007ffff7c51a78 in ceph::__ceph_assert_fail
> (assertion=0x7ffff7ce0a1c "r == 0", file=0x7ffff7ce0a02
> "common/Mutex.h", line=118,
>     func=0x7ffff7ce0ca0 "void Mutex::Lock(bool)") at common/assert.cc:86
> #7  0x00007ffff7b1ee1c in Mutex::Lock (this=0x6293f0,
> no_lockdep=false) at common/Mutex.h:118
> #8  0x00007ffff7b395f4 in Client::sync_write_commit (this=0x629090,
> in=0x7ffff0001b50) at client/Client.cc:4979
> #9  0x00007ffff7baf304 in C_Client_SyncCommit::finish
> (this=0x7ffff3206300) at client/Client.cc:4973
> #10 0x00007ffff7c951d5 in Objecter::handle_osd_op_reply
> (this=0x62b420, m=0x632190) at osdc/Objecter.cc:806
> #11 0x00007ffff7b56038 in Client::ms_dispatch (this=0x629090,
> m=0x632190) at client/Client.cc:1414
> #12 0x00007ffff7bcb01d in Messenger::ms_deliver_dispatch
> (this=0x628350, m=0x632190) at msg/Messenger.h:98
> #13 0x00007ffff7bb262b in SimpleMessenger::dispatch_entry
> (this=0x628350) at msg/SimpleMessenger.cc:352
> #14 0x00007ffff7b22641 in SimpleMessenger::DispatchThread::entry
> (this=0x6287d8) at msg/SimpleMessenger.h:533
> #15 0x00007ffff7b5aa28 in Thread::_entry_func (arg=0x6287d8) at
> ./common/Thread.h:41
> #16 0x00000036802064a7 in start_thread () from /lib64/libpthread.so.0
> #17 0x000000367fad3c2d in clone () from /lib64/libc.so.6
>
> So is there any way to avoid this?
>
>
> Thanks!
> Simon
>

