ceph-fuse couldn't connect.

Thank you, Greg!

I solved it by creating an MDS.
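
For anyone who hits the same problem, a minimal sketch of the fix (ceph-deploy
and the node name "node1" are assumptions here; adjust for your deployment):

# ceph-deploy mds create node1
# ceph mds stat

Once "ceph mds stat" reports an MDS as up:active, ceph-fuse can finish the
mount.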

- Jae


On Wed, Jul 16, 2014 at 8:36 PM, Gregory Farnum <greg at inktank.com> wrote:

> Your MDS isn't running or isn't active.
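> A quick way to confirm is something like (a sketch; the exact output varies
> by version):
>
> # ceph mds stat
>
> If nothing is reported as up:active, the client will just keep waiting on
> the mdsmap, which matches your log.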
> -Greg
>
>
> On Wednesday, July 16, 2014, Jaemyoun Lee <jaemyoun at gmail.com> wrote:
>
>>
>> The result is the same.
>>
>> # ceph-fuse --debug-ms 1 --debug-client 10 -m 192.168.122.106:6789 /mnt
>> ceph-fuse[3296] :  starting ceph client
>>
>> And the log file is
>>
>> # cat /var/log/ceph/ceph-client.admin.log
>> 2014-07-16 17:08:13.146032 7f9a212f87c0  0 ceph version 0.80.1
>> (a38fe1169b6d2ac98b427334c12d7cf81f809b74), process ceph-fuse, pid 3294
>> 2014-07-16 17:08:13.156429 7f9a212f87c0  1 -- :/0 messenger.start
>> 2014-07-16 17:08:13.157537 7f9a212f87c0  1 -- :/3296 -->
>> 192.168.122.106:6789/0 -- auth(proto 0 30 bytes epoch 0) v1 -- ?+0
>> 0x7f9a23c0e6c0 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.158198 7f9a212f6700  1 -- 192.168.122.166:0/3296
>> learned my addr 192.168.122.166:0/3296
>> 2014-07-16 17:08:13.158505 7f9a167fc700 10 client.-1 ms_handle_connect on
>> 192.168.122.106:6789/0
>> 2014-07-16 17:08:13.159083 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 1 ==== mon_map v1 ==== 193+0+0 (4132823754
>> 0 0) 0x7f9a00000ab0 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.159182 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1
>> ==== 33+0+0 (1915318666 0 0) 0x7f9a00000f60 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.159375 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
>> 192.168.122.106:6789/0 -- auth(proto 2 32 bytes epoch 0) v1 -- ?+0
>> 0x7f9a0c0013a0 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.159845 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 3 ==== auth_reply(proto 2 0 (0) Success) v1
>> ==== 206+0+0 (2967970554 0 0) 0x7f9a00000f60 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.159976 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
>> 192.168.122.106:6789/0 -- auth(proto 2 165 bytes epoch 0) v1 -- ?+0
>> 0x7f9a0c001ec0 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.160810 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 4 ==== auth_reply(proto 2 0 (0) Success) v1
>> ==== 409+0+0 (3799435439 0 0) 0x7f9a000011d0 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.160945 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
>> 192.168.122.106:6789/0 -- mon_subscribe({osdmap=0}) v2 -- ?+0
>> 0x7f9a23c102c0 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.160979 7f9a167fc700  1 -- 192.168.122.166:0/3296 -->
>> 192.168.122.106:6789/0 -- mon_subscribe({mdsmap=0+,osdmap=0}) v2 -- ?+0
>> 0x7f9a23c10630 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.161033 7f9a212f87c0  2 client.4705 mounted: have
>> osdmap 0 and mdsmap 0
>> 2014-07-16 17:08:13.161056 7f9a212f87c0 10 client.4705 did not get mds
>> through better means, so chose random mds -1
>> 2014-07-16 17:08:13.161059 7f9a212f87c0 10 client.4705  target mds.-1 not
>> active, waiting for new mdsmap
>> 2014-07-16 17:08:13.161668 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 5 ==== osd_map(45..45 src has 1..45) v3
>> ==== 3907+0+0 (2386867192 0 0) 0x7f9a00002060 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.161843 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 6 ==== mdsmap(e 1) v1 ==== 396+0+0
>> (394292161 0 0) 0x7f9a00002500 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.161861 7f9a167fc700  1 client.4705 handle_mds_map
>> epoch 1
>> 2014-07-16 17:08:13.161884 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 7 ==== osd_map(45..45 src has 1..45) v3
>> ==== 3907+0+0 (2386867192 0 0) 0x7f9a000037a0 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.161900 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 8 ==== mon_subscribe_ack(300s) v1 ====
>> 20+0+0 (4226112827 0 0) 0x7f9a00003c40 con 0x7f9a23c0dd30
>> 2014-07-16 17:08:13.161932 7f9a212f87c0 10 client.4705 did not get mds
>> through better means, so chose random mds -1
>> 2014-07-16 17:08:13.161942 7f9a212f87c0 10 client.4705  target mds.-1 not
>> active, waiting for new mdsmap
>> 2014-07-16 17:08:14.161453 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:08:34.166977 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:08:54.171234 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:09:14.174106 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:09:34.177062 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:09:54.179365 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:10:14.181731 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:10:34.184270 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:10:46.161158 7f9a15ffb700  1 -- 192.168.122.166:0/3296 -->
>> 192.168.122.106:6789/0 -- mon_subscribe({mdsmap=2+,monmap=2+}) v2 -- ?+0
>> 0x7f99f8002c50 con 0x7f9a23c0dd30
>> 2014-07-16 17:10:46.161770 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 9 ==== mon_subscribe_ack(300s) v1 ====
>> 20+0+0 (4226112827 0 0) 0x7f9a00003c40 con 0x7f9a23c0dd30
>> 2014-07-16 17:10:54.186908 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:11:14.189613 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:11:34.192055 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:11:54.194663 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:12:14.196991 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:12:34.199710 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:12:54.202389 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:13:14.205054 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:13:16.164362 7f9a15ffb700  1 -- 192.168.122.166:0/3296 -->
>> 192.168.122.106:6789/0 -- mon_subscribe({mdsmap=2+,monmap=2+}) v2 -- ?+0
>> 0x7f99f8005030 con 0x7f9a23c0dd30
>> 2014-07-16 17:13:16.165178 7f9a167fc700  1 -- 192.168.122.166:0/3296 <==
>> mon.0 192.168.122.106:6789/0 10 ==== mon_subscribe_ack(300s) v1 ====
>> 20+0+0 (4226112827 0 0) 0x7f9a00003c40 con 0x7f9a23c0dd30
>> 2014-07-16 17:13:34.207622 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:13:54.209995 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:14:14.212277 7f9a177fe700 10 client.4705 renew_caps()
>> 2014-07-16 17:14:34.214827 7f9a177fe700 10 client.4705 renew_caps()
>>
>> Thx~
>> - Jae
>>
>>
>>
>> On Wed, Jul 16, 2014 at 2:20 AM, Gregory Farnum <greg at inktank.com> wrote:
>>
>>> On Tue, Jul 15, 2014 at 10:15 AM, Jaemyoun Lee <jaemyoun at gmail.com>
>>> wrote:
>>> > There is no output because ceph-fuse fell into an infinite while
>>> > loop, as I explain below.
>>> >
>>> > Where can I find the log file of ceph-fuse?
>>>
>>> It defaults to /var/log/ceph, but it may be empty. I realize the task
>>> may have hung, but I'm pretty sure it isn't looping, just waiting on
>>> some kind of IO. You could try running it with the "--debug-ms 1
>>> --debug-client 10" command-line options appended and see what it spits
>>> out.
>>> -Greg
>>> Software Engineer #42 @ http://inktank.com | http://ceph.com
>>>
>>
>>
>>
>> --
>>   Jaemyoun Lee
>>
>>   E-mail : jaemyoun at gmail.com
>>   Homepage : http://jaemyoun.com
>>   Facebook :  https://www.facebook.com/jaemyoun
>>
>
>
> --
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>



-- 
  Jaemyoun Lee

  E-mail : jaemyoun at gmail.com
  Homepage : http://jaemyoun.com
  Facebook :  https://www.facebook.com/jaemyoun

