Re: ceph-fuse auto down

I don't think that caused the problem, because of what you told me before:

> All clients use the same ceph-fuse version. All of them are troubled by
> this problem; just the crash times differ. <--

But it's better to double-check. 

So can you change that schedule to a time when you are able to monitor the client, to see whether it causes the problem or not?
And can you make sure the module is there now?
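
For example (assuming the module in question is the fuse kernel module, and an
Ubuntu-style /etc/crontab -- just a sketch, not the only way to do it), a quick
check could be:

  # is the fuse kernel module loaded on the client?
  lsmod | grep fuse

  # in /etc/crontab, move the cron.daily run to a time you can watch, e.g.:
  25 14   * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
  # note: if anacron is installed, the daily time comes from /etc/anacrontab instead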

Shinobu

----- Original Message -----
From: "谷枫" <feicheche@xxxxxxxxx>
To: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, September 14, 2015 10:57:31 AM
Subject: Re:  ceph-fuse auto down

The logrotate runs at 6:25 every day (I saw this in the crontab -- the cron.daily line below).

cat /etc/crontab

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# m h dom mon dow user command
17 * * * * root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6    1 * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
#

This time mostly matches the crash time.
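
To double-check that they really line up, I think I can grep the logs for the
cron.daily / anacron run and for the ceph-fuse crash (a rough sketch, assuming
the default Ubuntu syslog and apport crash locations):

  # when did cron.daily actually run?
  grep -iE 'anacron|cron.daily' /var/log/syslog

  # when did ceph-fuse go down? apport writes its report here
  ls -l /var/crash/
  grep ceph-fuse /var/log/syslog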

2015-09-14 9:48 GMT+08:00 谷枫 <feicheche@xxxxxxxxx>:

> Hi, Shinobu
>
> I found the logrotate script at /etc/logrotate.d/ceph. In this script the
> osd, mon and mds daemons are reloaded when the rotation is done.
> The logrotate run and the ceph-fuse crashes happen at roughly the same time.
> So I think the problem is related to this.
> What do you think?
>
>
> The code snippet in /etc/logrotate.d/ceph:
> *******************************************
> for daemon in osd mon mds ; do
>   find -L /var/lib/ceph/$daemon/ -mindepth 1 -maxdepth 1 \
>     -regextype posix-egrep -regex '.*/[A-Za-z0-9]+-[A-Za-z0-9._-]+' -printf '%P\n' \
>     | while read f; do
>         if [ -e "/var/lib/ceph/$daemon/$f/done" -o -e "/var/lib/ceph/$daemon/$f/ready" ] \
>            && [ -e "/var/lib/ceph/$daemon/$f/upstart" ] \
>            && [ ! -e "/var/lib/ceph/$daemon/$f/sysvinit" ]; then
>           cluster="${f%%-*}"
>           id="${f#*-}"
>           initctl reload ceph-$daemon cluster="$cluster" id="$id" 2>/dev/null || :
>         fi
>       done
> done
> *******************************************
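>
> To reproduce this (just a rough idea, assuming the default config path above),
> I think I could force a rotation by hand on an osd/mon/mds node while watching
> the ceph-fuse clients:
>
>   # run the ceph logrotate config immediately instead of waiting for cron
>   sudo logrotate -f /etc/logrotate.d/ceph
>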
> Thank you!
>
>
>
> 2015-09-14 8:50 GMT+08:00 谷枫 <feicheche@xxxxxxxxx>:
>
>> I mount the filesystem locally with this command: ceph-fuse -k
>> /etc/ceph.new/ceph.client.admin.keyring -m 10.3.1.11,10.3.1.12,10.3.1.13:6789 /data
>>
>> The key is right.
>> I attached client1.tar in my last mail. Please check it. Thank you!
>>
>> 2015-09-13 15:12 GMT+08:00 Shinobu Kinjo <skinjo@xxxxxxxxxx>:
>>
>>> How do you mount the filesystem locally?
>>>
>>> Make sure the keyring is located at:
>>>
>>>   /etc/ceph.new/ceph.client.admin.keyring
>>>
>>> And that your cluster and public networks are fine.
>>>
>>> If you face the same problem again, check:
>>>
>>>   uptime
>>>
>>> And how about this:
>>>
>>> >   tar cvf <host name>.tar \
>>> >   /sys/class/net/<interface name>/statistics/*
>>>
>>> When did you face this issue?
>>> From the beginning or...?
>>>
>>> Shinobu
>>>
>>> ----- Original Message -----
>>> From: "谷枫" <feicheche@xxxxxxxxx>
>>> To: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
>>> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>> Sent: Sunday, September 13, 2015 12:06:25 PM
>>> Subject: Re:  ceph-fuse auto down
>>>
>>> All clients use the same ceph-fuse version. All of them are troubled by
>>> this problem; just the crash times differ.
>>>
>>>
>>> 2015-09-13 10:39 GMT+08:00 Shinobu Kinjo <skinjo@xxxxxxxxxx>:
>>>
>>> > So you are using the same version on the other clients?
>>> > But only one client has the problem?
>>> >
>>> > Can you provide:
>>> >
>>> >   /sys/class/net/<interface name>/statistics/*
>>> >
>>> > just do:
>>> >
>>> >   tar cvf <host name>.tar \
>>> >   /sys/class/net/<interface name>/statistics/*
>>> >
>>> > Can you leave the machine as it is when the same issue happens next?
>>> > No reboot is necessary.
>>> >
>>> > But if you have to reboot, of course you can.
>>> >
>>> > Shinobu
>>> >
>>> > ----- Original Message -----
>>> > From: "谷枫" <feicheche@xxxxxxxxx>
>>> > To: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
>>> > Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>> > Sent: Sunday, September 13, 2015 11:30:57 AM
>>> > Subject: Re:  ceph-fuse auto down
>>> >
>>> > Yes, when a ceph-fuse process crashes, the mount is gone and can't be
>>> > remounted. Rebooting the server is the only thing I can do.
>>> > But the other clients with ceph-fuse mounted on them are working well. I
>>> > can write / read data on them.
>>> >
>>> > ceph-fuse --version
>>> > ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
>>> >
>>> > ceph -s
>>> > cluster 0fddc8e0-9e64-4049-902a-2f0f6d531630
>>> >      health HEALTH_OK
>>> >      monmap e1: 3 mons at {ceph01=
>>> > 10.3.1.11:6789/0,ceph02=10.3.1.12:6789/0,ceph03=10.3.1.13:6789/0}
>>> >             election epoch 8, quorum 0,1,2 ceph01,ceph02,ceph03
>>> >      mdsmap e29: 1/1/1 up {0=ceph04=up:active}, 1 up:standby
>>> >      osdmap e26: 4 osds: 4 up, 4 in
>>> >       pgmap v94931: 320 pgs, 3 pools, 90235 MB data, 241 kobjects
>>> >             289 GB used, 1709 GB / 1999 GB avail
>>> >                  320 active+clean
>>> >   client io 1023 kB/s rd, 1210 kB/s wr, 72 op/s
>>> >
>>> > 2015-09-13 10:23 GMT+08:00 Shinobu Kinjo <skinjo@xxxxxxxxxx>:
>>> >
>>> > > Can you give us the package version of ceph-fuse?
>>> > >
>>> > > > Multi ceph-fuse crash just now today.
>>> > >
>>> > > Did you just mount the filesystem, or was there any
>>> > > activity on the filesystem?
>>> > >
>>> > >   e.g: writing / reading data
>>> > >
>>> > > Can you give us the output of this on the cluster side:
>>> > >
>>> > >   ceph -s
>>> > >
>>> > > Shinobu
>>> > >
>>> > > ----- Original Message -----
>>> > > From: "谷枫" <feicheche@xxxxxxxxx>
>>> > > To: "Shinobu Kinjo" <skinjo@xxxxxxxxxx>
>>> > > Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
>>> > > Sent: Sunday, September 13, 2015 10:51:35 AM
>>> > > Subject: Re:  ceph-fuse auto down
>>> > >
>>> > > Sorry Shinobu,
>>> > > I don't understand what the output you pasted means.
>>> > > Multiple ceph-fuse instances crashed just now today.
>>> > > ceph-fuse is completely unusable for me now.
>>> > > Maybe I must switch to the kernel mount instead.
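>>> > >
>>> > > (A rough sketch of the kernel mount I have in mind, assuming the same
>>> > > monitors and mount point, and that the admin secret has been extracted
>>> > > into a plain secret file such as /etc/ceph/admin.secret:
>>> > >
>>> > >   sudo mount -t ceph 10.3.1.11:6789,10.3.1.12:6789,10.3.1.13:6789:/ /grdata \
>>> > >        -o name=admin,secretfile=/etc/ceph/admin.secret
>>> > > )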
>>> > >
>>> > > 2015-09-12 20:08 GMT+08:00 Shinobu Kinjo <skinjo@xxxxxxxxxx>:
>>> > >
>>> > > > In _usr_bin_ceph-fuse.0.crash.client2.tar
>>> > > >
>>> > > > What I'm seeing now is:
>>> > > >
>>> > > >   3 Date: Sat Sep 12 06:37:47 2015
>>> > > >  ...
>>> > > >   6 ExecutableTimestamp: 1440614242
>>> > > >  ...
>>> > > >   7 ProcCmdline: ceph-fuse -k /etc/ceph.new/ceph.client.admin.keyring -m
>>> > > > 10.3.1.11,10.3.1.12,10.3.1.13 /grdata
>>> > > >  ...
>>> > > >  30  7f32de7fe000-7f32deffe000 rw-p 00000000 00:00 0
>>> > > >     [stack:17270]
>>> > > >  ...
>>> > > > 250  7f341021d000-7f3410295000 r-xp 00000000 fd:01 267219
>>> > > >    /usr/lib/x86_64-linux-gnu/nss/libfreebl3.so
>>> > > >  ...
>>> > > > 255  7f341049b000-7f341054f000 r-xp 00000000 fd:01 266443
>>> > > >    /usr/lib/x86_64-linux-gnu/libsqlite3.so.0.8.6
>>> > > >  ...
>>> > > > 260  7f3410754000-7f3410794000 r-xp 00000000 fd:01 267222
>>> > > >    /usr/lib/x86_64-linux-gnu/nss/libsoftokn3.so
>>> > > >  ...
>>> > > > 266  7f3411197000-7f341119a000 r-xp 00000000 fd:01 264953
>>> > > >    /usr/lib/x86_64-linux-gnu/libplds4.so
>>> > > >  ...
>>> > > > 271  7f341139f000-7f341159e000 ---p 00004000 fd:01 264955
>>> > > >    /usr/lib/x86_64-linux-gnu/libplc4.so
>>> > > >  ...
>>> > > > 274  7f34115a0000-7f34115c5000 r-xp 00000000 fd:01 267214
>>> > > >    /usr/lib/x86_64-linux-gnu/libnssutil3.so
>>> > > >  ...
>>> > > > 278  7f34117cb000-7f34117ce000 r-xp 00000000 fd:01 1189512
>>> > > >     /lib/x86_64-linux-gnu/libdl-2.19.so
>>> > > >  ...
>>> > > > 287  7f3411d94000-7f3411daa000 r-xp 00000000 fd:01 1179825
>>> > > >     /lib/x86_64-linux-gnu/libgcc_s.so.1
>>> > > >  ...
>>> > > > 294  7f34122b0000-7f3412396000 r-xp 00000000 fd:01 266069
>>> > > >    /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19
>>> > > >  ...
>>> > > > 458  State: D (disk sleep)
>>> > > >  ...
>>> > > > 359  VmPeak:     5250648 kB
>>> > > > 360  VmSize:     4955592 kB
>>> > > >  ...
>>> > > >
>>> > > > What were you trying to do?
>>> > > >
>>> > > > Shinobu
>>> > > >
>>> > > >
>>> > >
>>> >
>>>
>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



