Re: cephfs-client Segmentation fault with not-root mount point

Thank you for your reply.
I will recompile the code with the patch and test whether it fixes the crash.
I will let you know the result.
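In case it helps others following along, here is a rough sketch of one way to pull the fix from the pull request referenced below (https://github.com/ceph/ceph/pull/10027) onto v10.2.2 and rebuild. The branch names and build steps are my own assumptions, not taken from this thread; adapt them to your packaging workflow (e.g. rpmbuild from ceph.spec):

  # Sketch only: fetch the PR branch, merge it onto the v10.2.2 tag, rebuild.
  git clone https://github.com/ceph/ceph.git && cd ceph
  git checkout -b jewel-pr10027 v10.2.2        # hypothetical local branch name
  git submodule update --init --recursive
  git fetch origin pull/10027/head:pr-10027    # GitHub exposes PRs at refs/pull/<id>/head
  git merge pr-10027                           # bring in the fix from the PR
  ./install-deps.sh                            # install build dependencies
  ./autogen.sh && ./configure && make -j"$(nproc)"   # jewel-era autotools build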


At 2016-09-18 19:18:18, "Goncalo Borges" <goncalo.borges@xxxxxxxxxxxxx> wrote:
>Hi...
>
>I think you are seeing an issue we saw some time ago. Your segfault seems the same as the one we had, but please confirm against the info in
>
>https://github.com/ceph/ceph/pull/10027
>
>We solved it by recompiling ceph with the patch described above.
>
>I think it should be fixed in the next bug-fix release.
>
>Cheers
>G.
>
>________________________________________
>From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of yu2xiangyang [yu2xiangyang@xxxxxxx]
>Sent: 18 September 2016 12:14
>To: ceph-users@xxxxxxxxxxxxxx
>Subject: [ceph-users] cephfs-client Segmentation fault with not-root mount point
>
>My environment is described below:
>
>The ceph-fuse client is 10.2.2 and the OSDs are 0.94.3; details below:
>
>[root@localhost ~]# rpm -qa | grep ceph
>libcephfs1-10.2.2-0.el7.centos.x86_64
>python-cephfs-10.2.2-0.el7.centos.x86_64
>ceph-common-0.94.3-0.el7.x86_64
>ceph-fuse-10.2.2-0.el7.centos.x86_64
>ceph-0.94.3-0.el7.x86_64
>ceph-mds-10.2.2-0.el7.centos.x86_64
>
>[root@localhost ~]# rpm -qa | grep rados
>librados2-devel-0.94.3-0.el7.x86_64
>librados2-0.94.3-0.el7.x86_64
>libradosstriper1-0.94.3-0.el7.x86_64
>python-rados-0.94.3-0.el7.x86_64
>
>When I mount cephfs with "ceph-fuse -m 10.222.5.229:6789 --client-mount /client_one /mnt/test", the ceph-fuse client crashes after running for a few hours:
>
> -16> 2016-08-18 18:37:54.134672 7fd552ffd700 3 client.214296 ll_flush 0x7fd5307e8520 10000478575
> -15> 2016-08-18 18:37:54.134717 7fd5128e2700 3 client.214296 ll_release (fh)0x7fd5307e8520 10000478575
> -14> 2016-08-18 18:37:54.134725 7fd5128e2700 5 client.214296 _release_fh 0x7fd5307e8520 mode 1 on 10000478575.head(faked_ino=0 ref=3 ll_ref=11030 cap_refs={1024=0,2048=0} open={1=1} mode=100644 size=12401/0 mtime=2016-08-17 13:49:59.382502 caps=pAsLsXsFscr(0=pAsLsXsFscr) objectset[10000478575 ts 0/0 objects 1 dirty_or_tx 0] parents=0x7fd55c0120d0 0x7fd55c011b30)
> -13> 2016-08-18 18:37:54.136109 7fd551ffb700 3 client.214296 ll_getattr 1000047417f.head
> -12> 2016-08-18 18:37:54.136118 7fd551ffb700 3 client.214296 ll_getattr 1000047417f.head = 0
> -11> 2016-08-18 18:37:54.136126 7fd551ffb700 3 client.214296 ll_forget 1000047417f 1
> -10> 2016-08-18 18:37:54.136133 7fd551ffb700 3 client.214296 ll_lookup 0x7fd55c0108d0 2016
> -9> 2016-08-18 18:37:54.136140 7fd551ffb700 3 client.214296 ll_lookup 0x7fd55c0108d0 2016 -> 0 (10000474182)
> -8> 2016-08-18 18:37:54.136148 7fd551ffb700 3 client.214296 ll_forget 1000047417f 1
> -7> 2016-08-18 18:37:54.136181 7fd5527fc700 3 client.214296 ll_getattr 10000474182.head
> -6> 2016-08-18 18:37:54.136189 7fd5527fc700 3 client.214296 ll_getattr 10000474182.head = 0
> -5> 2016-08-18 18:37:54.136735 7fd550c92700 2 -- 10.155.2.5:0/1557134465 >> 10.155.2.5:6820/4511 pipe(0x7fd54c012ef0 sd=2 :48226 s=2 pgs=107 cs=1 l=1 c=0x7fd54c0141b0).reader couldn't read tag, (0) Success
> -4> 2016-08-18 18:37:54.136792 7fd550c92700 2 -- 10.155.2.5:0/1557134465 >> 10.155.2.5:6820/4511 pipe(0x7fd54c012ef0 sd=2 :48226 s=2 pgs=107 cs=1 l=1 c=0x7fd54c0141b0).fault (0) Success
> -3> 2016-08-18 18:37:54.136950 7fd56bff7700 1 client.214296.objecter ms_handle_reset on osd.5
> -2> 2016-08-18 18:37:54.136967 7fd56bff7700 1 -- 10.155.2.5:0/1557134465 mark_down 0x7fd54c0141b0 -- pipe dne
> -1> 2016-08-18 18:37:54.137054 7fd56bff7700 1 -- 10.155.2.5:0/1557134465 --> 10.155.2.5:6820/4511 -- osd_op(client.214296.0:630732 4.a8ddcaa5 10000493bde.00000000 [write 0~12401] snapc 1=[] RETRY=1 ondisk+retry+write+known_if_redirected e836) v7 -- ?+0 0x7fd55ca2ff40 con 0x7fd55ca6d710
> 0> 2016-08-18 18:37:54.141233 7fd5527fc700 -1 *** Caught signal (Segmentation fault) **
> in thread 7fd5527fc700 thread_name:ceph-fuse
>
> ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
> 1: (()+0x29eeda) [0x7fd57878feda]
> 2: (()+0xf130) [0x7fd577505130]
> 3: (Client::get_root_ino()+0x10) [0x7fd57868be60]
> 4: (CephFuse::Handle::make_fake_ino(inodeno_t, snapid_t)+0x18d) [0x7fd57868992d]
> 5: (()+0x199261) [0x7fd57868a261]
> 6: (()+0x164b5) [0x7fd5780a64b5]
> 7: (()+0x16bdb) [0x7fd5780a6bdb]
> 8: (()+0x13471) [0x7fd5780a3471]
> 9: (()+0x7df5) [0x7fd5774fddf5]
> 10: (clone()+0x6d) [0x7fd5763e61ad]
>
>But when I mount cephfs with "ceph-fuse -m 10.222.5.229:6789 /mnt/test", the ceph-fuse client runs fine for days.
>I do not think the problem is related to the 0.94.3 OSDs.
>Has anyone encountered the same problem with cephfs 10.2.2?



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
