Re: glusterfs segmentation fault in rdma mode

Hi, with only one client there were no problems, even under very heavy traffic. But when several clients wrote to the same volume at once, the segmentation fault appeared. I tried running the client under gdb, but throughput was then much lower than in the earlier tests and the crash never reproduced. So the problem seems to occur only when multiple clients write to the same volume at very high throughput (e.g., more than 1 GiB/s per client); a sketch of that kind of load follows below.
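For reference, a minimal sketch of the kind of concurrent write load described above, assuming passwordless ssh to the clients; the host names, mount point, and sizes are illustrative placeholders, not our exact test setup:

    # Drive large sequential writes from several clients at once into
    # the same gluster volume (mounted at /mnt/gvol on each client).
    for c in client1 client2 client3 client4; do
        ssh "$c" 'dd if=/dev/zero of=/mnt/gvol/stress.$(hostname) bs=1M count=10240' &
    done
    wait    # all four dd runs proceed in parallel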
------------------ Original Message ------------------
From: "Ben Turner" <bturner@xxxxxxxxxx>
Date: Sunday, November 5, 2017, 3:00 AM
To: "自由人" <21291285@xxxxxx>
Cc: "gluster-users" <gluster-users@xxxxxxxxxxx>
Subject: Re: [Gluster-users] glusterfs segmentation fault in rdma mode
This looks like there could be some problem requesting / leaking / whatever memory, but without looking at the core it's tough to tell for sure.   Note:

/usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618]

Can you open up a bugzilla and get us the core file to review?

-b
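For anyone hitting the same crash: capturing a core dump avoids the gdb slowdown mentioned above. A rough sketch, assuming root and a typical CentOS layout (the core_pattern path and binary location are assumptions; on systems running abrt, core_pattern may already be redirected):

    # Allow unlimited-size core dumps and pick a predictable location.
    ulimit -c unlimited
    mkdir -p /var/crash
    echo '/var/crash/core.%e.%p.%t' > /proc/sys/kernel/core_pattern

    # After reproducing the segfault, pull a full backtrace to attach to
    # the bugzilla (install the glusterfs debuginfo packages first).
    gdb -batch -ex 'thread apply all bt full' \
        /usr/sbin/glusterfs /var/crash/core.glusterfs.*

The core file itself, plus the exact glusterfs package versions, are what make the crash analyzable.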

----- Original Message -----
> From: "自由人" <21291285@xxxxxx>
> To: "gluster-users" <gluster-users@xxxxxxxxxxx>
> Sent: Saturday, November 4, 2017 5:27:50 AM
> Subject: [Gluster-users] glusterfs segmentation fault in rdma mode
>
>
>
> Hi, All,
>
>
>
>
> I used InfiniBand to connect all the GlusterFS nodes and the clients. Previously
> I ran IP over IB and everything was OK. Then I switched to the rdma transport
> mode and ran the same traffic. After a while, the glusterfs process exited
> because of a segmentation fault.
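Switching a volume from IPoIB to the rdma transport typically looks like the following; the volume and host names are hypothetical, and the volume must be stopped before changing config.transport:

    # Switch an existing volume to the RDMA transport (names are examples).
    gluster volume stop testvol
    gluster volume set testvol config.transport rdma
    gluster volume start testvol

    # Mount on the client over RDMA rather than IPoIB:
    mount -t glusterfs -o transport=rdma server1:/testvol /mnt/gvol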
>
>
>
>
> Here are the messages from the time of the segmentation fault:
>
> pending frames:
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(1) op(WRITE)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
> frame : type(0) op(0)
>
> patchset: git://git.gluster.org/glusterfs.git
> signal received: 11
> time of crash:
> 2017-11-01 11:11:23
> configuration details:
> argp 1
> backtrace 1
> dlfcn 1
> libpthread 1
> llistxattr 1
> setfsid 1
> spinlock 1
> epoll.h 1
> xattr.h 1
> st_atim.tv_nsec 1
> package-string: glusterfs 3.11.0
> /usr/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0x78)[0x7f95bc54e618]
> /usr/lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7f95bc557834]
> /lib64/libc.so.6(+0x32510)[0x7f95bace2510]
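The quoted trace shows only the crash-handler frames. With the matching debuginfo installed, such symbol+offset entries can be resolved to source lines; a sketch, assuming the 3.11.0 debuginfo packages are present:

    # Map a backtrace entry like gf_print_trace+0x324 to file:line.
    gdb -batch /usr/lib64/libglusterfs.so.0 \
        -ex 'info line *(gf_print_trace+0x324)'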
>
> The client OS was CentOS 7.3 and the server OS was CentOS 6.5. The GlusterFS
> version was 3.11.0 on both clients and servers. The InfiniBand cards were
> Mellanox, with Mellanox IB driver version v4.1-1.0.2 (27 Jun 2017) on both
> clients and servers.
>
>
> Is the rdma transport code in GlusterFS stable? Do I need to upgrade the IB
> driver or apply a patch?
>
> Thanks!
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users
