Ceph OSD: Memory Leak problem

Hi,

I installed a Ceph cluster with 2 MONs, 1 MDS, and 10 OSDs.

While performing rados put operations to store objects in the Ceph cluster, I am getting OSD errors like the following:

2015-11-28 23:02:03.276821 7f7f5affb700  0 -- 10.176.128.135:0/1009266 >> 10.176.128.136:6800/22824 pipe(0x7f7f6000e190 sd=6 :0 s=1 pgs=0 cs=0 l=1 c=0x7f7f60012430).fault
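
For reference, the put operation is of this form (the pool and object names below are placeholders, not my real ones):

rados -p <pool-name> put <object-name> <input-file>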

Following the comments in Bug #3883, I restarted the corresponding OSD (10.176.128.135), but that did not fix the problem for me.
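
For completeness, this is how I restarted the daemon on that node (sysvinit-style init on my systems; on upstart-based nodes the equivalent would be "restart ceph-osd id=<id>"):

sudo service ceph restart osd.<id>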

I also observe that during the operation some of the OSDs go down and then come back up automatically after some time.
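
I have been watching the flapping with the following (all standard commands; the log path assumes the default log location):

ceph -w
ceph health detail
tail -f /var/log/ceph/ceph-osd.<id>.log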

Following is the output of ceph osd tree:

ID  WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY
 -1 0.41992 root default                                          
 -2 0.03999     host ceph-node2                                   
  0 0.03999         osd.0             up  1.00000          1.00000
 -3 0.04999     host ceph-node4                                   
  1 0.04999         osd.1             up  1.00000          1.00000
 -4 0.03999     host ceph-node1                                   
  2 0.03999         osd.2             up  1.00000          1.00000
 -5 0.04999     host ceph-node6                                   
  3 0.04999         osd.3           down        0          1.00000
 -6 0.03000     host ceph-node5                                   
  4 0.03000         osd.4             up  1.00000          1.00000
 -7 0.04999     host ceph-node7                                   
  5 0.04999         osd.5             up  1.00000          1.00000
 -8 0.03999     host ceph-node8                                   
  6 0.03999         osd.6             up  1.00000          1.00000
 -9 0.03999     host ceph-node9                                   
  7 0.03999         osd.7             up  1.00000          1.00000
-10 0.07999     host ceph-node10                                  
  8 0.03999         osd.8             up  1.00000          1.00000
  9 0.03999         osd.9             up  1.00000          1.00000
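
Since the subject mentions a memory leak: if it helps with diagnosis, I can also pull heap statistics from the affected OSDs (this assumes the OSDs are built with tcmalloc, which is the default):

ceph tell osd.<id> heap stats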


Can someone help me with this issue?

Thanks & Regards
 
Prasad Pande

