Problem with Ceph 0.72 on FC20

Hi,
I'm trying to deploy Ceph 0.72.2 on Fedora 20, but I'm having some issues.
I have tried both compiling Ceph myself and installing the RPMs from http://gitbuilder.ceph.com/ceph-rpm-fedora20-x86_64-basic/ref/emperor/, with the same result: my OSDs die about 15 minutes after being deployed.
The server has a fresh FC20 installation (minimal install); the Ceph packages were installed with:

rpm -Uvh --replacepkgs --force http://gitbuilder.ceph.com/ceph-rpm-fedora20-x86_64-basic/ref/emperor/noarch/ceph-release-1-0.fc20.noarch.rpm
yum -y install ceph
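
To double-check which build actually ended up installed (the gitbuilder emperor packages rather than anything from the stock Fedora repos), something like the following can be run on the node; a small sketch, assuming the default package name:

ceph --version                           # should report 0.72.2 (emperor)
rpm -qi ceph | grep -E 'Version|Release' # version/release of the installed package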


I'm using "ceph-deploy" v1.3.4 to deploy the cluster, here are commands I'm running to deploy the cluster:

ceph-deploy new file-0a01801e
ceph-deploy mon create
ceph-deploy gatherkeys file-0a01801e
ceph-deploy osd create file-0a01801e:/dev/sdc file-0a01801e:/dev/sdd file-0a01801e:/dev/sde file-0a01801e:/dev/sdf file-0a01801e:/dev/sdg file-0a01801e:/dev/sdh --zap
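
At this point the cluster state can be checked with the standard status commands (a sketch; the ceph.log excerpt below confirms all six OSDs initially come up and in):

ceph -s          # overall cluster status: mon quorum, osdmap, pgmap summary
ceph osd tree    # CRUSH tree; should list osd.0 through osd.5 as up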


For about 15 minutes everything seems to be OK, and then (excerpt from the ceph.log file):

2014-01-23 00:29:24.371184 mon.0 10.1.128.30:6789/0 123 : [INF] osdmap e45: 6 osds: 6 up, 6 in
2014-01-23 00:29:24.411854 mon.0 10.1.128.30:6789/0 124 : [INF] pgmap v51: 192 pgs: 15 peering, 40 stale+active+remapped, 26 stale+active+degraded, 18 remapped+peering, 20 stale+active+replay+remapped, 14 stale+active+replay+degraded, 40 stale+active+degraded+remapped, 19 stale+active+replay+degraded+remapped; 0 bytes data, 170 MB used, 4605 GB / 4605 GB avail
2014-01-23 00:44:23.363641 mon.0 10.1.128.30:6789/0 125 : [INF] osd.4 marked down after no pg stats for 902.107940seconds
2014-01-23 00:44:23.454388 mon.0 10.1.128.30:6789/0 126 : [INF] osdmap e46: 6 osds: 5 up, 6 in
2014-01-23 00:44:23.499186 mon.0 10.1.128.30:6789/0 127 : [INF] pgmap v52: 192 pgs: 15 stale+peering, 40 stale+active+remapped, 26 stale+active+degraded, 20 stale+active+replay+remapped, 18 stale+remapped+peering, 14 stale+active+replay+degraded, 40 stale+active+degraded+remapped, 19 stale+active+replay+degraded+remapped; 0 bytes data, 170 MB used, 4605 GB / 4605 GB avail
2014-01-23 00:49:23.511763 mon.0 10.1.128.30:6789/0 128 : [INF] osd.4 out (down for 300.058608)
2014-01-23 00:49:23.620808 mon.0 10.1.128.30:6789/0 129 : [INF] osdmap e47: 6 osds: 5 up, 5 in
2014-01-23 00:49:23.672991 mon.0 10.1.128.30:6789/0 130 : [INF] pgmap v53: 192 pgs: 15 stale+peering, 40 stale+active+remapped, 26 stale+active+degraded, 20 stale+active+replay+remapped, 18 stale+remapped+peering, 14 stale+active+replay+degraded, 40 stale+active+degraded+remapped, 19 stale+active+replay+degraded+remapped; 0 bytes data, 136 MB used, 3684 GB / 3684 GB avail
2014-01-23 00:49:43.673744 mon.0 10.1.128.30:6789/0 131 : [INF] osd.0 marked down after no pg stats for 900.332388seconds
2014-01-23 00:49:43.673837 mon.0 10.1.128.30:6789/0 132 : [INF] osd.1 marked down after no pg stats for 900.332388seconds
2014-01-23 00:49:43.673866 mon.0 10.1.128.30:6789/0 133 : [INF] osd.2 marked down after no pg stats for 900.332388seconds
2014-01-23 00:49:43.673929 mon.0 10.1.128.30:6789/0 134 : [INF] osd.3 marked down after no pg stats for 900.332388seconds
2014-01-23 00:49:43.760811 mon.0 10.1.128.30:6789/0 135 : [INF] osdmap e48: 6 osds: 1 up, 5 in
2014-01-23 00:49:43.812961 mon.0 10.1.128.30:6789/0 136 : [INF] pgmap v54: 192 pgs: 15 stale+peering, 40 stale+active+remapped, 26 stale+active+degraded, 20 stale+active+replay+remapped, 18 stale+remapped+peering, 14 stale+active+replay+degraded, 40 stale+active+degraded+remapped, 19 stale+active+replay+degraded+remapped; 0 bytes data, 136 MB used, 3684 GB / 3684 GB avail
2014-01-23 00:49:43.888011 mon.0 10.1.128.30:6789/0 137 : [INF] osdmap e49: 6 osds: 1 up, 5 in
2014-01-23 00:49:43.937904 mon.0 10.1.128.30:6789/0 138 : [INF] pgmap v55: 192 pgs: 15 stale+peering, 40 stale+active+remapped, 26 stale+active+degraded, 20 stale+active+replay+remapped, 18 stale+remapped+peering, 14 stale+active+replay+degraded, 40 stale+active+degraded+remapped, 19 stale+active+replay+degraded+remapped; 0 bytes data, 136 MB used, 3684 GB / 3684 GB avail
2014-01-23 00:54:43.950475 mon.0 10.1.128.30:6789/0 139 : [INF] osd.0 out (down for 300.190819)
2014-01-23 00:54:43.950521 mon.0 10.1.128.30:6789/0 140 : [INF] osd.1 out (down for 300.190818)
2014-01-23 00:54:43.950547 mon.0 10.1.128.30:6789/0 141 : [INF] osd.2 out (down for 300.190818)
2014-01-23 00:54:43.950601 mon.0 10.1.128.30:6789/0 142 : [INF] osd.3 out (down for 300.190817)
2014-01-23 00:54:44.060024 mon.0 10.1.128.30:6789/0 143 : [INF] osdmap e50: 6 osds: 1 up, 1 in
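
The "marked down after no pg stats" messages suggest the monitors simply stopped hearing from the OSD daemons. This is the kind of check I can run on the OSD host to see whether the ceph-osd processes are still alive and what they logged before going quiet (a diagnostic sketch, assuming the default log location):

pgrep -a ceph-osd                          # are the ceph-osd daemons still running?
tail -n 100 /var/log/ceph/ceph-osd.4.log   # last thing osd.4 logged before being marked down
ceph osd tree                              # which OSDs the cluster still considers up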

Can anyone help? Am I missing some commands?

Best regards,
PK

--
Paul Kilar
meZocliq
1700 Broadway, Suite 3100
New York, NY 10019
Phone: (347) 817-6362

