-Mykola
On Tue, Feb 2, 2016 at 8:42 PM, Zhao Xu <xuzh.fdu@xxxxxxxxx> wrote:
Thank you Mykola. The issue is that we have strongly suggested adding OSDs many times, but we are not the decision makers.

For now, I just want to mount the ceph drive again, even in read-only mode, so that they can read the data. Any idea on how to achieve this?

Thanks,
X
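For reference, a read-only attempt would just add the standard "ro" mount option; a minimal sketch, assuming the kernel CephFS client and the same monitor list and secret file as in the mount command quoted below:

# mount -v -t ceph igc-head,is1,i1,i2,i3:6789:/ /mnt/igcfs/ -o ro,name=admin,secretfile=/etc/admin.secret

Note that if the mount error 5 below comes from an unhealthy MDS rather than from blocked writes, a read-only mount may still fail the same way.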
On Tue, Feb 2, 2016 at 9:57 AM, Mykola Dvornik <mykola.dvornik@xxxxxxxxx> wrote:

I would strongly(!) suggest you add a few more OSDs to the cluster before things get worse / corrupted.

-Mykola
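For context, adding OSDs at this point would typically be done with ceph-deploy; a rough sketch, assuming a spare disk (the device name /dev/sdb on host i1 is hypothetical):

$ ceph-deploy osd prepare i1:/dev/sdb      # partition the disk and create the OSD data/journal
$ ceph-deploy osd activate i1:/dev/sdb1    # start the OSD and add it to the CRUSH map

The exact invocation depends on how the existing OSDs were deployed; running ceph-disk prepare/activate directly on the host is the equivalent lower-level path.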
On Tue, Feb 2, 2016 at 6:45 PM, Zhao Xu <xuzh.fdu@xxxxxxxxx> wrote:
Hi All,

Recently our ceph storage has been running at low performance. Today, we cannot write to the folder. We tried to unmount the ceph storage and re-mount it; however, we cannot even mount it now:

# mount -v -t ceph igc-head,is1,i1,i2,i3:6789:/ /mnt/igcfs/ -o name=admin,secretfile=/etc/admin.secret
parsing options: rw,name=admin,secretfile=/etc/admin.secret
mount error 5 = Input/output error

Previously there were some nearly full OSDs, so we ran "ceph osd reweight-by-utilization" to rebalance the usage. The ceph health is not ideal, but the cluster should still be alive. Please help me to mount the disk again.

[root@igc-head ~]# ceph -s
    cluster debdcfe9-20d3-404b-921c-2210534454e1
     health HEALTH_WARN
            39 pgs degraded
            39 pgs stuck degraded
            3 pgs stuck inactive
            332 pgs stuck unclean
            39 pgs stuck undersized
            39 pgs undersized
            48 requests are blocked > 32 sec
            recovery 129755/8053623 objects degraded (1.611%)
            recovery 965837/8053623 objects misplaced (11.993%)
            mds0: Behind on trimming (455/30)
            clock skew detected on mon.i1, mon.i2, mon.i3
     monmap e1: 5 mons at {i1=10.1.10.11:6789/0,i2=10.1.10.12:6789/0,i3=10.1.10.13:6789/0,igc-head=10.1.10.1:6789/0,is1=10.1.10.100:6789/0}
            election epoch 1314, quorum 0,1,2,3,4 igc-head,i1,i2,i3,is1
     mdsmap e1602: 1/1/1 up {0=igc-head=up:active}
     osdmap e8007: 17 osds: 17 up, 17 in; 298 remapped pgs
      pgmap v5726326: 1088 pgs, 3 pools, 7442 GB data, 2621 kobjects
            22228 GB used, 18652 GB / 40881 GB avail
            129755/8053623 objects degraded (1.611%)
            965837/8053623 objects misplaced (11.993%)
                 755 active+clean
                 293 active+remapped
                  31 active+undersized+degraded
                   5 active+undersized+degraded+remapped
                   3 undersized+degraded+peered
                   1 active+clean+scrubbing

[root@igc-head ~]# ceph osd tree
ID WEIGHT   TYPE NAME     UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 39.86992 root default
-2 18.14995     host is1
 0  3.62999         osd.0      up  1.00000          1.00000
 1  3.62999         osd.1      up  1.00000          1.00000
 2  3.62999         osd.2      up  1.00000          1.00000
 3  3.62999         osd.3      up  1.00000          1.00000
 4  3.62999         osd.4      up  1.00000          1.00000
-3  7.23999     host i1
 5  1.81000         osd.5      up  0.44101          1.00000
 6  1.81000         osd.6      up  0.40675          1.00000
 7  1.81000         osd.7      up  0.60754          1.00000
 8  1.81000         osd.8      up  0.50868          1.00000
-4  7.23999     host i2
 9  1.81000         osd.9      up  0.54956          1.00000
10  1.81000         osd.10     up  0.44815          1.00000
11  1.81000         osd.11     up  0.53262          1.00000
12  1.81000         osd.12     up  0.47197          1.00000
-5  7.23999     host i3
13  1.81000         osd.13     up  0.55557          1.00000
14  1.81000         osd.14     up  0.65874          1.00000
15  1.81000         osd.15     up  0.49663          1.00000
16  1.81000         osd.16     up  0.50136          1.00000

Thanks,
X
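A few standard commands might narrow down what is blocking I/O here; a sketch, with osd.N standing in for whichever OSDs the health output names:

$ ceph health detail                      # lists which PGs are stuck and which OSDs hold the 48 blocked requests
$ ceph pg dump_stuck inactive             # the 3 undersized+degraded+peered PGs are the likeliest cause of the I/O errors
$ ceph daemon osd.N dump_ops_in_flight    # run on the OSD's host; inspects blocked requests via the admin socket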
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com