Re: Urgent help needed for ceph storage "mount error 5 = Input/output error"


 



Try mounting with ceph-fuse. It worked for me when I faced the same sort of issue you are dealing with now.
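A minimal invocation would look something like this (the monitor address and mount point below are taken from your mount command; adjust if your setup differs):

```shell
# Mount CephFS via the FUSE client instead of the kernel client.
# -m accepts any monitor address; by default ceph-fuse reads the
# admin key from /etc/ceph/ceph.client.admin.keyring.
sudo mkdir -p /mnt/igcfs
sudo ceph-fuse -m igc-head:6789 /mnt/igcfs
```

The FUSE client is often more forgiving than the kernel client when the MDS is in a bad state, and it can be killed and restarted without touching the kernel.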

-Mykola


On Tue, Feb 2, 2016 at 8:42 PM, Zhao Xu <xuzh.fdu@xxxxxxxxx> wrote:
Thank you Mykola. The issue is that we have strongly suggested adding OSDs many times, but we are not the decision makers.
For now, I just want to mount the ceph drive again, even in read-only mode, so that they can read the data. Any idea on how to achieve this?

Thanks,
X

On Tue, Feb 2, 2016 at 9:57 AM, Mykola Dvornik <mykola.dvornik@xxxxxxxxx> wrote:
I would strongly(!) suggest adding a few more OSDs to the cluster before things get worse or data gets corrupted.

-Mykola


On Tue, Feb 2, 2016 at 6:45 PM, Zhao Xu <xuzh.fdu@xxxxxxxxx> wrote:
Hi All,
  Recently our ceph storage is running at low performance. Today, we can not write to the folder. We tried to unmount the ceph storage then to re-mount it, however, we can not even mount it now:

# mount -v -t  ceph igc-head,is1,i1,i2,i3:6789:/ /mnt/igcfs/ -o name=admin,secretfile=/etc/admin.secret
parsing options: rw,name=admin,secretfile=/etc/admin.secret
mount error 5 = Input/output error
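(Error 5 from mount.ceph is usually just the kernel client relaying a failure whose real cause is logged in the kernel ring buffer; something like the following may show more detail. The `ceph`/`libceph` tags are the kernel modules involved.)

```shell
# Check the kernel log for the CephFS client's error messages
# emitted around the time of the failed mount attempt.
dmesg | grep -iE 'libceph|ceph' | tail -n 20
```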

  Previously some OSDs were nearly full, so we ran "ceph osd reweight-by-utilization" to rebalance the usage. The ceph health is not ideal, but the cluster should still be alive. Please help me to mount the disk again.
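(A couple of read-only commands that may help pinpoint what is behind the warnings; `ceph health detail` expands HEALTH_WARN into the specific PGs and OSDs involved, and `ceph osd df` shows per-OSD utilization on Hammer and later releases.)

```shell
# List the specific stuck/degraded PGs and the OSDs with blocked requests.
ceph health detail

# Show per-OSD usage, to confirm whether any OSD is still near-full
# after the reweight-by-utilization.
ceph osd df
```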

[root@igc-head ~]# ceph -s
    cluster debdcfe9-20d3-404b-921c-2210534454e1
     health HEALTH_WARN
            39 pgs degraded
            39 pgs stuck degraded
            3 pgs stuck inactive
            332 pgs stuck unclean
            39 pgs stuck undersized
            39 pgs undersized
            48 requests are blocked > 32 sec
            recovery 129755/8053623 objects degraded (1.611%)
            recovery 965837/8053623 objects misplaced (11.993%)
            mds0: Behind on trimming (455/30)
            clock skew detected on mon.i1, mon.i2, mon.i3
            election epoch 1314, quorum 0,1,2,3,4 igc-head,i1,i2,i3,is1
     mdsmap e1602: 1/1/1 up {0=igc-head=up:active}
     osdmap e8007: 17 osds: 17 up, 17 in; 298 remapped pgs
      pgmap v5726326: 1088 pgs, 3 pools, 7442 GB data, 2621 kobjects
            22228 GB used, 18652 GB / 40881 GB avail
            129755/8053623 objects degraded (1.611%)
            965837/8053623 objects misplaced (11.993%)
                 755 active+clean
                 293 active+remapped
                  31 active+undersized+degraded
                   5 active+undersized+degraded+remapped
                   3 undersized+degraded+peered
                   1 active+clean+scrubbing

[root@igc-head ~]# ceph osd tree
ID WEIGHT   TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 39.86992 root default                                     
-2 18.14995     host is1                                     
 0  3.62999         osd.0       up  1.00000          1.00000 
 1  3.62999         osd.1       up  1.00000          1.00000 
 2  3.62999         osd.2       up  1.00000          1.00000 
 3  3.62999         osd.3       up  1.00000          1.00000 
 4  3.62999         osd.4       up  1.00000          1.00000 
-3  7.23999     host i1                                      
 5  1.81000         osd.5       up  0.44101          1.00000 
 6  1.81000         osd.6       up  0.40675          1.00000 
 7  1.81000         osd.7       up  0.60754          1.00000 
 8  1.81000         osd.8       up  0.50868          1.00000 
-4  7.23999     host i2                                      
 9  1.81000         osd.9       up  0.54956          1.00000 
10  1.81000         osd.10      up  0.44815          1.00000 
11  1.81000         osd.11      up  0.53262          1.00000 
12  1.81000         osd.12      up  0.47197          1.00000 
-5  7.23999     host i3                                      
13  1.81000         osd.13      up  0.55557          1.00000 
14  1.81000         osd.14      up  0.65874          1.00000 
15  1.81000         osd.15      up  0.49663          1.00000 
16  1.81000         osd.16      up  0.50136          1.00000 


Thanks,
X

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
