Re: Node crash, filesystem not usable

Could you share some command output showing the state of your cluster? `ceph status` is the most useful, and `ceph osd tree` would also help. What are the sizes of the pools in your cluster? Are they all size=3, min_size=2?
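Something along these lines, run from any node with an admin keyring, should capture all of that (`ceph osd pool ls detail` is available on reasonably recent releases; `ceph osd dump` shows the same per-pool size/min_size settings on older ones):

# ceph status
# ceph osd tree
# ceph osd pool ls detail
# ceph osd dump | grep 'replicated size'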

On Fri, May 11, 2018 at 12:05 PM Daniel Davidson <danield@xxxxxxxxxxxxxxxx> wrote:
Hello,

Today we had a node crash. Looking into it, it seems there is a
problem with the RAID controller, so the node is not coming back up, maybe
ever.  It corrupted the local filesystem used for the Ceph storage there.

The rest of our storage cluster (10.2.10) is running and appears
to be repairing, and our min_size is set to 2.  Normally I would expect
the system to keep running from an end-user perspective when this
happens, but the system is down. All mounts that were up when this
started appear to be stale, and new mounts give the following error:

# mount -t ceph ceph-0:/ /test/ -o
name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev,rbytes
mount error 5 = Input/output error
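
If any PGs have dropped below min_size the cluster will block I/O on them, which would explain both the stale existing mounts and the EIO on new mounts. A quick way to check for that, assuming admin access to a surviving monitor node, is something like the following (the exact states reported will depend on how recovery is progressing):

# ceph health detail
# ceph pg dump_stuck inactive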

Any suggestions?

Dan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
