Re: CephFS damaged and cannot recover

On Wed, Jun 19, 2019 at 9:19 AM Wei Jin <wjin.cn@xxxxxxxxx> wrote:
>
> There is a lot of data in this cluster (2PB), so please help us, thanks.
> Before attempting these dangerous
> operations (http://docs.ceph.com/docs/master/cephfs/disaster-recovery-experts/#disaster-recovery-experts),
> are there any suggestions?
>
> Ceph version: 12.2.12
>
> ceph fs status:
>
> cephfs - 1057 clients
> ======
> +------+---------+-------------+----------+-------+-------+
> | Rank |  State  |     MDS     | Activity |  dns  |  inos |
> +------+---------+-------------+----------+-------+-------+
> |  0   |  failed |             |          |       |       |
> |  1   | resolve | n31-023-214 |          |    0  |    0  |
> |  2   | resolve | n31-023-215 |          |    0  |    0  |
> |  3   | resolve | n31-023-218 |          |    0  |    0  |
> |  4   | resolve | n31-023-220 |          |    0  |    0  |
> |  5   | resolve | n31-023-217 |          |    0  |    0  |
> |  6   | resolve | n31-023-222 |          |    0  |    0  |
> |  7   | resolve | n31-023-216 |          |    0  |    0  |
> |  8   | resolve | n31-023-221 |          |    0  |    0  |
> |  9   | resolve | n31-023-223 |          |    0  |    0  |
> |  10  | resolve | n31-023-225 |          |    0  |    0  |
> |  11  | resolve | n31-023-224 |          |    0  |    0  |
> |  12  | resolve | n31-023-219 |          |    0  |    0  |
> |  13  | resolve | n31-023-229 |          |    0  |    0  |
> +------+---------+-------------+----------+-------+-------+
> +-----------------+----------+-------+-------+
> |       Pool      |   type   |  used | avail |
> +-----------------+----------+-------+-------+
> | cephfs_metadata | metadata | 2843M | 34.9T |
> |   cephfs_data   |   data   | 2580T |  731T |
> +-----------------+----------+-------+-------+
>
> +-------------+
> | Standby MDS |
> +-------------+
> | n31-023-227 |
> | n31-023-226 |
> | n31-023-228 |
> +-------------+

Are there failovers occurring while all the ranks are in up:resolve?
MDS logs at high debug level would be helpful.
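For reference, one common way to capture high-debug MDS logs on a running Luminous (12.2.x) cluster is to inject debug settings into the active daemons and then collect their log files. This is a sketch, not an exact prescription; the debug levels shown (20 for the MDS subsystem, 1 for the messenger) are conventional values for CephFS troubleshooting, and the log path assumes the default location:

```shell
# Raise debug verbosity on all MDS daemons at runtime (no restart needed).
# debug_mds=20 is very verbose; debug_ms=1 adds messenger-level traffic.
ceph tell mds.* injectargs '--debug_mds 20 --debug_ms 1'

# Alternatively, make it persistent across restarts by adding to ceph.conf
# on the MDS hosts:
#   [mds]
#   debug mds = 20
#   debug ms = 1

# Logs then accumulate in the default location on each MDS host,
# e.g. /var/log/ceph/ceph-mds.<name>.log

# When done, restore the defaults to avoid filling the disk:
ceph tell mds.* injectargs '--debug_mds 1 --debug_ms 0'
```

Note that debug level 20 generates a large volume of output, so it is worth checking free space on the log partition before enabling it across all ranks.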

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
