CephFS 16.2.10 problem

Good evening!

We have run into the following problem.
We have a Ceph 16.2.10 cluster.
The cluster was operating normally on Friday. We shut it down for maintenance:
- Disconnected all clients
- Ran the following commands:
ceph osd set noout
ceph osd set nobackfill
ceph osd set norecover
ceph osd set norebalance
ceph osd set nodown
ceph osd set pause
Then we powered the servers off and carried out the maintenance.
We powered the cluster back on. It came up and found all the nodes, and here the problem began: after all the OSDs came up and all PGs became available, CephFS refused to start.
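We believe the startup flags also have to be cleared with the matching unset commands; this is the sequence we assume is correct (the reverse of the list above):
ceph osd unset pause
ceph osd unset nodown
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset noout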
Now the MDS daemons are stuck in the replay state and do not transition to active.
Earlier, one of them was in the replay (laggy) state, so we ran: ceph config set mds mds_wipe_sessions true
After that, the MDS went back to the plain replay state, a third MDS started in standby, and the MDS crashes with errors stopped.
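We assume mds_wipe_sessions should not stay enabled permanently. Once the filesystem recovers, is it correct to remove the setting again with:
ceph config rm mds mds_wipe_sessions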
But CephFS is still unavailable.
What else can we do?
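So far we are only watching the recovery with the usual status commands:
ceph -s
ceph fs status
ceph health detail
Would raising the MDS log level, for example with the command below, show whether journal replay is actually making progress?
ceph config set mds debug_mds 10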
The cluster is very large, almost 200 million files.


Best regards


A.Tsivinsky

e-mail: alexey.tsivinsky@xxxxxxxxxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


