Re: Rebuilding/recreating CephFS journal?

On 5/27/16, 3:23 PM, "Gregory Farnum" <gfarnum@xxxxxxxxxx> wrote:

>On Fri, May 27, 2016 at 2:22 PM, Stillwell, Bryan J
><Bryan.Stillwell@xxxxxxxxxxx> wrote:
>> Here's the full 'ceph -s' output:
>>
>> # ceph -s
>>     cluster c7ba6111-e0d6-40e8-b0af-8428e8702df9
>>      health HEALTH_ERR
>>             mds rank 0 is damaged
>>             mds cluster is degraded
>>      monmap e5: 3 mons at
>> {b3=172.24.88.53:6789/0,b4=172.24.88.54:6789/0,lira=172.24.88.20:6789/0}
>>             election epoch 320, quorum 0,1,2 lira,b3,b4
>>       fsmap e287: 0/1/1 up, 1 up:standby, 1 damaged
>>      osdmap e35262: 21 osds: 21 up, 21 in
>>             flags sortbitwise
>>       pgmap v10096597: 480 pgs, 4 pools, 23718 GB data, 5951 kobjects
>>             35758 GB used, 11358 GB / 47116 GB avail
>>                  479 active+clean
>>                    1 active+clean+scrubbing+deep
>
>Yeah, you should just need to mark mds 0 as repaired at this point.

Thanks Greg!  I ran 'ceph mds repaired 0' and it's working again!
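
For the archives, the sequence that resolved it was roughly the
following (the rank is 0 here because 'ceph -s' reported "mds rank 0 is
damaged"; adjust the rank for your own filesystem):

    # Clear the damaged flag on rank 0 so a standby MDS can take over
    ceph mds repaired 0

    # Confirm the MDS came back up and HEALTH_ERR cleared
    ceph -s
    ceph mds stat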

Bryan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
