Re: MDS damaged

Commands that start with "ceph daemon" take mds.<name> rather than a
rank (notes on terminology here:
http://docs.ceph.com/docs/master/cephfs/standby/).  The name is how
you would refer to the daemon from systemd; by default it is often
set to the hostname of the machine where the daemon is running.
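
For example, if your MDS daemon's name is "ceph-2" (just a placeholder
here; substitute whatever name your daemon actually has), the scrub
command would look like:

# ceph daemon mds.ceph-2 scrub_path / repair recursive

Note also that "ceph daemon" talks to the local admin socket, so it
needs to be run on the host where that MDS daemon is running.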

John

On Wed, Oct 25, 2017 at 2:30 PM, Daniel Davidson
<danield@xxxxxxxxxxxxxxxx> wrote:
> I do have a problem with running the commands you mentioned to repair the
> mds:
>
> # ceph daemon mds.0 scrub_path
> admin_socket: exception getting command descriptions: [Errno 2] No such file
> or directory
> admin_socket: exception getting command descriptions: [Errno 2] No such file
> or directory
>
> Any idea why that is not working?
>
> Dan
>
>
>
> On 10/25/2017 06:45 AM, Daniel Davidson wrote:
>>
>> John, thank you so much.  After running the initial rados command you
>> mentioned, it is back up and running.  It did complain about duplicate
>> inodes on a bunch of files which, frankly, are not important, but I will
>> run those repair and scrub commands you mentioned and get it back to a
>> clean state again.
>>
>> Dan
>>
>> On 10/25/2017 03:55 AM, John Spray wrote:
>>>
>>> On Tue, Oct 24, 2017 at 7:14 PM, Daniel Davidson
>>> <danield@xxxxxxxxxxxxxxxx> wrote:
>>>>
>>>> Our ceph system is having a problem.
>>>>
>>>> A few days ago we had a pg that was marked as inconsistent, and today I
>>>> fixed it with a:
>>>>
>>>> #ceph pg repair 1.37c
>>>>
>>>> Then a file was stuck as missing, so I did a:
>>>>
>>>> #ceph pg 1.37c mark_unfound_lost delete
>>>> pg has 1 objects unfound and apparently lost marking
>>>
>>> OK, so "fixed" might be a bit of an overstatement here: while the PG
>>> is considered healthy, from CephFS's point of view what happened was
>>> that some of its metadata just got blown away.
>>>
>>> There are some (most) objects that CephFS can do without (it will just
>>> return EIO when you try to read that file/dir), but there are some that
>>> are essential and will cause a whole MDS rank to be damaged (unstartable)
>>> -- that's what's happened in your case.
>>>
>>>> That fixed the unfound file problem and all the pgs went active+clean.
>>>> A few minutes later though, the FS seemed to pause and the MDS started
>>>> giving errors.
>>>>
>>>> # ceph -w
>>>>      cluster 7bffce86-9d7b-4bdf-a9c9-67670e68ca77
>>>>       health HEALTH_ERR
>>>>              mds rank 0 is damaged
>>>>              mds cluster is degraded
>>>>              noscrub,nodeep-scrub flag(s) set
>>>>       monmap e3: 4 mons at
>>>>
>>>> {ceph-0=172.16.31.1:6789/0,ceph-1=172.16.31.2:6789/0,ceph-2=172.16.31.3:6789/0,ceph-3=172.16.31.4:6789/0}
>>>>              election epoch 652, quorum 0,1,2,3
>>>> ceph-0,ceph-1,ceph-2,ceph-3
>>>>        fsmap e121409: 0/1/1 up, 4 up:standby, 1 damaged
>>>>       osdmap e35220: 32 osds: 32 up, 32 in
>>>>              flags noscrub,nodeep-scrub,sortbitwise,require_jewel_osds
>>>>        pgmap v28398840: 1536 pgs, 2 pools, 795 TB data, 329 Mobjects
>>>>              1595 TB used, 1024 TB / 2619 TB avail
>>>>                  1536 active+clean
>>>>
>>>> Looking into the logs when I try a:
>>>>
>>>> #ceph mds repaired 0
>>>>
>>>> 2017-10-24 12:01:27.354271 mds.0 172.16.31.3:6801/1949050374 75 :
>>>> cluster
>>>> [ERR] dir 607 object missing on disk; some files may be lost
>>>> (~mds0/stray7)
>>>>
>>>> Any ideas as for what to do next, I am stumped.
>>>
>>> So if this is really the only missing object, then it's your lucky
>>> day: you lost a stray directory, which usually contains just deleted
>>> files (it can contain something more important if you've had hardlinks
>>> where the original file was later deleted).
>>>
>>> The MDS goes damaged if it has a reference to a stray directory, but
>>> the directory object isn't found.  OTOH if there is no reference to
>>> the stray directory, it will happily recreate it for you.  So, you can
>>> do this:
>>> rados -p <your metadata pool> rmomapkey 100.00000000 stray7_head
>>>
>>> ...to prompt the MDS to recreate the stray directory (the arguments
>>> there are the magic internal names for ~mds0/stray7).
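>>>
>>> (If you want to sanity-check first, "rados -p <your metadata pool>
>>> listomapkeys 100.00000000" should list the omap keys on that object,
>>> including the stray*_head entries, before you remove anything.)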
>>>
>>> Then, if that was the only damage, your MDS will come up after you run
>>> "ceph mds repaired 0".
>>>
>>> There will still be some inconsistency resulting from removing the
>>> stray dir, and possibly also from the disaster recovery tools that
>>> you've run since, so you'll want to do a "ceph daemon mds.<id>
>>> scrub_path / repair recursive".  This will probably output a bunch of
>>> messages to the cluster log about things that it is repairing. Then
>>> do "ceph daemon mds.<id> flush journal" to flush out the repairs it
>>> has made, and restart the MDS daemon one more time ("ceph mds fail
>>> 0").
>>>
>>> John
>>>
>>>> Dan
>>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


