Re: PGs lost from cephfs data pool, how to determine which files to restore from backup?

On Thu, Sep 8, 2016 at 3:42 PM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Thu, Sep 8, 2016 at 2:06 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>> On Wed, Sep 7, 2016 at 7:44 AM, Michael Sudnick
>> <michael.sudnick@xxxxxxxxx> wrote:
>>> I've had to force recreate some PGs on my cephfs data pool due to some
>>> cascading disk failures in my homelab cluster. Is there a way to easily
>>> determine which files I need to restore from backup? My metadata pool is
>>> completely intact.
>>
>> Assuming you're on Jewel, run a recursive "scrub" on the MDS root via
>> the admin socket, and all the missing files should get logged in the
>> local MDS log.
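>>
>> For example, a minimal invocation (assuming Jewel's scrub_path admin
>> socket command -- "ceph daemon mds.<name> help" will show the exact
>> syntax on your build):
>>
>>   ceph daemon mds.<name> scrub_path / recursive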
>
> This isn't quite accurate -- the forward scrub only checks the first
> object of each file (the one that holds the backtrace), so it won't
> identify files that lost only their later objects in the lost PGs.
>
> Also, it turns out that the MDS doesn't actually log anything in this
> case; the issue is noted in the scrub result for the inode, but that
> result doesn't go anywhere unless you explicitly ran "scrub_path
> /<the file>", in which case the detailed results come back on the
> command line.
>
> Anyway -- currently there isn't an efficient tool for answering the
> question "which files have objects in this PG?".  The only way to
> work it out is to scan through every possible object of every file
> in the system.  You can sort of do that with a script (a rough
> sketch follows), but it'll be very slow if you have to call out to
> "ceph osd map" for each object ID.  It may well be faster to do a
> full restore from your backup.
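>
> For what it's worth, here is a rough sketch of that script (hedged:
> it assumes the default 4 MiB object size, default striping, a data
> pool named "cephfs_data", and a mount at /mnt/cephfs -- all
> placeholders, adjust for your cluster):
>
>   #!/usr/bin/env python
>   # Brute force: map every object of every file and print files that
>   # have at least one object in a lost PG. Slow, as noted above.
>   import os, re, subprocess
>
>   LOST_PGS = {"1.ab", "1.cd"}       # the PGs you had to recreate
>   POOL = "cephfs_data"              # your cephfs data pool
>   OBJECT_SIZE = 4 * 1024 * 1024     # default CephFS object size
>
>   def pg_of(obj):
>       # "ceph osd map" prints "... -> pg 1.deadbeef (1.ab) -> up ..."
>       out = subprocess.check_output(["ceph", "osd", "map", POOL, obj])
>       return re.search(r"-> pg \S+ \((\S+)\)", out.decode()).group(1)
>
>   for root, dirs, files in os.walk("/mnt/cephfs"):
>       for name in files:
>           path = os.path.join(root, name)
>           st = os.stat(path)
>           # data objects are named <inode hex>.<index, 8 hex digits>
>           nobj = max(1, (st.st_size + OBJECT_SIZE - 1) // OBJECT_SIZE)
>           for i in range(nobj):
>               if pg_of("%x.%08x" % (st.st_ino, i)) in LOST_PGS:
>                   print(path)
>                   break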

Follow-up... we've talked about writing this tool before, and it felt
like the time had come, so this will go into Kraken:
https://github.com/ceph/ceph/pull/11026
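
For the curious, invocation looks roughly like this (a hedged sketch
of the new cephfs-data-scan "pg_files" command; the path and PG IDs
here are placeholders):

  cephfs-data-scan pg_files /home/bob 1.ab 1.cd

It walks the tree under the given path and prints the files that may
have had objects in those PGs.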

John

>
>> (I'm surprised at this point to discover we don't seem to have any
>> documentation about how scrubbing works. It's a regular admin socket
>> command and "ceph daemon mds.<name> help" should get you going where
>> you need.)
>> -Greg
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


