Re: PGs lost from cephfs data pool, how to determine which files to restore from backup?

Hi Greg...



>> I've had to force recreate some PGs on my cephfs data pool due to some
>> cascading disk failures in my homelab cluster. Is there a way to easily
>> determine which files I need to restore from backup? My metadata pool is
>> completely intact.
>
> Assuming you're on Jewel, run a recursive "scrub" on the MDS root via
> the admin socket, and all the missing files should get logged in the
> local MDS log.
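
For reference, I'm assuming that maps to something like the commands
below (the mds name is a placeholder, and the journal flush first is
just my guess at good practice, so please correct me if the syntax is
off):

    # flush the MDS journal so the scrub sees up-to-date metadata
    ceph daemon mds.<name> flush journal

    # recursive forward scrub starting at the filesystem root
    ceph daemon mds.<name> scrub_path / recursive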

The data file is striped into different objects (according to the selected layout), which are then stored in different PGs and OSDs.

So, if a few PGs are lost, it means that some files may be totally lost (if all of their objects were stored in the lost PGs) or only partially lost (if just some of their objects were stored in the lost PGs).

Does this method properly account for the second case (partially lost files)?
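
In the meantime, the way I've been checking individual files by hand
looks roughly like this (the pool name cephfs_data and the sample inode
are from my setup, so adjust accordingly):

    # get the file's inode number and convert it to hex
    printf '%x\n' $(stat -c %i /cephfs/some/file)    # e.g. 10000000abc

    # data objects are named <inode-hex>.<8-hex-digit stripe index>;
    # stat each one, a stripe that sat in a lost PG comes back ENOENT
    rados -p cephfs_data stat 10000000abc.00000000
    rados -p cephfs_data stat 10000000abc.00000001

    # map an object to its PG and OSDs to confirm it was in a lost PG
    ceph osd map cephfs_data 10000000abc.00000000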



> (I'm surprised at this point to discover we don't seem to have any
> documentation about how scrubbing works. It's a regular admin socket
> command and "ceph daemon mds.<name> help" should get you going where
> you need.)

Indeed. I only found some references to it in John's CephFS update talk from February 2016: http://www.slideshare.net/JohnSpray1/cephfs-update-february-2016
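
For anyone else searching the archives: the scrub commands do show up
in the admin socket help, e.g.

    # list the MDS admin socket commands and filter for the scrub ones
    ceph daemon mds.<name> help | grep -i scrub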

Cheers
Goncalo
