Understanding CephFS / how to bring the fs in line with the OSD pools?

Hello Cephers,

 

I’m trying to understand the CephFS design, especially how the file-system view reflects the underlying OSD pools, in order to perform a backup/restore operation (VM context).
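
For reference, this is how I map the fs to its pools (default pool names assumed):

    # each filesystem with its metadata pool and data pool(s)
    $ ceph fs ls
    # all pools in the cluster, for comparison
    $ ceph osd lspools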

 

I’m able to back up and restore the OSDs successfully, but I have some issues with the filesystem layer.

When I create files after the backup is taken, I still see these files after the OSD restoration, even though they are not usable (since the OSDs have no knowledge of them).
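
As far as I understand, a file’s data lives in the data pool as objects named after the inode number in hex plus a chunk index, so a check like this (the path, inode number and pool name "cephfs_data" are just examples) confirms the data objects are gone:

    # get the inode number of a suspect file
    $ ls -i /mnt/cephfs/somefile
    # convert the inode number to hex, e.g. 1099511627776 -> 10000000000
    $ printf '%x\n' 1099511627776
    # stat the first object of that file; "No such file or directory"
    # means the restored data pool has no objects for it
    $ rados -p cephfs_data stat 10000000000.00000000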

 

I tried to shut down / start the fs before/after the restoration, but no change.
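
By shutdown/start I mean something along these lines (fs name "cephfs"):

    # take the fs down so the MDS daemons stop serving it
    $ ceph fs set cephfs down true
    # ... perform the pool restoration ...
    # bring it back up
    $ ceph fs set cephfs down false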

Likewise, no luck with a journal export/import.
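
That attempt looked roughly like this (single active MDS, so rank 0):

    # dump the MDS journal to a file before the restore
    $ cephfs-journal-tool journal export backup.bin
    # ... restore the pools ...
    # write the saved journal back afterwards
    $ cephfs-journal-tool journal import backup.bin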

 

I had a look at http://docs.ceph.com/docs/mimic/cephfs/disaster-recovery/ but, since I do not really know what’s beneath all those tools/procedures, I don’t know where to start.
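
If I read that page correctly, the core metadata-recovery sequence is roughly this (taken from the document; fs/pool names to be adapted):

    # always back up the journal first
    $ cephfs-journal-tool journal export backup.bin
    # replay recoverable metadata events from the journal into the backing store
    $ cephfs-journal-tool event recover_dentries summary
    # then truncate the (possibly damaged) journal
    $ cephfs-journal-tool journal reset
    # wipe stale client sessions
    $ cephfs-table-tool all reset session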

 

I know there’s a really useful feature called CephFS snapshots (.snap), but for various reasons I cannot use it here.
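
(For completeness, my understanding of how .snap would be used; on Mimic snapshots first have to be enabled per fs:)

    # snapshots are disabled by default on Mimic
    $ ceph fs set cephfs allow_new_snaps true
    # create / remove a snapshot from any directory of the mount
    $ mkdir /mnt/cephfs/.snap/before-restore
    $ rmdir /mnt/cephfs/.snap/before-restore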

 

Would someone be kind enough to guide me?

 

My goal is to restore both the data and the CephFS layer.

The OSD restoration seems OK, but how do I make CephFS consistent with the restored OSD pools?

Are there some cache files somewhere on the MDS to refresh? (I did not find any.)
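
In case it helps: the closest things I found are flushing the journal via the admin socket and restarting the daemon (MDS id "a" is just an example; run on the MDS host):

    # persist the in-memory journal of the active MDS
    $ ceph daemon mds.a flush journal
    # or restart the daemon so it drops its in-memory cache and replays
    $ systemctl restart ceph-mds@a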

Or maybe something is wrong with the metadata pool backing my CephFS filesystem, but I don’t really know how to check.
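
These are the kinds of checks I can run, if someone can tell me what a healthy state should look like (metadata pool name "cephfs_metadata" assumed):

    # overall fs and MDS state
    $ ceph fs status
    # does the journal look intact?
    $ cephfs-journal-tool journal inspect
    # raw objects in the metadata pool
    $ rados -p cephfs_metadata ls | head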

 

I’m using Ceph Mimic (ceph version 13.2.5 (cbff874f9007f1869bfd3821b7e33b2a6ffd4988) mimic (stable)) with 3 MDSs (1 active) and BlueStore.

 

Thank you for your precious help!

 

Vincent

