CephFS snapshots

[re-adding ceph-users]

Yes, it can corrupt the metadata and require the use of filesystem repair tools. I really don't recommend using snapshots except on toy clusters.
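
For context, a CephFS snapshot is taken by creating a subdirectory inside the hidden ".snap" directory that CephFS exposes under every directory, and deleted by removing it again. A minimal sketch of that mechanism in Python (the mount point is a hypothetical example; assumes a kernel or FUSE CephFS mount with snapshots enabled):

    import os

    # CephFS exposes snapshots through a hidden ".snap" directory inside
    # every directory of the filesystem. mkdir there asks the MDS to
    # snapshot that subtree; rmdir deletes the snapshot.
    MOUNT = "/mnt/cephfs/mydata"  # hypothetical CephFS mount point

    def take_snapshot(name: str) -> None:
        # Creating .snap/<name> triggers the snapshot on the MDS.
        os.mkdir(os.path.join(MOUNT, ".snap", name))

    def remove_snapshot(name: str) -> None:
        # Removing .snap/<name> deletes the snapshot.
        os.rmdir(os.path.join(MOUNT, ".snap", name))

    if __name__ == "__main__":
        take_snapshot("before-upgrade")
        print(os.listdir(os.path.join(MOUNT, ".snap")))  # lists snapshots
        remove_snapshot("before-upgrade")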

On Wednesday, June 22, 2016, Brady Deetz <bdeetz@xxxxxxxxx> wrote:

Snapshots would be excellent for a number of fairly obvious reasons. Are any of the known issues with snapshots ones that can result in the loss of non-snapshot data, or of an entire cluster?

On Jun 22, 2016 2:16 PM, "Gregory Farnum" <gfarnum@xxxxxxxxxx> wrote:
On Wednesday, June 22, 2016, Kenneth Waegeman <kenneth.waegeman@xxxxxxxx> wrote:
Hi all,

In Jewel, CephFS snapshots are still experimental. Does anyone have a clue when they will become stable, or how experimental they are?

We're not sure yet. It will probably follow stable multi-MDS; we're still thinking about redoing some of the core snapshot pieces. :/

It's still pretty experimental in Jewel. Shen had been working on this, and I think it often works, but it tends to fall apart when other components fail (e.g., restarting an MDS while snapshot work is happening).

s/Shen/Zheng/
Silly autocorrect!


 
-Greg 
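
One practical note on "experimental": in the Jewel era, snapshots were also disabled by default, and a cluster-wide flag had to be flipped before any ".snap" mkdir would succeed. A sketch of enabling them from Python, with the flag spelling (allow_new_snaps plus the confirmation switch) recalled from the Jewel-era CLI docs, so double-check it against the release you are running:

    import subprocess

    # Snapshots are off by default in Jewel; this flips the cluster-wide
    # flag. The exact CLI spelling below is from memory of the Jewel-era
    # docs -- verify it against your version's documentation.
    def enable_cephfs_snapshots() -> None:
        subprocess.run(
            ["ceph", "mds", "set", "allow_new_snaps", "true",
             "--yes-i-really-mean-it"],
            check=True,
        )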

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
