Re: CephFS Snapshot questions

John,

Thanks for your answers. I have some clarifications on my questions; see my replies below, inline.

Bruce

 

From: John Spray <jspray@xxxxxxxxxx>
Date: Thursday, June 8, 2017 at 1:45 AM
To: "McFarland, Bruce" <Bruce.McFarland@xxxxxxxxxxxx>
Cc: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] CephFS Snapshot questions

 

On Wed, Jun 7, 2017 at 11:46 PM, McFarland, Bruce wrote:

I have a couple of CephFS snapshot questions:

 

-          Is there any functionality similar to rbd clone/flatten such that the snapshot can be made writable? Or is that as simple as copying the .snap/<dirname> to another cluster?

 

No, there's no cloning.  You don't need another cluster though -- you can "cp -r" your snapshot anywhere on any filesystem, and you'll end up with fresh files that you can write to.
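To make that concrete, here is a minimal Python sketch of the snapshot-then-copy workflow. The mount point, directory names, and snapshot name are all hypothetical, and it assumes snapshots are enabled on the filesystem:

#!/usr/bin/env python3
# Minimal sketch: take a CephFS snapshot, then copy it out as writable files.
# All paths are hypothetical; /mnt/cephfs is assumed to be a CephFS mount
# with snapshots enabled.
import os
import shutil

src_dir = "/mnt/cephfs/projects"                # directory to snapshot
snap_dir = os.path.join(src_dir, ".snap", "before-change")
dest_dir = "/mnt/cephfs/projects-copy"          # any filesystem will do

# Creating a directory under .snap takes the snapshot.
os.mkdir(snap_dir)

# The snapshot itself is read-only; copying it out (the "cp -r" above)
# yields fresh, independently writable files.
shutil.copytree(snap_dir, dest_dir)

# Writes to the copy never touch the snapshot.
with open(os.path.join(dest_dir, "notes.txt"), "w") as f:
    f.write("this copy is writable\n")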

 

 

-          If the first object write since the snapid was created is a user error, how is that object recovered if it isn’t added to the snapid until its first write after snapid creation?

 

Don't understand the question at all.  "user error"?

 

I think I’ve answered this for myself. The case is a user whose first write to an object after the snap is created is an error they want to “fix” by restoring the object from its clone. When the user writes the “error” to the object, the original contents are copied into the snap as part of that same write (copy-on-write). The object can then be restored from its clone, even though the clone isn’t populated with that object until that first post-snapshot write. A sketch of this recovery follows.
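A minimal sketch of that recovery path (the paths and snapshot name are hypothetical; it assumes the snapshot existed before the bad write):

#!/usr/bin/env python3
# Sketch: undo a bad post-snapshot write by copying the file back from
# the snapshot. Paths and snapshot name are hypothetical.
import shutil

live = "/mnt/cephfs/data/config.yaml"
snap = "/mnt/cephfs/data/.snap/daily/config.yaml"

# The first write after snapshot creation triggered the copy-on-write,
# so the pre-error contents are preserved in the snapshot and can be
# copied back over the live file.
shutil.copyfile(snap, live)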

 

-          If I want to clone the .snap/<dirname>/ and not all objects have been written since .snap/<dirname>/ was created, how do I know whether, or how do I get, all objects into the snap if I wanted to move the snap to another cluster?

 

There's no concept of moving a snapshot between clusters.  If you're just talking about doing a "cp -r" of the snapshot, then the MDS should do the right thing in terms of blocking your reads on files that have dirty data in client caches -- when we make a snapshot, clients doing buffered writes are asked to flush those buffers.

 

There are two cases I’m wondering about here that I didn’t accurately describe: (1) data migration between clusters, which might not be possible, and (2) storing clones on a second cluster.

1.      Is it possible to snap a directory tree on its source cluster and then copy it to a new/different destination cluster? Or would that be prohibited because the snap’s MDS is on the source cluster? I can see that being useful for migrating data/users between clusters, but it might not be possible.

2.      I would expect this to be possible: a snap is created, it’s compressed into a tarball, and that tarball is stored on a second cluster for any future DR, at which point it’s copied back to the source cluster and extracted, restoring the directory tree to its state at the time of snap creation (sketched below).
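A minimal sketch of case 2 (all paths are hypothetical; /mnt/backup is assumed to be storage backed by the second cluster):

#!/usr/bin/env python3
# Sketch: archive a CephFS snapshot to a tarball for off-cluster storage,
# then extract it later for DR. All paths are hypothetical.
import tarfile

snap = "/mnt/cephfs/projects/.snap/dr-snap"
archive = "/mnt/backup/projects-dr-snap.tar.gz"  # storage on the second cluster

# The snapshot is a consistent, read-only view, so tar sees a stable tree.
with tarfile.open(archive, "w:gz") as tar:
    tar.add(snap, arcname="projects")

# Later, during DR, extract the tree back onto the source cluster.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall("/mnt/cephfs/restored")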

 

John

 

 

 

I might not be making complete sense yet and am in the process of testing to see how CephFS snapshots behave.

 

 

 

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
