CephFS snapshot preferred behaviors

All,
I spent several days last week examining our current snapshot
implementation and thinking about how it could be improved. As part of
that ongoing effort, I'd love to know what user expectations are about
behavior.
(I'm going to open a ceph-devel thread on the implementation details
shortly, and I've written a doc with some info; see
https://github.com/ceph/ceph/pull/10436.)

Some specific questions:
* Right now, we allow users to rename snapshots. (This is newish, so
you may not be aware of it if you've been using snapshots for a
while.) Is that an important ability to preserve?
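
For context, here's a minimal sketch of what that looks like from a
client, assuming a CephFS mount at /mnt/cephfs (the mount point, paths,
and snapshot names are just placeholders):

    import os

    d = "/mnt/cephfs/1/2/foo"            # any directory on the mount
    # Creating a snapshot is just a mkdir inside the special .snap dir.
    os.mkdir(os.path.join(d, ".snap", "monday"))
    # The newish ability in question: renaming the snapshot is a rename
    # within .snap.
    os.rename(os.path.join(d, ".snap", "monday"),
              os.path.join(d, ".snap", "before-upgrade"))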

* If you create a snapshot at "/1/2/foo", you can't delete "/1/2/foo"
without removing the snapshot. Is that a good interface? Would you
prefer we let you delete foo, and just link the snapshot in elsewhere?
If so, what should we do, create a "/1/2/.snap/foo" link to it? Link
it in from the root? Something else?
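
To make the current restriction concrete, here's a sketch of the
sequence that is refused today (again assuming a mount at /mnt/cephfs;
names are placeholders):

    import os

    foo = "/mnt/cephfs/1/2/foo"
    os.mkdir(os.path.join(foo, ".snap", "snap1"))   # snapshot of foo
    # With the snapshot still in place, deleting foo is refused; you
    # have to rmdir foo/.snap/snap1 first.
    os.rmdir(foo)
    # The question above is whether we should allow the removal anyway
    # and re-link the snapshot somewhere else instead.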

* If you create a hard link at "/1/2/foo/bar" pointing at "/1/3/bar"
and then take a snapshot at "/1/2/foo", it *will not* capture the file
data in bar. Is that okay? Doing otherwise is *exceedingly* difficult.
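
A sketch of that scenario, with the same placeholder mount and names:

    import os

    root = "/mnt/cephfs/1"
    # bar already exists at /1/3/bar; hard-link it under /1/2/foo.
    os.link(os.path.join(root, "3", "bar"),
            os.path.join(root, "2", "foo", "bar"))
    # Snapshot /1/2/foo. As described above, this snapshot will *not*
    # capture the file data in bar, only the link.
    os.mkdir(os.path.join(root, "2", "foo", ".snap", "snap1"))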

* Creating snapshots is really fast right now: you issue a mkdir, the
MDS commits one log entry to disk, and we return that it's done. Part
of that is because we asynchronously notify clients about the new
snapshot, and they and the MDS asynchronously flush out data for the
snapshot. Is that good? There's a trade-off with durability (buffered
data which you might expect to be in the snapshot gets lost if a
client crashes, despite the snapshot "completing") and with external
communication channels (multiple clients could write data they want in
the snapshot, take the snapshot, and then a client could write data it
*doesn't* want in the snapshot quickly enough that it still gets
included as part of the snap). Would you rather snapshot creation be
slower but force a synchronous write-out of all data to disk?
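
If the asynchronous behavior stays as-is, one client-side mitigation is
to flush the data you care about before taking the snapshot. A rough
sketch of that pattern (the mount point, names, and helper function are
illustrative only, and this is best-effort rather than a guarantee):

    import os

    def snapshot_after_flush(dirpath, snapname, open_fds):
        # Flush any buffered writes we definitely want captured...
        for fd in open_fds:
            os.fsync(fd)
        # ...then create the snapshot via the usual mkdir in .snap.
        # With the current async design, data written after this call
        # returns may or may not end up in the snap.
        os.mkdir(os.path.join(dirpath, ".snap", snapname))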

Thanks!
-Greg