I know that snapshots on CephFS are experimental and that a known issue exists with multiple filesystems on one pool, but I was surprised by the result of the following. I took a snapshot of a directory in a pool with a single fs on our properly configured Luminous cluster. I found that the files in the .snap directory that I had just updated (in order to test a restore) were either unreadable when opened with an editor like vi, or were identical to the current version of the file when copied back, which makes the whole snapshot operation unusable for us.

I had assumed that taking a snapshot would be very straightforward, so perhaps I am doing something wrong, or is this behavior to be expected?

Thanks.

Paul Kunicki
Systems Manager
SproutLoud Media Networks, LLC.
954-476-6211 ext. 144
pkunicki@xxxxxxxxxxxxxx
www.sproutloud.com

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
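For reference, this is roughly the sequence I followed (the mount point, directory, and snapshot name below are illustrative, not my actual paths; this assumes snapshots have been enabled on the filesystem and the client mounts CephFS at /mnt/cephfs):

```shell
# Enable snapshots on the filesystem (required on Luminous,
# where snapshots are still flagged experimental).
ceph fs set cephfs allow_new_snaps true

# Take a snapshot of a directory: creating a subdirectory
# under the hidden .snap directory captures the state.
mkdir /mnt/cephfs/mydir/.snap/before-change

# Modify a file in the live tree after the snapshot.
echo "new content" >> /mnt/cephfs/mydir/somefile.txt

# Expected: the snapshot copy still shows the pre-change
# content and can be copied back to restore it.
cat /mnt/cephfs/mydir/.snap/before-change/somefile.txt
cp /mnt/cephfs/mydir/.snap/before-change/somefile.txt \
   /mnt/cephfs/mydir/somefile.txt

# Snapshots are removed by deleting the .snap subdirectory.
rmdir /mnt/cephfs/mydir/.snap/before-change
```

What I observed instead was that the file under .snap either could not be read or matched the already-modified live copy.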