Re: Cephfs Snapshots - Usable in Single FS per Pool Scenario?

On Tue, Jan 30, 2018 at 12:50 AM, Paul Kunicki <pkunicki@xxxxxxxxxxxxxx> wrote:
> I know that snapshots on CephFS are experimental and that a known
> issue exists with multiple filesystems on one pool, but I was surprised
> by the result of the following:
>
> I attempted to take a snapshot of a directory in a pool with a single
> fs on our properly configured Luminous cluster. The files in the .snap
> directory that I had just updated in order to test a restore were
> either unreadable when opened with an editor like vi, or identical to
> the current version of the file when copied back, making the whole
> snapshot operation unusable.

Can you be more specific: what does "unreadable" mean?  An IO error?
A blank file?

A step-by-step reproducer would be helpful, doing `cat`s and `echo`s
to show what you're putting in and what's coming out.
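For example, something along these lines (a rough sketch; the mount point
and names below are placeholders, and it assumes snapshots have already
been enabled on the filesystem):

  cd /mnt/cephfs/testdir       # hypothetical directory on the CephFS mount
  echo "version 1" > testfile  # write the original contents
  mkdir .snap/snap1            # creating a directory under .snap takes a snapshot
  echo "version 2" > testfile  # modify the live file after the snapshot
  cat testfile                 # expected: version 2
  cat .snap/snap1/testfile     # expected: version 1 (the snapshotted contents)

together with the actual output you get at each step.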

John

>
> I considered the whole method of taking a snapshot to be very
> straightforward, but perhaps I am doing something wrong. Or is this
> behavior to be expected?
>
> Thanks.
>
>
>
>
> Paul Kunicki
> Systems Manager
> SproutLoud Media Networks, LLC.
> 954-476-6211 ext. 144
> pkunicki@xxxxxxxxxxxxxx
> www.sproutloud.com
>
>
>
>
> The information contained in this communication is intended solely for
> the use of the individual or entity to whom it is addressed and for
> others authorized to receive it. It may contain confidential or
> legally privileged information. If you are not the intended recipient,
> you are hereby notified that any disclosure, copying, distribution, or
> taking any action in reliance on these contents is strictly prohibited
> and may be unlawful. In the event the recipient or recipients of this
> communication are under a non-disclosure agreement, any and all
> information discussed during phone calls and online presentations fall
> under the agreements signed by both parties. If you received this
> communication in error, please notify us immediately by responding to
> this e-mail.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



