Re: Cephfs Snapshots - Usable in Single FS per Pool Scenario?

Thanks for the fast reply. I started recording a session where I
unmounted and re-mounted the file system and could not reproduce the
issue. I am going to do some more testing and report back any relevant
findings. For now, here are some details about our setup, where files
contained in snapshots were either empty or contained non-printable
content once their original versions were modified.
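
For reference, this is roughly the sequence we have been using to
exercise snapshots; the directory and file names below are illustrative,
not our actual production paths:

cd /mnt/cephfs/testdir
echo "version 1" > testfile
mkdir .snap/before-change          # creating a snapshot is just a mkdir in .snap
echo "version 2" > testfile
cat .snap/before-change/testfile   # expected "version 1"; this is where we
                                   # sometimes see empty or garbled contents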

[root@sl-util mnt]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@sl-util mnt]# uname -a
Linux sl-util.sproutloud.com 3.10.0-693.2.2.el7.x86_64 #1 SMP Tue Sep 12 22:26:13 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

[root@sl-util mnt]# rpm -qa | grep ceph
centos-release-ceph-luminous-1.0-1.el7.centos.noarch
libcephfs2-12.2.2-0.el7.x86_64
ceph-common-12.2.2-0.el7.x86_64
python-cephfs-12.2.2-0.el7.x86_64

[root@sl-util mnt]# ceph -v
ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)

[root@sl-util mnt]# ceph health
HEALTH_OK

[root@sl-util mnt]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]

[root@sl-util mnt]# ceph df detail
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED     OBJECTS
    63487G     63477G       11015M          0.02       19456
POOLS:
    NAME                ID     QUOTA OBJECTS     QUOTA BYTES     USED       %USED     MAX AVAIL     OBJECTS     DIRTY     READ      WRITE     RAW USED
    cephfs_data         5      N/A               N/A             2858M      0         29680G        15733       15733     12452     37830     5716M
    cephfs_metadata     6      N/A               N/A             41865k     0         29680G        3723        3723      140       17799     83731k


[root@sl-util mnt]# mount -t ceph prod-ceph-mon-1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/client.admin.secret
[root@sl-util mnt]# mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=8118464k,nr_inodes=2029616,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=9892)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
/dev/sdl1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/centos-home on /home type xfs (rw,relatime,attr2,inode64,noquota)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
tmpfs on /run/user/10065 type tmpfs (rw,nosuid,nodev,relatime,size=1626268k,mode=700,uid=10065,gid=10027)
tmpfs on /run/user/10058 type tmpfs (rw,nosuid,nodev,relatime,size=1626268k,mode=700,uid=10058,gid=10027)
10.0.100.54:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)

[root@sl-util mnt]# ceph fs set cephfs allow_new_snaps true --yes-i-really-mean-it
enabled new snapshots
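
With allow_new_snaps enabled, snapshots are created and removed with a
plain mkdir/rmdir inside the hidden .snap directory of any directory in
the filesystem. A minimal example (the directory and snapshot names here
are arbitrary):

mkdir /mnt/cephfs/somedir/.snap/mysnap    # take a snapshot of somedir
ls /mnt/cephfs/somedir/.snap/             # list existing snapshots
rmdir /mnt/cephfs/somedir/.snap/mysnap    # delete the snapshot again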




Paul Kunicki
Systems Manager
SproutLoud Media Networks, LLC.
954-476-6211 ext. 144
pkunicki@xxxxxxxxxxxxxx
www.sproutloud.com

 •   •   •



The information contained in this communication is intended solely for
the use of the individual or entity to whom it is addressed and for
others authorized to receive it. It may contain confidential or
legally privileged information. If you are not the intended recipient,
you are hereby notified that any disclosure, copying, distribution, or
taking any action in reliance on these contents is strictly prohibited
and may be unlawful. In the event the recipient or recipients of this
communication are under a non-disclosure agreement, any and all
information discussed during phone calls and online presentations fall
under the agreements signed by both parties. If you received this
communication in error, please notify us immediately by responding to
this e-mail.


On Tue, Jan 30, 2018 at 5:23 AM, John Spray <jspray@xxxxxxxxxx> wrote:
> On Tue, Jan 30, 2018 at 10:22 AM, John Spray <jspray@xxxxxxxxxx> wrote:
>> On Tue, Jan 30, 2018 at 12:50 AM, Paul Kunicki <pkunicki@xxxxxxxxxxxxxx> wrote:
>>> I know that snapshots on Cephfs are experimental and that a known
>>> issue exists with multiple filesystems on one pool, but I was surprised
>>> at the result of the following:
>>>
>>> I attempted to take a snapshot of a directory in a pool with a single
>>> fs on our properly configured Luminous cluster. I found that the files
>>> in the .snap directory that I had just updated in order to test a
>>> restore were unreadable if opened with an editor like vi, or were
>>> simply identical to the current version of the file when copied back,
>>> making the whole snapshot operation unusable.
>>
>> Can you be more specific: what does "unreadable" mean?  An IO error?
>> A blank file?
>>
>> A step-by-step reproducer would be helpful, doing `cat`s and `echo`s
>> to show what you're putting in and what's coming out.
>
> Oh, and please also specify what client you're using.  If you're using
> an old kernel client (e.g. the stock kernel of many LTS distros...)
> then that would be something to change.
>
> John
>
>> John
>>
>>>
>>> I considered the whole method of taking a snapshot to be very
>>> straightforward, but perhaps I am doing something wrong, or is this
>>> behavior to be expected?
>>>
>>> Thanks.
>>>
>>>
>>>
>>>
>>> Paul Kunicki
>>> Systems Manager
>>> SproutLoud Media Networks, LLC.
>>> 954-476-6211 ext. 144
>>> pkunicki@xxxxxxxxxxxxxx
>>> www.sproutloud.com
>>>
>>>  •   •   •
>>>
>>>
>>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



