Re: Limitations of ceph fs snapshot mirror for read-only folders?

OK, reconstructed with another example:
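(For reference, roughly how the test tree was set up. The exact commands are my reconstruction from the stat output below, replayed here in a scratch directory; on the real system the base was /data/cephfs-2/test, and the snapshots "initial"/"second" were taken via mkdir under .snap/.)

```shell
# Reconstruction sketch (modes and layout inferred from the stat output
# below; paths are a stand-in for /data/cephfs-2/test).
base=$(mktemp -d)
mkdir -p "$base/x2/y2"
touch "$base/x2/y2/z"
# Lock down the innermost directory first, then the parent: once x2
# loses its execute bit, an unprivileged user can no longer descend.
chmod 2440 "$base/x2/y2"
chmod 2440 "$base/x2"
ls -ld "$base/x2"   # dr--r-S---: setgid, read-only, no write bit anywhere
# On the real CephFS mount, the snapshot was then taken with e.g.:
#   mkdir /data/cephfs-2/test/.snap/second
```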

-- source file system --

0|0[root@gw-1 ~]# find /data/cephfs-2/test/x2 | xargs stat
 File: /data/cephfs-2/test/x2
 Size: 1               Blocks: 0          IO Block: 65536  directory
Device: 2ch/44d Inode: 1099840816759  Links: 3
Access: (2440/dr--r-S---)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2022-01-27 16:24:15.627783470 +0100
Modify: 2022-01-27 16:24:22.001750514 +0100
Change: 2022-01-27 16:24:51.294599055 +0100
Birth: -
 File: /data/cephfs-2/test/x2/y2
 Size: 1               Blocks: 0          IO Block: 65536  directory
Device: 2ch/44d Inode: 1099840816760  Links: 2
Access: (2440/dr--r-S---)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2022-01-27 16:24:22.001750514 +0100
Modify: 2022-01-27 16:24:27.712720985 +0100
Change: 2022-01-27 16:24:51.307598988 +0100
Birth: -
 File: /data/cephfs-2/test/x2/y2/z
 Size: 0               Blocks: 0          IO Block: 4194304 regular empty file
Device: 2ch/44d Inode: 1099840816761  Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2022-01-27 16:24:27.713720980 +0100
Modify: 2022-01-27 16:24:27.713720980 +0100
Change: 2022-01-27 16:24:27.713720980 +0100
Birth: -

-- resulting remote file system --

0|0[root@gw-1 ~]# find /data/cephfs-3/test/x2 | xargs stat
 File: /data/cephfs-3/test/x2
 Size: 0               Blocks: 0          IO Block: 65536  directory
Device: 2dh/45d Inode: 1099521812568  Links: 2
Access: (2440/dr--r-S---)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2022-01-27 16:24:15.627783470 +0100
Modify: 2022-01-27 16:24:22.001750514 +0100
Change: 2022-01-27 16:25:53.638392179 +0100
Birth: -

-- log excerpt --

debug 2022-01-27T15:25:42.476+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
register_directory: dir_root=/test
debug 2022-01-27T15:25:42.476+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
try_lock_directory: dir_root=/test
debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
try_lock_directory: dir_root=/test locked
debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700  5
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
register_directory: dir_root=/test registered with
replayer=0x56173a70a680
debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
sync_snaps: dir_root=/test
debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_sync_snaps: dir_root=/test
debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: dir_root=/test, snap_dir=/test/.snap, is_remote=0
debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: entry=.
debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: entry=..
debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: entry=initial
debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: entry=second
debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: local snap_map={1384=initial,1385=second}
debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: dir_root=/test, snap_dir=/test/.snap, is_remote=1
debug 2022-01-27T15:25:42.479+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: entry=.
debug 2022-01-27T15:25:42.479+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: entry=..
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: entry=initial
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: snap_path=/test/.snap/initial,
metadata={primary_snap_id=1384}
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
build_snap_map: remote snap_map={1384=initial}
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700  5
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
propagate_snap_deletes: dir_root=/test, deleted snapshots=
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
propagate_snap_renames: dir_root=/test, renamed snapshots=
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700  5
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_sync_snaps: last snap-id transferred=1384
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_sync_snaps: synchronizing from snap-id=1385
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
synchronize: dir_root=/test, current=second,1385
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
synchronize: prev= initial,1384
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
synchronize: dirty_snap_id: 1384 vs (1385,1384)
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700  5
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
synchronize: match -- using incremental sync with local scan
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: dir_root=/test, current=second,1385
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: incremental sync check from prev= initial,1384
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
pre_sync_check_and_open_handles: dir_root=/test, current=second,1385
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
pre_sync_check_and_open_handles: prev= initial,1384
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
open_dir: dir_path=/test/.snap/second
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
open_dir: expected snapshot id=1385
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
open_dir: dir_path=/test/.snap/initial
debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
open_dir: expected snapshot id=1384
debug 2022-01-27T15:25:42.481+0000 7fe0ffbf0700  5
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
pre_sync_check_and_open_handles: using local (previous) snapshot for
incremental transfer
debug 2022-01-27T15:25:42.485+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: 1 entries in stack
debug 2022-01-27T15:25:42.485+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: top of stack path=.
debug 2022-01-27T15:25:42.485+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
propagate_deleted_entries: dir_root=/test, epath=.
debug 2022-01-27T15:25:42.487+0000 7fe0ffbf0700  5
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
propagate_deleted_entries: mode matches for entry=x
debug 2022-01-27T15:25:42.487+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
propagate_deleted_entries: reached EOD
debug 2022-01-27T15:25:42.487+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
remote_mkdir: remote epath=./x2
debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: 2 entries in stack
debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: top of stack path=./x2
debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
propagate_deleted_entries: dir_root=/test, epath=./x2
debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700  5
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
propagate_deleted_entries: epath=./x2 missing in previous-snap/remote
dir-root
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
remote_mkdir: remote epath=./x2/y2
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 -1
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
remote_mkdir: failed to create remote directory=./x2/y2: (13)
Permission denied
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: closing local directory=./x2
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_synchronize: closing local directory=.
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
post_sync_close_handles
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 -1
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
do_sync_snaps: failed to synchronize dir_root=/test, snapshot=second
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 -1
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
sync_snaps: failed to sync snapshots for dir_root=/test
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
unregister_directory: dir_root=/test
debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
unlock_directory: dir_root=/test
debug 2022-01-27T15:25:42.515+0000 7fe0ffbf0700 10
cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
unlock_directory: dir_root=/test unlocked
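The failure itself looks like plain POSIX semantics: after syncing ./x2 (which lands on the remote with the same read-only 2440 mode), remote_mkdir has to create ./x2/y2 underneath it, and a directory without the write bit refuses new entries. A minimal local illustration (my own sketch, not cephfs-mirror code; note that root on a local fs bypasses the check via CAP_DAC_OVERRIDE, whereas the CephFS client evidently enforces it for the mirror daemon, hence the "(13) Permission denied" in the log):

```shell
# mkdir below a 02440 directory fails with EACCES for an unprivileged user.
base=$(mktemp -d)
mkdir "$base/x2"
chmod 2440 "$base/x2"                  # same dr--r-S--- mode as above
if mkdir "$base/x2/y2" 2>/dev/null; then
    echo "mkdir succeeded (running as root; local fs does not enforce this)"
else
    echo "mkdir: Permission denied, matching errno 13 in the mirror log"
fi
```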


On Thu, Jan 27, 2022 at 3:00 PM Venky Shankar <vshankar@xxxxxxxxxx> wrote:
>
> On Wed, Jan 26, 2022 at 2:44 PM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
> >
> > Dear all,
> >
> > I want to mirror a snapshot in Ceph v16.2.6 deployed with cephadm
> > using the stock quay.io images. My source file system has a folder
> > "/src/folder/x" where "/src/folder" has mode "ug=r,o=", in other words
> > no write permission, not even for the owner (root).
>
> What mode does /src/folder get created with on the other cluster?
>
> >
> > The sync of a snapshot "initial" now fails with the following log excerpt.
> >
> > remote_mkdir: remote epath=./src/folder/x
> > remote_mkdir: failed to create remote directory=./src/folder/x: (13)
> > Permission denied
> > do_synchronize: closing local directory=./src/folder
> > do_synchronize: closing local directory=./src/
> > do_synchronize: closing local directory=.
> > post_sync_close_handles
> > do_sync_snaps: failed to synchronize dir_root=/src/folder, snapshot=initial
> > sync_snaps: failed to sync snapshots for dir_root=/src/folder
> >
> > The capabilities on the remote site are:
> >
> > client.mirror-tier-2-remote
> >        key: REDACTED
> >        caps: [mds] allow * fsname=cephfs
> >        caps: [mon] allow r fsname=cephfs
> >        caps: [osd] allow * tag cephfs data=cephfs
> >
> > I also just reported this in the tracker [1]. Can anyone think of a
> > workaround (along the lines of "sudo make me a sandwich" ;-))?
> >
> > Best wishes,
> > Manuel
> >
> > [1] https://tracker.ceph.com/issues/54017
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
>
>
> --
> Cheers,
> Venky
>