Re: Limitations of ceph fs snapshot mirror for read-only folders?

On Fri, Jan 28, 2022 at 4:22 PM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
>
> OK, so there is a difference in semantics between the kernel and the
> user-space driver?

Right.

>
> Which one would you consider to be desired?

The kernel driver is probably doing the right thing.

>
> From what I can see, the kernel semantics (apparently: root can do
> everything) would allow syncing between file systems no matter what.
> With the current user-space semantics, users could `chmod a=` folders
> in their $HOME and stop the sync from working. Is my interpretation
> correct?

Correct.
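
For example, something as simple as this (the path is just a
placeholder) inside a mirrored dir_root would be enough to stall the
sync:

  chmod a= /cephfs/home/user/protected

The next snapshot sync would then fail with "Permission denied",
because the mirror daemon's user-space client enforces the mode bits
even though it runs with uid/gid 0:0.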

I haven't root-caused the issue with the user-space driver yet. This
blocks using the cephfs-mirror daemon with read-only source
directories.

I'll file a tracker for this. Thanks for your help.
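
In the meantime, a rough way to spot directories that would currently
trip this up (adjust the path to your mirrored dir_root):

  find /data/cephfs-2/test -type d ! -perm -u+w

Any directory that shows up there can't be synchronized by the daemon
until its owner-write bit is restored.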

>
> Best wishes,
> Manuel
>
> On Fri, Jan 28, 2022 at 11:43 AM Venky Shankar <vshankar@xxxxxxxxxx> wrote:
> >
> > On Fri, Jan 28, 2022 at 3:42 PM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
> > >
> > > I'm running rsync "-Wa"; see below for a reproduction from scratch
> > > that actually syncs as root even when no permissions are set on the
> > > directories.
> > >
> > > -- full mount options --
> > >
> > > 172.16.62.10,172.16.62.11,172.16.62.11,172.16.62.12,172.16.62.13,172.16.62.30:/
> > > on /data/cephfs-2 type ceph
> > > (rw,noatime,name=samba,secret=<hidden>,acl)
> > > 172.16.62.22,172.16.62.23,172.16.62.23,172.16.62.24,172.16.62.25,172.16.62.32:/
> > > on /data/cephfs-3 type ceph
> > > (rw,noatime,name=gateway,secret=<hidden>,rbytes,acl)
> > >
> > > -- example --
> > >
> > > 0|0[root@gw-1 ~]# mkdir -p /data/cephfs-2/test2/x/y
> > > 0|0[root@gw-1 ~]# touch !$z
> > > touch /data/cephfs-2/test2/x/yz
> > > 0|0[root@gw-1 ~]# chmod a= -R /data/cephfs-2/test2
> > > 0|0[root@gw-1 ~]# mkdir /data/cephfs-3/test2
> > > 0|0[root@gw-1 ~]# rsync -va /data/cephfs-2/test2/. /data/cephfs-3/test2/.
> > > sending incremental file list
> > > ./
> > > x/
> > > x/yz
> > > x/y/
> >
> > Try running this from a ceph-fuse mount - it would fail. It's probably
> > related to the way permission checks are done (we may want to fix that
> > in the user-space driver).
> >
> > Since the mirror daemon uses the user-space library, it would run into
> > the same permission-related constraints as ceph-fuse.
> >
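> > For example (the client name and mountpoint below are just
> > placeholders, adjust to your setup):
> >
> >   ceph-fuse --id samba /mnt/cephfs-2-fuse
> >   rsync -va /mnt/cephfs-2-fuse/test2/. /data/cephfs-3/test2/.
> >
> > I'd expect the rsync to fail with "Permission denied" on the chmod'ed
> > directories, since libcephfs enforces the mode bits client-side even
> > for uid 0.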
> > >
> > > sent 165 bytes  received 50 bytes  430.00 bytes/sec
> > > total size is 0  speedup is 0.00
> > > 0|0[root@gw-1 ~]# find /data/cephfs-3/test2 | xargs stat
> > >   File: /data/cephfs-3/test2
> > >   Size: 0               Blocks: 0          IO Block: 65536  directory
> > > Device: 2dh/45d Inode: 1099522341053  Links: 3
> > > Access: (0000/d---------)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2022-01-28 11:10:31.436380533 +0100
> > > Modify: 2022-01-28 11:09:47.666606846 +0100
> > > Change: 2022-01-28 11:10:31.436380533 +0100
> > >  Birth: -
> > >   File: /data/cephfs-3/test2/x
> > >   Size: 0               Blocks: 0          IO Block: 65536  directory
> > > Device: 2dh/45d Inode: 1099522341054  Links: 3
> > > Access: (0000/d---------)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2022-01-28 11:10:31.462380399 +0100
> > > Modify: 2022-01-28 11:09:49.258598614 +0100
> > > Change: 2022-01-28 11:10:31.462380399 +0100
> > >  Birth: -
> > >   File: /data/cephfs-3/test2/x/yz
> > >   Size: 0               Blocks: 0          IO Block: 4194304 regular empty file
> > > Device: 2dh/45d Inode: 1099522341056  Links: 1
> > > Access: (0000/----------)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2022-01-28 11:10:31.447380476 +0100
> > > Modify: 2022-01-28 11:09:49.265598578 +0100
> > > Change: 2022-01-28 11:10:31.447380476 +0100
> > >  Birth: -
> > >   File: /data/cephfs-3/test2/x/y
> > >   Size: 0               Blocks: 0          IO Block: 65536  directory
> > > Device: 2dh/45d Inode: 1099522341055  Links: 2
> > > Access: (0000/d---------)  Uid: (    0/    root)   Gid: (    0/    root)
> > > Access: 2022-01-28 11:10:31.439380518 +0100
> > > Modify: 2022-01-28 11:09:47.669606830 +0100
> > > Change: 2022-01-28 11:10:31.439380518 +0100
> > >  Birth: -
> > >
> > > On Fri, Jan 28, 2022 at 11:06 AM Venky Shankar <vshankar@xxxxxxxxxx> wrote:
> > > >
> > > > On Fri, Jan 28, 2022 at 3:20 PM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > thanks for the reply.
> > > > >
> > > > > Actually, mounting the source and remote fs on Linux with the kernel
> > > > > driver (Rocky Linux 8.5 default kernel), I can `rsync`.
> > > >
> > > > You are probably running rsync with --no-perms or a custom --chmod (or
> > > > one of --no-o, --no-g)?
> > > >
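> > > > e.g. roughly something like this (paths are placeholders):
> > > >
> > > >   rsync -rlt --no-perms --no-owner --no-group /source/. /target/.
> > > >
> > > > A form like that wouldn't try to replay the restrictive source modes
> > > > onto the target.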
> > > > >
> > > > > Is this to be expected?
> > > > >
> > > > > Cheers,
> > > > >
> > > > > On Fri, Jan 28, 2022 at 10:44 AM Venky Shankar <vshankar@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > Hey Manuel,
> > > > > >
> > > > > > On Thu, Jan 27, 2022 at 8:57 PM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
> > > > > > >
> > > > > > > OK, reconstructed with another example:
> > > > > > >
> > > > > > > -- source file system --
> > > > > > >
> > > > > > > 0|0[root@gw-1 ~]# find /data/cephfs-2/test/x2 | xargs stat
> > > > > > >  File: /data/cephfs-2/test/x2
> > > > > > >  Size: 1               Blocks: 0          IO Block: 65536  directory
> > > > > > > Device: 2ch/44d Inode: 1099840816759  Links: 3
> > > > > > > Access: (2440/dr--r-S---)  Uid: (    0/    root)   Gid: (    0/    root)
> > > > > > > Access: 2022-01-27 16:24:15.627783470 +0100
> > > > > > > Modify: 2022-01-27 16:24:22.001750514 +0100
> > > > > > > Change: 2022-01-27 16:24:51.294599055 +0100
> > > > > > > Birth: -
> > > > > > >  File: /data/cephfs-2/test/x2/y2
> > > > > > >  Size: 1               Blocks: 0          IO Block: 65536  directory
> > > > > > > Device: 2ch/44d Inode: 1099840816760  Links: 2
> > > > > > > Access: (2440/dr--r-S---)  Uid: (    0/    root)   Gid: (    0/    root)
> > > > > > > Access: 2022-01-27 16:24:22.001750514 +0100
> > > > > > > Modify: 2022-01-27 16:24:27.712720985 +0100
> > > > > > > Change: 2022-01-27 16:24:51.307598988 +0100
> > > > > > > Birth: -
> > > > > > >  File: /data/cephfs-2/test/x2/y2/z
> > > > > > >  Size: 0               Blocks: 0          IO Block: 4194304 regular empty file
> > > > > > > Device: 2ch/44d Inode: 1099840816761  Links: 1
> > > > > > > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
> > > > > > > Access: 2022-01-27 16:24:27.713720980 +0100
> > > > > > > Modify: 2022-01-27 16:24:27.713720980 +0100
> > > > > > > Change: 2022-01-27 16:24:27.713720980 +0100
> > > > > > > Birth: -
> > > > > > >
> > > > > > > -- resulting remote file system --
> > > > > > >
> > > > > > > 0|0[root@gw-1 ~]# find /data/cephfs-3/test/x2 | xargs stat
> > > > > > >  File: /data/cephfs-3/test/x2
> > > > > > >  Size: 0               Blocks: 0          IO Block: 65536  directory
> > > > > > > Device: 2dh/45d Inode: 1099521812568  Links: 2
> > > > > > > Access: (2440/dr--r-S---)  Uid: (    0/    root)   Gid: (    0/    root)
> > > > > > > Access: 2022-01-27 16:24:15.627783470 +0100
> > > > > > > Modify: 2022-01-27 16:24:22.001750514 +0100
> > > > > > > Change: 2022-01-27 16:25:53.638392179 +0100
> > > > > > > Birth: -
> > > > > >
> > > > > > The mirror daemon requires write access to a directory to update
> > > > > > entries (it uses libcephfs with uid/gid 0:0). The mode/ownership
> > > > > > changes are applied after the entry is created on the other cluster.
> > > > > >
> > > > > > There's probably no "quick" workaround for this, I'm afraid.
> > > > > >
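> > > > > > To illustrate why (a rough sketch, not the exact calls the daemon
> > > > > > makes) - the equivalent from a user-space mount on the remote would
> > > > > > be something like:
> > > > > >
> > > > > >   mkdir x2            # entry is created first
> > > > > >   chmod 2440 x2       # then the source's mode is applied
> > > > > >   chown 0:0 x2        # ... and ownership
> > > > > >   mkdir x2/y2         # now fails with EACCES - no write bit on x2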
> > > > > > >
> > > > > > > -- log excerpt --
> > > > > > >
> > > > > > > debug 2022-01-27T15:25:42.476+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > register_directory: dir_root=/test
> > > > > > > debug 2022-01-27T15:25:42.476+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > try_lock_directory: dir_root=/test
> > > > > > > debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > try_lock_directory: dir_root=/test locked
> > > > > > > debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700  5
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > register_directory: dir_root=/test registered with
> > > > > > > replayer=0x56173a70a680
> > > > > > > debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > sync_snaps: dir_root=/test
> > > > > > > debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_sync_snaps: dir_root=/test
> > > > > > > debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: dir_root=/test, snap_dir=/test/.snap, is_remote=0
> > > > > > > debug 2022-01-27T15:25:42.477+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: entry=.
> > > > > > > debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: entry=..
> > > > > > > debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: entry=initial
> > > > > > > debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: entry=second
> > > > > > > debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: local snap_map={1384=initial,1385=second}
> > > > > > > debug 2022-01-27T15:25:42.478+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: dir_root=/test, snap_dir=/test/.snap, is_remote=1
> > > > > > > debug 2022-01-27T15:25:42.479+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: entry=.
> > > > > > > debug 2022-01-27T15:25:42.479+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: entry=..
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: entry=initial
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: snap_path=/test/.snap/initial,
> > > > > > > metadata={primary_snap_id=1384}
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > build_snap_map: remote snap_map={1384=initial}
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700  5
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > propagate_snap_deletes: dir_root=/test, deleted snapshots=
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > propagate_snap_renames: dir_root=/test, renamed snapshots=
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700  5
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_sync_snaps: last snap-id transferred=1384
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_sync_snaps: synchronizing from snap-id=1385
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > synchronize: dir_root=/test, current=second,1385
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > synchronize: prev= initial,1384
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > synchronize: dirty_snap_id: 1384 vs (1385,1384)
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700  5
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > synchronize: match -- using incremental sync with local scan
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: dir_root=/test, current=second,1385
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: incremental sync check from prev= initial,1384
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > pre_sync_check_and_open_handles: dir_root=/test, current=second,1385
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > pre_sync_check_and_open_handles: prev= initial,1384
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > open_dir: dir_path=/test/.snap/second
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > open_dir: expected snapshot id=1385
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > open_dir: dir_path=/test/.snap/initial
> > > > > > > debug 2022-01-27T15:25:42.480+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > open_dir: expected snapshot id=1384
> > > > > > > debug 2022-01-27T15:25:42.481+0000 7fe0ffbf0700  5
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > pre_sync_check_and_open_handles: using local (previous) snapshot for
> > > > > > > incremental transfer
> > > > > > > debug 2022-01-27T15:25:42.485+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: 1 entries in stack
> > > > > > > debug 2022-01-27T15:25:42.485+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: top of stack path=.
> > > > > > > debug 2022-01-27T15:25:42.485+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > propagate_deleted_entries: dir_root=/test, epath=.
> > > > > > > debug 2022-01-27T15:25:42.487+0000 7fe0ffbf0700  5
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > propagate_deleted_entries: mode matches for entry=x
> > > > > > > debug 2022-01-27T15:25:42.487+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > propagate_deleted_entries: reached EOD
> > > > > > > debug 2022-01-27T15:25:42.487+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > remote_mkdir: remote epath=./x2
> > > > > > > debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: 2 entries in stack
> > > > > > > debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: top of stack path=./x2
> > > > > > > debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > propagate_deleted_entries: dir_root=/test, epath=./x2
> > > > > > > debug 2022-01-27T15:25:42.513+0000 7fe0ffbf0700  5
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > propagate_deleted_entries: epath=./x2 missing in previous-snap/remote
> > > > > > > dir-root
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > remote_mkdir: remote epath=./x2/y2
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 -1
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > remote_mkdir: failed to create remote directory=./x2/y2: (13)
> > > > > > > Permission denied
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: closing local directory=./x2
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_synchronize: closing local directory=.
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > post_sync_close_handles
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 -1
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > do_sync_snaps: failed to synchronize dir_root=/test, snapshot=second
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 -1
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > sync_snaps: failed to sync snapshots for dir_root=/test
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > unregister_directory: dir_root=/test
> > > > > > > debug 2022-01-27T15:25:42.514+0000 7fe0ffbf0700 20
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > unlock_directory: dir_root=/test
> > > > > > > debug 2022-01-27T15:25:42.515+0000 7fe0ffbf0700 10
> > > > > > > cephfs::mirror::PeerReplayer(f477cfed-6270-4beb-aaa1-a41df7b58955)
> > > > > > > unlock_directory: dir_root=/test unlocked
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Jan 27, 2022 at 3:00 PM Venky Shankar <vshankar@xxxxxxxxxx> wrote:
> > > > > > > >
> > > > > > > > On Wed, Jan 26, 2022 at 2:44 PM Manuel Holtgrewe <zyklenfrei@xxxxxxxxx> wrote:
> > > > > > > > >
> > > > > > > > > Dear all,
> > > > > > > > >
> > > > > > > > > I want to mirror a snapshot in Ceph v16.2.6 deployed with cephadm
> > > > > > > > > using the stock quay.io images. My source file system has a folder
> > > > > > > > > "/src/folder/x" where "/src/folder" has mode "ug=r,o=", in other words
> > > > > > > > > no write permissions for the owner (root).
> > > > > > > >
> > > > > > > > What mode does /src/folder get created with on the other cluster?
> > > > > > > >
> > > > > > > > >
> > > > > > > > > The sync of a snapshot "initial" now fails with the following log excerpt.
> > > > > > > > >
> > > > > > > > > remote_mkdir: remote epath=./src/folder/x
> > > > > > > > > remote_mkdir: failed to create remote directory=./src/folder/x: (13)
> > > > > > > > > Permission denied
> > > > > > > > > do_synchronize: closing local directory=./src/folder
> > > > > > > > > do_synchronize: closing local directory=./src/
> > > > > > > > > do_synchronize: closing local directory=.
> > > > > > > > > post_sync_close_handles
> > > > > > > > > do_sync_snaps: failed to synchronize dir_root=/src/folder, snapshot=initial
> > > > > > > > > sync_snaps: failed to sync snapshots for dir_root=/src/folder
> > > > > > > > >
> > > > > > > > > The capabilities on the remote site are:
> > > > > > > > >
> > > > > > > > > client.mirror-tier-2-remote
> > > > > > > > >        key: REDACTED
> > > > > > > > >        caps: [mds] allow * fsname=cephfs
> > > > > > > > >        caps: [mon] allow r fsname=cephfs
> > > > > > > > >        caps: [osd] allow * tag cephfs data=cephfs
> > > > > > > > >
> > > > > > > > > I also just reported this in the tracker [1]. Can anyone think of a
> > > > > > > > > workaround (along the lines of "sudo make me a sandwich") ;-)?
> > > > > > > > >
> > > > > > > > > Best wishes,
> > > > > > > > > Manuel
> > > > > > > > >
> > > > > > > > > [1] https://tracker.ceph.com/issues/54017
> > > > > > > > > _______________________________________________
> > > > > > > > > ceph-users mailing list -- ceph-users@xxxxxxx
> > > > > > > > > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Cheers,
> > > > > > > > Venky
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Cheers,
> > > > > > Venky
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > Cheers,
> > > > Venky
> > > >
> > >
> >
> >
> > --
> > Cheers,
> > Venky
> >
>


-- 
Cheers,
Venky

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


