Re: Ceph Replication not working

Hi Jason,

On the Prod side, the cluster is named "ceph"; on the DR side we renamed it to "cephdr".

Accordingly, we renamed ceph.conf to cephdr.conf on the DR side.
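
For reference (this is the layout as we understand it should be, not something we have double-checked; exact keyring names may differ), the DR node carries both sets of configs and keyrings along these lines:

  # local (DR) cluster
  /etc/ceph/cephdr.conf
  /etc/ceph/cephdr.client.admin.keyring
  # remote (Prod) cluster, still named "ceph" from here
  /etc/ceph/ceph.conf
  /etc/ceph/ceph.client.mirrorprod.keyring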

This setup used to work, but one day we promoted the DR side to verify the replication, and it has been a nightmare since.
The resync didn't work, so we eventually gave up and deleted the pool on the DR side to start afresh.

We also deleted and recreated the peer relationship.
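
In case it helps, the peer was recreated roughly like this (a sketch from memory; the UUID comes from "mirror pool info"):

  rbd --cluster cephdr mirror pool peer remove nfs bcd54bc5-cd08-435f-a79a-357bce55011d
  rbd --cluster cephdr mirror pool peer add nfs client.mirrorprod@ceph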

Is there any debugging we can do on the Prod or DR side to see where it is stopping or waiting during "send_open_image"?

The rbd-mirror daemon is running as "rbd-mirror --cluster=cephdr".
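
Would it help to restart the daemon with higher debug levels, something like the following (a sketch; option names as best we understand them from the docs)?

  rbd-mirror --cluster=cephdr \
      --debug-rbd=20 --debug-rbd-mirror=20 --debug-journaler=20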


Thanks,
-Vikas

-----Original Message-----
From: Jason Dillaman <jdillama@xxxxxxxxxx> 
Sent: Monday, April 8, 2019 9:30 AM
To: Vikas Rana <vrana@xxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re:  Ceph Replication not working

The log appears to be missing all the librbd log messages. The process seems to stop when attempting to open the image from the remote cluster:

2019-04-05 12:07:29.992323 7f0f3bfff700 20
rbd::mirror::image_replayer::OpenImageRequest: 0x7f0f28018a20 send_open_image

Assuming you are using the default log file naming settings, the log should be located at "/var/log/ceph/ceph-client.mirrorprod.log". Also, your cluster naming makes me wonder: since the primary cluster is named "ceph" from the DR site's perspective, have you changed your "/etc/default/ceph" file to rename the local cluster from "ceph" to "cephdr" so that the "rbd-mirror" daemon connects to the correct local cluster?


On Fri, Apr 5, 2019 at 3:28 PM Vikas Rana <vrana@xxxxxxxxxxxx> wrote:
>
> Hi Jason,
>
> 12.2.11 is the version.
>
> Attached is the complete log file.
>
> We removed the pool to make sure there were no images left on the DR site and recreated an empty pool.
>
> Thanks,
> -Vikas
>
> -----Original Message-----
> From: Jason Dillaman <jdillama@xxxxxxxxxx>
> Sent: Friday, April 5, 2019 2:24 PM
> To: Vikas Rana <vrana@xxxxxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Subject: Re:  Ceph Replication not working
>
> What is the version of the rbd-mirror daemon and your OSDs? It looks like it found two replicated images and got stuck on the "wait_for_deletion"
> step. Since I suspect those images haven't been deleted, it should have immediately proceeded to the next step of the image replay state machine. Are there any additional log messages after 2019-04-05 12:07:29.981203?
>
> On Fri, Apr 5, 2019 at 1:56 PM Vikas Rana <vrana@xxxxxxxxxxxx> wrote:
> >
> > Hi there,
> >
> > We are trying to set up rbd-mirror replication, and after the setup everything looks good, but images are not replicating.
> >
> >
> >
> > Can someone please help?
> >
> >
> >
> > Thanks,
> >
> > -Vikas
> >
> >
> >
> > root@remote:/var/log/ceph# rbd --cluster cephdr mirror pool info nfs
> >
> > Mode: pool
> >
> > Peers:
> >
> >   UUID                                 NAME CLIENT
> >
> >   bcd54bc5-cd08-435f-a79a-357bce55011d ceph client.mirrorprod
> >
> >
> >
> > root@local:/etc/ceph# rbd  mirror pool info nfs
> >
> > Mode: pool
> >
> > Peers:
> >
> >   UUID                                 NAME   CLIENT
> >
> >   612151cf-f70d-49d0-94e2-a7b850a53e4f cephdr client.mirrordr
> >
> >
> >
> >
> >
> > root@local:/etc/ceph# rbd info nfs/test01
> >
> > rbd image 'test01':
> >
> >         size 102400 kB in 25 objects
> >
> >         order 22 (4096 kB objects)
> >
> >         block_name_prefix: rbd_data.11cd3c238e1f29
> >
> >         format: 2
> >
> >         features: layering, exclusive-lock, object-map, fast-diff, 
> > deep-flatten, journaling
> >
> >         flags:
> >
> >         journal: 11cd3c238e1f29
> >
> >         mirroring state: enabled
> >
> >         mirroring global id: 06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7
> >
> >         mirroring primary: true
> >
> >
> >
> >
> >
> > root@remote:/var/log/ceph# rbd --cluster cephdr mirror pool status 
> > nfs --verbose
> >
> > health: OK
> >
> > images: 0 total
> >
> >
> >
> > root@remote:/var/log/ceph# rbd info nfs/test01
> >
> > rbd: error opening image test01: (2) No such file or directory
> >
> >
> >
> >
> >
> > root@remote:/var/log/ceph# ceph -s --cluster cephdr
> >
> >   cluster:
> >
> >     id:     ade49174-1f84-4c3c-a93c-b293c3655c93
> >
> >     health: HEALTH_WARN
> >
> >             noout,nodeep-scrub flag(s) set
> >
> >
> >
> >   services:
> >
> >     mon:        3 daemons, quorum nidcdvtier1a,nidcdvtier2a,nidcdvtier3a
> >
> >     mgr:        nidcdvtier1a(active), standbys: nidcdvtier2a
> >
> >     osd:        12 osds: 12 up, 12 in
> >
> >                 flags noout,nodeep-scrub
> >
> >     rbd-mirror: 1 daemon active
> >
> >
> >
> >   data:
> >
> >     pools:   5 pools, 640 pgs
> >
> >     objects: 1.32M objects, 5.03TiB
> >
> >     usage:   10.1TiB used, 262TiB / 272TiB avail
> >
> >     pgs:     640 active+clean
> >
> >
> >
> >   io:
> >
> >     client:   170B/s rd, 0B/s wr, 0op/s rd, 0op/s wr
> >
> >
> >
> >
> >
> > 2019-04-05 12:07:29.720742 7f0fa5e284c0  0 ceph version 12.2.11
> > (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous (stable), 
> > process rbd-mirror, pid 3921391
> >
> > 2019-04-05 12:07:29.721752 7f0fa5e284c0  0 pidfile_write: ignore 
> > empty --pid-file
> >
> > 2019-04-05 12:07:29.726580 7f0fa5e284c0 20 rbd::mirror::ServiceDaemon: 0x560200d29bb0 ServiceDaemon:
> >
> > 2019-04-05 12:07:29.732654 7f0fa5e284c0 20 rbd::mirror::ServiceDaemon: 0x560200d29bb0 init:
> >
> > 2019-04-05 12:07:29.734920 7f0fa5e284c0  1 mgrc
> > service_daemon_register rbd-mirror.admin metadata
> > {arch=x86_64,ceph_version=ceph version 12.2.11
> > (26dc3775efc7bb286a1d6d66faee0ba30ea23eee) luminous
> > (stable),cpu=Intel(R) Xeon(R) CPU E5-2690 v2 @
> > 3.00GHz,distro=ubuntu,distro_description=Ubuntu 14.04.5
> > LTS,distro_version=14.04,hostname=nidcdvtier3a,instance_id=464360,
> > kernel_description=#93 SMP Sat Jun 17 04:01:23 EDT
> > 2017,kernel_version=3.19.0-85-vtier,mem_swap_kb=67105788,
> > mem_total_kb=131999112,os=Linux}
> >
> > 2019-04-05 12:07:29.735779 7f0fa5e284c0 20 rbd::mirror::Mirror:
> > 0x560200d27f90 run: enter
> >
> > 2019-04-05 12:07:29.735793 7f0fa5e284c0 20
> > rbd::mirror::ClusterWatcher:0x560200dcd930 refresh_pools: enter
> >
> > 2019-04-05 12:07:29.735809 7f0f77fff700 20 rbd::mirror::ImageDeleter:
> > 0x560200dcd9c0 run: enter
> >
> > 2019-04-05 12:07:29.735819 7f0f77fff700 20 rbd::mirror::ImageDeleter:
> > 0x560200dcd9c0 run: waiting for delete requests
> >
> > 2019-04-05 12:07:29.739019 7f0fa5e284c0 10
> > rbd::mirror::ClusterWatcher:0x560200dcd930 read_pool_peers: 
> > mirroring is disabled for pool docnfs
> >
> > 2019-04-05 12:07:29.741090 7f0fa5e284c0 10
> > rbd::mirror::ClusterWatcher:0x560200dcd930 read_pool_peers: 
> > mirroring is disabled for pool doccifs
> >
> > 2019-04-05 12:07:29.742620 7f0fa5e284c0 10
> > rbd::mirror::ClusterWatcher:0x560200dcd930 read_pool_peers: 
> > mirroring is disabled for pool fcp-dr
> >
> > 2019-04-05 12:07:29.744446 7f0fa5e284c0 10
> > rbd::mirror::ClusterWatcher:0x560200dcd930 read_pool_peers: 
> > mirroring is disabled for pool cifs
> >
> > 2019-04-05 12:07:29.746958 7f0fa5e284c0 20 rbd::mirror::ServiceDaemon:
> > 0x560200d29bb0 add_pool: pool_id=8, pool_name=nfs
> >
> > 2019-04-05 12:07:29.748181 7f0fa5e284c0 20 rbd::mirror::Mirror:
> > 0x560200d27f90 update_pool_replayers: enter
> >
> > 2019-04-05 12:07:29.748212 7f0fa5e284c0 20 rbd::mirror::Mirror:
> > 0x560200d27f90 update_pool_replayers: starting pool replayer for uuid:
> > bcd54bc5-cd08-435f-a79a-357bce55011d cluster: ceph client:
> > client.mirrorprod
> >
> > 2019-04-05 12:07:29.748249 7f0fa5e284c0 20 rbd::mirror::PoolReplayer:
> > 0x560200de8f30 init: replaying for uuid:
> > bcd54bc5-cd08-435f-a79a-357bce55011d cluster: ceph client:
> > client.mirrorprod
> >
> > 2019-04-05 12:07:29.853633 7f0fa5e284c0 20 rbd::mirror::PoolReplayer:
> > 0x560200de8f30 init: connected to uuid:
> > bcd54bc5-cd08-435f-a79a-357bce55011d cluster: ceph client:
> > client.mirrorprod
> >
> > 2019-04-05 12:07:29.853660 7f0fa5e284c0 20 rbd::mirror::InstanceReplayer: 0x560200ff9350 init:
> >
> > 2019-04-05 12:07:29.853747 7f0f97626700 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350
> > schedule_image_state_check_task: scheduling image state check after 
> > 30 sec (task 0x7f0f88000ba0)
> >
> > 2019-04-05 12:07:29.853855 7f0fa5e284c0 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350 add_peer:
> > bcd54bc5-cd08-435f-a79a-357bce55011d
> >
> > 2019-04-05 12:07:29.853949 7f0fa5e284c0 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 init: 
> > instance_id=464363
> >
> > 2019-04-05 12:07:29.853955 7f0fa5e284c0 20 rbd::mirror::InstanceWatcher: 0x560200ff98e0 register_instance:
> >
> > 2019-04-05 12:07:29.859103 7f0f57fff700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 handle_register_instance:
> > r=0
> >
> > 2019-04-05 12:07:29.859125 7f0f57fff700 20 rbd::mirror::InstanceWatcher: 0x560200ff98e0 create_instance_object:
> >
> > 2019-04-05 12:07:29.866499 7f0f57fff700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0
> > handle_create_instance_object: r=0
> >
> > 2019-04-05 12:07:29.866520 7f0f57fff700 20 rbd::mirror::InstanceWatcher: 0x560200ff98e0 register_watch:
> >
> > 2019-04-05 12:07:29.869052 7f0f97626700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 handle_register_watch:
> > r=0
> >
> > 2019-04-05 12:07:29.869079 7f0f97626700 20 rbd::mirror::InstanceWatcher: 0x560200ff98e0 acquire_lock:
> >
> > 2019-04-05 12:07:29.872993 7f0f97626700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 handle_acquire_lock: 
> > r=0
> >
> > 2019-04-05 12:07:29.873121 7f0fa5e284c0 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 init: notifier_id=464363
> >
> > 2019-04-05 12:07:29.873132 7f0fa5e284c0 20 rbd::mirror::LeaderWatcher: 0x560200fff340 create_leader_object:
> >
> > 2019-04-05 12:07:29.875116 7f0f57fff700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 handle_create_leader_object: r=0
> >
> > 2019-04-05 12:07:29.875129 7f0f57fff700 20 rbd::mirror::LeaderWatcher: 0x560200fff340 register_watch:
> >
> > 2019-04-05 12:07:29.876952 7f0f97626700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 handle_register_watch: r=0
> >
> > 2019-04-05 12:07:29.876964 7f0f97626700 20 rbd::mirror::LeaderWatcher: 0x560200fff340 schedule_acquire_leader_lock:
> >
> > 2019-04-05 12:07:29.876979 7f0f97626700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 schedule_timer_task: scheduling acquire leader lock 
> > after 0 sec (task 0x7f0f880010a0)
> >
> > 2019-04-05 12:07:29.877108 7f0f96e25700 20 rbd::mirror::LeaderWatcher: 0x560200fff340 execute_timer_task:
> >
> > 2019-04-05 12:07:29.877115 7f0f96e25700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 is_leader: 0
> >
> > 2019-04-05 12:07:29.877120 7f0f96e25700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 acquire_leader_lock: acquire_attempts=1
> >
> > 2019-04-05 12:07:29.877204 7f0f3b7fe700 20 rbd::mirror::PoolReplayer:
> > 0x560200de8f30 run: enter
> >
> > 2019-04-05 12:07:29.949781 7f0f97626700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 handle_post_acquire_leader_lock: r=0
> >
> > 2019-04-05 12:07:29.949796 7f0f97626700 20 rbd::mirror::LeaderWatcher: 0x560200fff340 init_status_watcher:
> >
> > 2019-04-05 12:07:29.949825 7f0f97626700 20 rbd::mirror::MirrorStatusWatcher: 0x7f0f880014b0 init:
> >
> > 2019-04-05 12:07:29.962723 7f0f57fff700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 handle_init_status_watcher: r=0
> >
> > 2019-04-05 12:07:29.962735 7f0f57fff700 20 rbd::mirror::LeaderWatcher: 0x560200fff340 init_instances:
> >
> > 2019-04-05 12:07:29.962759 7f0f57fff700 20 rbd::mirror::Instances: 0x7f0f34007070 init:
> >
> > 2019-04-05 12:07:29.962761 7f0f57fff700 20 rbd::mirror::Instances: 0x7f0f34007070 get_instances:
> >
> > 2019-04-05 12:07:29.963359 7f0f57fff700 20
> > rbd::mirror::InstanceWatcher: C_GetInstances: 0x7f0f34007000 finish:
> > r=0
> >
> > 2019-04-05 12:07:29.963378 7f0f57fff700 20 rbd::mirror::Instances:
> > 0x7f0f34007070 handle_get_instances: r=0
> >
> > 2019-04-05 12:07:29.963388 7f0f57fff700 20 rbd::mirror::Instances: 0x7f0f34007070 schedule_remove_task:
> >
> > 2019-04-05 12:07:29.963401 7f0f57fff700 20 rbd::mirror::Instances:
> > 0x7f0f34007070 schedule_remove_task: scheduling instance 464348 
> > remove after 30 sec (task 0x7f0f34007d60)
> >
> > 2019-04-05 12:07:29.963409 7f0f57fff700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 handle_init_instances: r=0
> >
> > 2019-04-05 12:07:29.963411 7f0f57fff700 20 rbd::mirror::LeaderWatcher: 0x560200fff340 notify_listener:
> >
> > 2019-04-05 12:07:29.963413 7f0f57fff700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 is_leader: 1
> >
> > 2019-04-05 12:07:29.963454 7f0f97626700 20 rbd::mirror::PoolReplayer: 0x560200de8f30 handle_post_acquire_leader:
> >
> > 2019-04-05 12:07:29.963466 7f0f97626700 20 rbd::mirror::ServiceDaemon:
> > 0x560200d29bb0 add_or_update_attribute: pool_id=8, key=leader, 
> > value=1
> >
> > 2019-04-05 12:07:29.963474 7f0f97626700 20 rbd::mirror::InstanceWatcher: 0x560200ff98e0 handle_acquire_leader:
> >
> > 2019-04-05 12:07:29.963495 7f0f97626700 20
> > rbd::mirror::ImageSyncThrottler:: 0x7f0f88001140 ImageSyncThrottler:
> > max_concurrent_syncs=5
> >
> > 2019-04-05 12:07:29.963508 7f0f97626700 20 rbd::mirror::InstanceWatcher: 0x560200ff98e0 unsuspend_notify_requests:
> >
> > 2019-04-05 12:07:29.963512 7f0f97626700 20 rbd::mirror::PoolReplayer: 0x560200de8f30 init_local_pool_watcher:
> >
> > 2019-04-05 12:07:29.963521 7f0f97626700  5 rbd::mirror::PoolWatcher: 0x7f0f88001d30 init:
> >
> > 2019-04-05 12:07:29.963524 7f0f97626700  5 rbd::mirror::PoolWatcher: 0x7f0f88001d30 register_watcher:
> >
> > 2019-04-05 12:07:29.965811 7f0f57fff700  5 rbd::mirror::PoolWatcher:
> > 0x7f0f88001d30 handle_register_watcher: r=0
> >
> > 2019-04-05 12:07:29.965824 7f0f57fff700  5 rbd::mirror::PoolWatcher: 0x7f0f88001d30 refresh_images:
> >
> > 2019-04-05 12:07:29.965827 7f0f57fff700 10 rbd::mirror::pool_watcher::RefreshImagesRequest 0x7f0f34006850 mirror_image_list:
> >
> > 2019-04-05 12:07:29.966756 7f0f57fff700 10 
> > rbd::mirror::pool_watcher::RefreshImagesRequest 0x7f0f34006850
> > handle_mirror_image_list: r=0
> >
> > 2019-04-05 12:07:29.966771 7f0f57fff700 10 
> > rbd::mirror::pool_watcher::RefreshImagesRequest 0x7f0f34006850 finish:
> > r=0
> >
> > 2019-04-05 12:07:29.966775 7f0f57fff700  5 rbd::mirror::PoolWatcher:
> > 0x7f0f88001d30 handle_refresh_images: r=0
> >
> > 2019-04-05 12:07:29.966777 7f0f57fff700  5 rbd::mirror::PoolWatcher: 0x7f0f88001d30 get_mirror_uuid:
> >
> > 2019-04-05 12:07:29.967653 7f0f57fff700  5 rbd::mirror::PoolWatcher:
> > 0x7f0f88001d30 handle_get_mirror_uuid: r=0
> >
> > 2019-04-05 12:07:29.967668 7f0f57fff700 10 rbd::mirror::PoolWatcher:
> > 0x7f0f88001d30 handle_get_mirror_uuid:
> > mirror_uuid=c8c063e4-9a2c-4e13-a9d3-2786a1dbc645
> >
> > 2019-04-05 12:07:29.967673 7f0f57fff700 20 rbd::mirror::PoolWatcher: 0x7f0f88001d30 schedule_listener:
> >
> > 2019-04-05 12:07:29.967718 7f0f97626700 10 rbd::mirror::PoolWatcher: 0x7f0f88001d30 notify_listener:
> >
> > 2019-04-05 12:07:29.967733 7f0f97626700 10 rbd::mirror::PoolReplayer:
> > 0x560200de8f30 handle_update: mirror_uuid=, added_count=0,
> > removed_count=0
> >
> > 2019-04-05 12:07:29.967740 7f0f97626700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 is_leader: 1
> >
> > 2019-04-05 12:07:29.967743 7f0f97626700 20 rbd::mirror::ServiceDaemon:
> > 0x560200d29bb0 add_or_update_attribute: pool_id=8, 
> > key=image_local_count, value=0
> >
> > 2019-04-05 12:07:29.967751 7f0f97626700 20 rbd::mirror::PoolReplayer:
> > 0x560200de8f30 operator(): complete handle_update: r=0
> >
> > 2019-04-05 12:07:29.967757 7f0f97626700 20 rbd::mirror::PoolReplayer:
> > 0x560200de8f30 handle_init_local_pool_watcher: r=0
> >
> > 2019-04-05 12:07:29.967761 7f0f97626700 20 rbd::mirror::PoolReplayer: 0x560200de8f30 init_remote_pool_watcher:
> >
> > 2019-04-05 12:07:29.967771 7f0f97626700  5 rbd::mirror::PoolWatcher: 0x7f0f880048a0 init:
> >
> > 2019-04-05 12:07:29.967775 7f0f97626700  5 rbd::mirror::PoolWatcher: 0x7f0f880048a0 register_watcher:
> >
> > 2019-04-05 12:07:29.977576 7f0f3bfff700  5 rbd::mirror::PoolWatcher:
> > 0x7f0f880048a0 handle_register_watcher: r=0
> >
> > 2019-04-05 12:07:29.977588 7f0f3bfff700  5 rbd::mirror::PoolWatcher: 0x7f0f880048a0 refresh_images:
> >
> > 2019-04-05 12:07:29.977591 7f0f3bfff700 10 rbd::mirror::pool_watcher::RefreshImagesRequest 0x7f0f28000ac0 mirror_image_list:
> >
> > 2019-04-05 12:07:29.979217 7f0f3bfff700 10 
> > rbd::mirror::pool_watcher::RefreshImagesRequest 0x7f0f28000ac0
> > handle_mirror_image_list: r=0
> >
> > 2019-04-05 12:07:29.979252 7f0f3bfff700 10 
> > rbd::mirror::pool_watcher::RefreshImagesRequest 0x7f0f28000ac0 finish:
> > r=0
> >
> > 2019-04-05 12:07:29.979254 7f0f3bfff700  5 rbd::mirror::PoolWatcher:
> > 0x7f0f880048a0 handle_refresh_images: r=0
> >
> > 2019-04-05 12:07:29.979256 7f0f3bfff700  5 rbd::mirror::PoolWatcher: 0x7f0f880048a0 get_mirror_uuid:
> >
> > 2019-04-05 12:07:29.980796 7f0f3bfff700  5 rbd::mirror::PoolWatcher:
> > 0x7f0f880048a0 handle_get_mirror_uuid: r=0
> >
> > 2019-04-05 12:07:29.980827 7f0f3bfff700 10 rbd::mirror::PoolWatcher:
> > 0x7f0f880048a0 handle_get_mirror_uuid:
> > mirror_uuid=8b79b39c-8a08-45c1-9d32-1d87866b036b
> >
> > 2019-04-05 12:07:29.980830 7f0f3bfff700 20 rbd::mirror::PoolWatcher: 0x7f0f880048a0 schedule_listener:
> >
> > 2019-04-05 12:07:29.980871 7f0f97626700 10 rbd::mirror::PoolWatcher: 0x7f0f880048a0 notify_listener:
> >
> > 2019-04-05 12:07:29.980903 7f0f97626700 10 rbd::mirror::PoolReplayer:
> > 0x560200de8f30 handle_update:
> > mirror_uuid=8b79b39c-8a08-45c1-9d32-1d87866b036b, added_count=2,
> > removed_count=0
> >
> > 2019-04-05 12:07:29.980910 7f0f97626700 20 rbd::mirror::LeaderWatcher:
> > 0x560200fff340 is_leader: 1
> >
> > 2019-04-05 12:07:29.980913 7f0f97626700 20 rbd::mirror::ServiceDaemon:
> > 0x560200d29bb0 add_or_update_attribute: pool_id=8, 
> > key=image_local_count, value=0
> >
> > 2019-04-05 12:07:29.980919 7f0f97626700 20 rbd::mirror::ServiceDaemon:
> > 0x560200d29bb0 add_or_update_attribute: pool_id=8, 
> > key=image_remote_count, value=2
> >
> > 2019-04-05 12:07:29.980928 7f0f97626700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 notify_image_acquire:
> > instance_id=464363,
> > global_image_id=06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7
> >
> > 2019-04-05 12:07:29.980934 7f0f97626700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 handle_image_acquire:
> > global_image_id=06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7
> >
> > 2019-04-05 12:07:29.980941 7f0f97626700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 notify_image_acquire:
> > instance_id=464363,
> > global_image_id=92f46320-d43d-48eb-8a09-b68a1945cc77
> >
> > 2019-04-05 12:07:29.980943 7f0f97626700 20
> > rbd::mirror::InstanceWatcher: 0x560200ff98e0 handle_image_acquire:
> > global_image_id=92f46320-d43d-48eb-8a09-b68a1945cc77
> >
> > 2019-04-05 12:07:29.980961 7f0f97626700 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350 acquire_image:
> > global_image_id=06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7
> >
> > 2019-04-05 12:07:29.980992 7f0f97626700 15 rbd::mirror::ImageReplayer:
> > 0x7f0f8800b1f0 [8/06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7]
> > register_admin_socket_hook: registered asok hook:
> > nfs/06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7
> >
> > 2019-04-05 12:07:29.981132 7f0f97626700 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350 acquire_image:
> > 06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7: creating replayer 
> > 0x7f0f8800b1f0
> >
> > 2019-04-05 12:07:29.981140 7f0f97626700 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350 start_image_replayer:
> > global_image_id=06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7
> >
> > 2019-04-05 12:07:29.981142 7f0f97626700 20 rbd::mirror::ImageReplayer:
> > 0x7f0f8800b1f0 [8/06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7] start:
> > on_finish=0
> >
> > 2019-04-05 12:07:29.981152 7f0f97626700 20 rbd::mirror::ImageReplayer: 0x7f0f8800b1f0 [8/06fbfe68-b7e4-4d3a-93b2-cd18c569f7f7] wait_for_deletion:
> >
> > 2019-04-05 12:07:29.981159 7f0f97626700 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350 acquire_image:
> > global_image_id=92f46320-d43d-48eb-8a09-b68a1945cc77
> >
> > 2019-04-05 12:07:29.981167 7f0f97626700 15 rbd::mirror::ImageReplayer:
> > 0x7f0f8800d320 [8/92f46320-d43d-48eb-8a09-b68a1945cc77]
> > register_admin_socket_hook: registered asok hook:
> > nfs/92f46320-d43d-48eb-8a09-b68a1945cc77
> >
> > 2019-04-05 12:07:29.981196 7f0f97626700 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350 acquire_image:
> > 92f46320-d43d-48eb-8a09-b68a1945cc77: creating replayer 
> > 0x7f0f8800d320
> >
> > 2019-04-05 12:07:29.981200 7f0f97626700 20
> > rbd::mirror::InstanceReplayer: 0x560200ff9350 start_image_replayer:
> > global_image_id=92f46320-d43d-48eb-8a09-b68a1945cc77
> >
> > 2019-04-05 12:07:29.981201 7f0f97626700 20 rbd::mirror::ImageReplayer:
> > 0x7f0f8800d320 [8/92f46320-d43d-48eb-8a09-b68a1945cc77] start:
> > on_finish=0
> >
> > 2019-04-05 12:07:29.981203 7f0f97626700 20 rbd::mirror::ImageReplayer: 0x7f0f8800d320 [8/92f46320-d43d-48eb-8a09-b68a1945cc77] wait_for_deletion:
> >
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@xxxxxxxxxxxxxx
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
>
> --
> Jason



--
Jason

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



