Yes, we do one-way replication, and the 'remote' cluster is the secondary cluster, so the rbd-mirror daemon runs there. We can confirm the daemon is working because we observed IO workload. The remote cluster is actually bigger than the 'local' cluster, so it should be able to keep up with the IO workload. It is therefore confusing why there is so much journal data that cannot be trimmed immediately. (The local cluster also has capacity for more IO workload, including trimming operations.)
On Tue, Nov 6, 2018 at 1:12 AM Wei Jin <wjin.cn@xxxxxxxxx> wrote:
Thanks. I found that both the minimum and active set are very large in my cluster; is that expected? By the way, I take a snapshot of each image every half hour, and keep snapshots for two days.
Journal status:
minimum_set: 671839
active_set: 1197917
registered clients:
	[id=, commit_position=[positions=[[object_number=4791670, tag_tid=3, entry_tid=4146742458], [object_number=4791669, tag_tid=3, entry_tid=4146742457], [object_number=4791668, tag_tid=3, entry_tid=4146742456], [object_number=4791671, tag_tid=3, entry_tid=4146742455]]], state=connected]
	[id=89024ad3-57a7-42cc-99d4-67f33b093704, commit_position=[positions=[[object_number=2687357, tag_tid=3, entry_tid=1188516421], [object_number=2687356, tag_tid=3, entry_tid=1188516420], [object_number=2687359, tag_tid=3, entry_tid=1188516419], [object_number=2687358, tag_tid=3, entry_tid=1188516418]]], state=connected]
Are you attempting to run the "rbd-mirror" daemon on a remote cluster? It just appears like either the daemon is not running or that it's so far behind that it's just not able to keep up with the IO workload of the image. You can run "rbd journal disconnect --image <image-id> --client-id=89024ad3-57a7-42cc-99d4-67f33b093704" to force-disconnect the remote client and start the journal trimming process.

On Nov 6, 2018, at 3:39 AM, Jason Dillaman <jdillama@xxxxxxxxxx> wrote:
On Sun, Nov 4, 2018 at 11:59 PM Wei Jin <wjin.cn@xxxxxxxxx> wrote:
Hi, Jason,
I have a question about rbd mirroring. When mirroring is enabled, we observed a lot of objects prefixed with journal_data, which consume a lot of disk space.
When will these journal objects be deleted? And are there any parameters to accelerate it? Thanks.
Journal data objects should be automatically deleted when the journal is trimmed beyond the position of the object. If you run "rbd journal status --image <image-name>", you should be able to see the minimum in-use set and the current active set for new journal entries:
$ rbd --cluster cluster1 journal status --image image1
minimum_set: 7
active_set: 8
registered clients:
	[id=, commit_position=[positions=[[object_number=33, tag_tid=2, entry_tid=49153], [object_number=32, tag_tid=2, entry_tid=49152], [object_number=35, tag_tid=2, entry_tid=49151], [object_number=34, tag_tid=2, entry_tid=49150]]], state=connected]
	[id=81672c30-d735-46d4-a30a-53c221954d0e, commit_position=[positions=[[object_number=30, tag_tid=2, entry_tid=48034], [object_number=29, tag_tid=2, entry_tid=48033], [object_number=28, tag_tid=2, entry_tid=48032], [object_number=31, tag_tid=2, entry_tid=48031]]], state=connected]
$ rados --cluster cluster1 --pool rbd ls | grep journal_data | sort
journal_data.1.1029b4577f90.28
journal_data.1.1029b4577f90.29
journal_data.1.1029b4577f90.30
journal_data.1.1029b4577f90.31
journal_data.1.1029b4577f90.32
journal_data.1.1029b4577f90.33
journal_data.1.1029b4577f90.34
journal_data.1.1029b4577f90.35
<......>
$ rbd --cluster cluster1 journal status --image image1
minimum_set: 8
active_set: 8
registered clients:
	[id=, commit_position=[positions=[[object_number=33, tag_tid=2, entry_tid=49153], [object_number=32, tag_tid=2, entry_tid=49152], [object_number=35, tag_tid=2, entry_tid=49151], [object_number=34, tag_tid=2, entry_tid=49150]]], state=connected]
	[id=81672c30-d735-46d4-a30a-53c221954d0e, commit_position=[positions=[[object_number=33, tag_tid=2, entry_tid=49153], [object_number=32, tag_tid=2, entry_tid=49152], [object_number=35, tag_tid=2, entry_tid=49151], [object_number=34, tag_tid=2, entry_tid=49150]]], state=connected]
$ rados --cluster cluster1 --pool rbd ls | grep journal_data | sort
journal_data.1.1029b4577f90.32
journal_data.1.1029b4577f90.33
journal_data.1.1029b4577f90.34
journal_data.1.1029b4577f90.35
-- Jason