Is there any workaround that you can think of to correctly enable journaling on locked images?
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx>
On Behalf Of Glen Baars
Sent: Tuesday, 14 August 2018 9:36 PM
To: dillaman@xxxxxxxxxx
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] RBD journal feature
Hello Jason,
Thanks for your help. Here is the output you asked for also.
https://pastebin.com/dKH6mpwk
Kind regards,
Glen Baars
From: Jason Dillaman <jdillama@xxxxxxxxxx>
Sent: Tuesday, 14 August 2018 9:33 PM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] RBD journal feature
Hello Jason,
I have now narrowed it down.
If the image has an exclusive lock, the journal doesn't go on the correct pool.
OK, that makes sense. If you have an active client on the image holding the lock, the request to enable journaling is sent over to that client, but it's missing all the journal options. I'll open a tracker ticket to fix the issue.
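One possible workaround, sketched here under the assumption that the lock-holding client can be stopped or unmapped briefly (untested, using the pool names from this thread and <image> as a placeholder):
$ rbd feature disable RBD_HDD/<image> journaling    # only if journaling was already enabled without the right pool
$ # stop or unmap whichever client currently holds the exclusive lock, then:
$ rbd feature enable RBD_HDD/<image> journaling --journal-pool RBD_SSD
$ rbd journal info --pool RBD_HDD --image <image>   # the output should now include an object_pool line
Alternatively, setting "rbd journal pool = RBD_SSD" in the ceph.conf used by the lock-holding client might let the proxied enable request fall back to that client's own configuration, though that is an assumption rather than confirmed behaviour.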
Kind regards,
Glen Baars
From: Jason Dillaman <jdillama@xxxxxxxxxx>
Sent: Tuesday, 14 August 2018 9:29 PM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] RBD journal feature
Hello Jason,
I have tried with and without ‘rbd journal pool = rbd’ in the ceph.conf. It doesn’t seem to make a difference.
It should be SSDPOOL, but regardless, I am at a loss as to why it's not working for you. You can try appending "--debug-rbd=20" to the end of the "rbd feature enable" command and provide the generated logs in a pastebin link.
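For example, the full invocation might look like this (image and pool names taken from the output earlier in this thread):
$ rbd feature enable RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a journaling --journal-pool RBD_SSD --debug-rbd=20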
Also, here is the output:
rbd image-meta list RBD-HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
There are 0 metadata on this image.
Kind regards,
Glen Baars
From: Jason Dillaman <jdillama@xxxxxxxxxx>
Sent: Tuesday, 14 August 2018 9:00 PM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: dillaman <dillaman@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] RBD journal feature
I tried with an rbd CLI from 12.2.7 and I still don't have an issue enabling journaling on a different pool:
size 1024 MB in 256 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.101e6b8b4567
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
create_timestamp: Tue Aug 14 08:51:19 2018
$ rbd feature enable rbd/foo journaling --journal-pool rbd_ssd
$ rbd journal info --pool rbd --image foo
rbd journal '101e6b8b4567':
header_oid: journal.101e6b8b4567
object_oid_prefix: journal_data.1.101e6b8b4567.
order: 24 (16384 kB objects)
Can you please run "rbd image-meta list <image-spec>" to see if you are overwriting any configuration settings? Do you have any client configuration overrides in your "/etc/ceph/ceph.conf"?
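As a rough sketch of checking both places (the grep pattern is only illustrative):
$ rbd image-meta list RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
$ grep -i "rbd journal" /etc/ceph/ceph.conf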
Hello Jason,
I will also complete testing of a few combinations tomorrow to try and isolate the issue now that we can get it to work with a new image.
The cluster started out at 12.2.3 bluestore so there shouldn’t be any old issues from previous versions.
Kind regards,
Glen Baars
From: Jason Dillaman <jdillama@xxxxxxxxxx>
Sent: Tuesday, 14 August 2018 7:43 PM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: dillaman <dillaman@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] RBD journal feature
Hello Jason,
I can confirm that your tests work on our cluster with a newly created image.
We still can’t get the current images to use a different object pool. Do you think that maybe another feature is incompatible with this feature? Below is a log
of the issue.
I wouldn't think so. I used master branch for my testing but I'll try 12.2.7 just in case it's an issue that's only in the luminous release.
:~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
size 51200 MB in 12800 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.37c8974b0dc51
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
flags:
create_timestamp: Sat May 5 11:39:07 2018
:~# rbd journal info --pool RBD_HDD --image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd: journaling is not enabled for image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
:~# rbd feature enable RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a journaling --journal-pool RBD_SSD
:~# rbd journal info --pool RBD_HDD --image 2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd journal '37c8974b0dc51':
header_oid: journal.37c8974b0dc51
object_oid_prefix: journal_data.1.37c8974b0dc51.
order: 24 (16384 kB objects)
splay_width: 4
***************<NOTE NO object_pool> ****************
:~# rbd info RBD_HDD/2ef34a96-27e0-4ae7-9888-fd33c38f657a
rbd image '2ef34a96-27e0-4ae7-9888-fd33c38f657a':
size 51200 MB in 12800 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.37c8974b0dc51
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, journaling
flags:
create_timestamp: Sat May 5 11:39:07 2018
journal: 37c8974b0dc51
mirroring state: disabled
Kind regards,
Glen Baars
From: Jason Dillaman <jdillama@xxxxxxxxxx>
Sent: Tuesday, 14 August 2018 12:04 AM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: dillaman <dillaman@xxxxxxxxxx>; ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] RBD journal feature
Hello Jason,
Interesting, I used ‘rados ls’ to view the SSDPOOL and can’t see any objects. Is this the correct way to view the journal objects?
You won't see any journal objects in the SSDPOOL until you issue a write:
$ rbd create --size 1G --image-feature exclusive-lock rbd_hdd/test
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M rbd_hdd/test --rbd-cache=false
bench type write io_size 4096 io_threads 16 bytes 16777216 pattern random
SEC OPS OPS/SEC BYTES/SEC
10 3520 356.04 1458356.70
11 3920 361.34 1480050.97
elapsed: 11 ops: 4096 ops/sec: 353.61 bytes/sec: 1448392.06
$ rbd feature enable rbd_hdd/test journaling --journal-pool rbd_ssd
$ rbd journal info --pool rbd_hdd --image test
rbd journal '10746b8b4567':
header_oid: journal.10746b8b4567
object_oid_prefix: journal_data.2.10746b8b4567.
order: 24 (16 MiB objects)
$ rbd bench --io-type=write --io-pattern=rand --io-size=4K --io-total=16M rbd_hdd/test --rbd-cache=false
bench type write io_size 4096 io_threads 16 bytes 16777216 pattern random
SEC OPS OPS/SEC BYTES/SEC
elapsed: 16 ops: 4096 ops/sec: 245.04 bytes/sec: 1003692.81
$ rados -p rbd_ssd ls | grep journal_data.2.10746b8b4567.
journal_data.2.10746b8b4567.3
journal_data.2.10746b8b4567.0
journal_data.2.10746b8b4567.2
journal_data.2.10746b8b4567.1
rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
The symptoms that we are experiencing are a huge decrease in write speed (QD1 128K writes drop from 160MB/s down to 14MB/s). We see no improvement when moving the journal to SSDPOOL (but we don’t think it is really moving).
If you are trying to optimize for 128KiB writes, you might need to tweak the "rbd_journal_max_payload_bytes" setting, since it currently defaults to splitting journal write events into a maximum 16KiB payload [1] in order to optimize the worst-case memory usage of the rbd-mirror daemon for environments with hundreds or thousands of replicated images.
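As an illustrative ceph.conf sketch (the 128KiB value is an assumption for this workload, not a recommendation made in this thread):
[client]
# raise the per-event payload cap so a 128KiB write is journaled as a single
# event instead of being split into 16KiB chunks; larger events increase
# rbd-mirror's worst-case memory usage per replicated image
rbd journal max payload bytes = 131072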
Kind regards,
Glen Baars
From: Jason Dillaman <jdillama@xxxxxxxxxx>
Sent: Saturday, 11 August 2018 11:28 PM
To: Glen Baars <glen@xxxxxxxxxxxxxxxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: [ceph-users] RBD journal feature
Hello Ceph Users,
I am trying to implement image journals for our RBD images (required for mirroring).
rbd feature enable SLOWPOOL/RBDImage journaling --journal-pool SSDPOOL
When we run the above command we still find the journal on the SLOWPOOL and not on the SSDPOOL. We are running 12.2.7 and all bluestore. We have also tried the ceph.conf option (rbd journal pool = SSDPOOL).
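For reference, that override would normally sit in the client section of ceph.conf, roughly like this (SSDPOOL being the placeholder pool name used in this thread):
[client]
rbd journal pool = SSDPOOL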
Has anyone else gotten this working?
The journal header was on SLOWPOOL or the journal data objects? I would expect that the journal metadata header is located on SLOWPOOL but all data objects should be created on
SSDPOOL as needed.
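A rough way to verify where the journal objects actually land (assuming at least one write has been issued since enabling journaling, because data objects are only created on demand):
$ rados -p SSDPOOL ls | grep journal_data    # journal data objects
$ rados -p SLOWPOOL ls | grep journal.       # journal metadata header stays in the image's pool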
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com