Thanks Jason, works perfectly.

Do you know if Ceph blocks the client IO until the journal has acknowledged its write? I.e. can I store my journal on slower disks, or will that have a negative impact on performance? Is there perhaps a hole in the documentation here? I've not been able to find anything in the man page for RBD, nor on the Ceph website.

Regards,
Cory

-----Original Message-----
From: Jason Dillaman [mailto:jdillama@xxxxxxxxxx]
Sent: Tuesday, 11 October 2016 7:57 AM
To: Cory Hawkless <Cory@xxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: RBD-Mirror - Journal location

Yes, the "journal_data" objects can be stored in a separate pool from the image. The rbd CLI allows you to use the "--journal-pool" argument when creating, copying, cloning, or importing an image with journaling enabled. You can also specify the journal data pool when dynamically enabling the journaling feature, using the same argument. Finally, there is a Ceph config setting, "rbd journal pool = XYZ", that allows you to default new journals to a specific pool.

Jason

On Mon, Oct 10, 2016 at 1:59 AM, Cory Hawkless <Cory@xxxxxxxxxxxxxx> wrote:
> I've enabled RBD mirroring on my test clusters and it seems to be
> working well. My question is: can we store the RBD mirror journal on
> a different pool?
>
> Currently, when I do something like "rados ls -p sas", I see:
>
> rbd_data.a67d02eb141f2.0000000000000bd1
> rbd_data.a67d02eb141f2.0000000000000b73
> rbd_data.a67d02eb141f2.000000000000036d
> rbd_data.a67d02eb141f2.000000000000074e
> journal_data.75.a67d02eb141f2.175
> rbd_data.a67d02eb141f2.0000000000000bb6
> rbd_data.a67d02eb141f2.0000000000000bae
> rbd_data.a67d02eb141f2.0000000000000313
> rbd_data.a67d02eb141f2.0000000000000bb3
>
> Depending on how far behind the remote cluster is on the sync, there
> are more or fewer journal entries.
>
> I am worried about the overhead of storing the journal on the same
> set of disks as the actual RBD images.
>
> My understanding is that enabling journaling is going to double the
> IOPS on the disks. Is that correct?
>
> Any assistance appreciated.
>
> Regards,
> Cory

--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
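
For reference, the three options Jason describes map to commands along these lines. This is a minimal sketch: the pool names ("sas", "ssd-journal") and image name ("vm-disk1") are placeholders, not anything taken from this thread.

    # Create a new journaled image whose journal_data objects land in a
    # separate pool (journaling requires exclusive-lock)
    rbd create sas/vm-disk1 --size 10G \
        --image-feature exclusive-lock --image-feature journaling \
        --journal-pool ssd-journal

    # Same argument when dynamically enabling journaling on an existing image
    rbd feature enable sas/vm-disk1 journaling --journal-pool ssd-journal

    # ceph.conf: default the journal of newly journaled images to a pool
    [client]
    rbd journal pool = ssd-journal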
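
To watch how far behind the remote cluster is (i.e. how many of the journal_data objects from the "rados ls" output above are still waiting to be replayed), a couple of read-only commands may help. The image name is again a placeholder, and the exact output fields vary between releases:

    # Show the journal position metadata for an image on the local cluster
    rbd journal status --pool sas --image vm-disk1

    # Show the mirroring state of the image; on recent releases the
    # description includes entries_behind_master, i.e. the replay backlog
    rbd mirror image status sas/vm-disk1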