Hey Rich!
Appreciate the info. This worked successfully! Just wanted to share my
experience in case others run into a similar situation:
As a first step, I disabled the tcmu-runner process on all 3 of our previous
iSCSI gateway nodes. Then, from our MONs, I confirmed there were no
remaining locks on the images in the iSCSI RBD pool:

rbd lock ls --pool iscsi <diskname>    (should return empty)
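If you have more than a couple of disks, a quick loop over every image in the pool can confirm nothing is still locked before you go further. A rough sketch (assumes the pool is named iscsi and the rbd CLI is on your PATH; adjust to taste):

```shell
#!/bin/sh
# Check every image in the iscsi pool for leftover locks.
pool=iscsi
for img in $(rbd ls "$pool"); do
    # "rbd lock ls" prints nothing when the image has no locks
    if [ -n "$(rbd lock ls --pool "$pool" "$img")" ]; then
        echo "still locked: $pool/$img"
    fi
done
```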
Once completed, I added a new user to Ceph with rwx permissions on the
iSCSI pool:

ceph auth get-or-create client.CLIENTNAME mon 'allow profile rbd, allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=iscsi' -o rbd.keyring
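For the client tools to find that keyring automatically, it helps to place it under /etc/ceph using the standard naming convention. A minimal sketch, assuming the default cluster name "ceph" and the client name used above (paste in the actual key from the ceph auth output):

```
# /etc/ceph/ceph.client.CLIENTNAME.keyring
[client.CLIENTNAME]
        key = <key from the ceph auth get-or-create output>
```

With the file named this way, rbd commands invoked with --name client.CLIENTNAME will locate the keyring without an explicit --keyring flag.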
I then installed ceph-common on the Linux hosts where the drive was to
be mounted, created the Ceph keyring and config files, and was able to
run an "rbd ls" against the iSCSI pool:

rbd ls iscsi --name client.CLIENTNAME

(When using native RBD, make sure the --name you specify matches your
keyring file; otherwise you get strange authentication errors.)
The ls command showed my former iSCSI disks present. Now the scary part:
mapping the image to the system. I was able to run the rbd map command
successfully:

rbd map <diskname> --pool iscsi --name client.CLIENTNAME
The former iSCSI disk now appeared on my system at /dev/rbd0, and I was
able to mount the filesystem successfully. Hopefully these instructions
can help someone else along the way.
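One caveat worth noting: a plain "rbd map" does not survive a reboot. The rbdmap service shipped with ceph-common can re-map the image at boot. A sketch, assuming the same pool, image, and client names as above and a hypothetical mount point /mnt/data (the filesystem type is an assumption; use whatever the image actually carries):

```
# /etc/ceph/rbdmap
iscsi/<diskname>    id=CLIENTNAME,keyring=/etc/ceph/ceph.client.CLIENTNAME.keyring

# /etc/fstab
/dev/rbd/iscsi/<diskname>    /mnt/data    xfs    noauto,nofail,_netdev    0 0
```

Then enable the service with "systemctl enable rbdmap" so the mapping is restored on boot.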
Thanks!
Justin Goetz
Systems Engineer, TeraSwitch Inc.
jgoetz@xxxxxxxxxxxxxx
412-945-7045 (NOC) | 412-459-7945 (Direct)
On 3/16/21 5:42 PM, Richard Bade wrote:
Hi Justin,
I did some testing with iscsi a year or so ago. It was just using
standard rbd images on the backend, so yes, I think your theory of
stopping iscsi to release the locks and then providing access to the
rbd image would work.
Rich
On Wed, 17 Mar 2021 at 09:53, Justin Goetz <jgoetz@xxxxxxxxxxxxxx> wrote:
Hello!
I was hoping to inquire whether anyone here has attempted similar operations,
and if they ran into any issues. To give a brief overview of my
situation: I have a standard Octopus cluster running 15.2.2, with
ceph-iscsi installed via Ansible. The original scope of a project we
were working on changed, and we no longer need the iSCSI overhead added
to the project (the machine using Ceph is Linux, so we would like to use
native RBD block devices instead).
Ideally we would create some new pools and migrate the data from the
iSCSI pools over to the new pools, however, due to the massive amount of
data (close to 200 TB), we lack the physical resources necessary to copy
the files.
Digging a bit on the backend of the pools utilized by ceph-iscsi, it
appears that the iSCSI utility uses standard RBD images on the actual
backend:
~]# rbd info iscsi/pool-name
rbd image 'pool-name':
        size 200 TiB in 52428800 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 137b45a37ad84a
        block_name_prefix: rbd_data.137b45a37ad84a
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags: object map invalid, fast diff invalid
        create_timestamp: Thu Nov 12 16:14:31 2020
        access_timestamp: Tue Mar 16 16:13:41 2021
        modify_timestamp: Tue Mar 16 16:15:36 2021
And I can also see that, like a standard rbd image, our 1st iSCSI
gateway currently holds the lock on the image:
]# rbd lock ls --pool iscsi pool-name
There is 1 exclusive lock on this image.
Locker           ID               Address
client.3618592   auto 259361792   10.101.12.61:0/1613659642
Theoretically speaking, would I be able to simply stop & disable the
tcmu-runner processes on all iSCSI gateways in our cluster, which would
release the lock on the RBD image, then create another user with rwx
permissions to the iscsi pool? Would this work, or am I missing
something that would come back to bite me later on?
Looking for any advice on this topic. Thanks in advance for reading!
--
Justin Goetz
Systems Engineer, TeraSwitch Inc.
jgoetz@xxxxxxxxxxxxxx
412-945-7045 (NOC) | 412-459-7945 (Direct)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx