Re: Question about migrating from iSCSI to RBD

Hi Justin,
I did some testing with iSCSI a year or so ago. It was just using
standard RBD images on the backend, so yes, I think your theory of
stopping iSCSI to release the locks and then providing access to the
RBD image directly would work.
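
Roughly, the cutover might look like this (just a sketch; the service
names are the standard ceph-iscsi units, and client.rbd-user is a
placeholder for whatever CephX user you end up creating):

# on every iSCSI gateway: stop and disable the gateway services
~]# systemctl disable --now rbd-target-api rbd-target-gw tcmu-runner

# confirm the exclusive lock was dropped (a stale lock should also be
# broken automatically by the next exclusive-lock client)
~]# rbd lock ls --pool iscsi pool-name

# then map the image natively on the Linux client
~]# rbd map iscsi/pool-name --id rbd-user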

Rich

On Wed, 17 Mar 2021 at 09:53, Justin Goetz <jgoetz@xxxxxxxxxxxxxx> wrote:
>
> Hello!
>
> I was hoping to ask whether anyone here has attempted a similar
> operation, and whether they ran into any issues. To give a brief
> overview of my situation: I have a standard Octopus cluster running
> 15.2.2, with ceph-iscsi installed via ansible. The original scope of
> the project we were working on changed, and we no longer need the
> iSCSI overhead (the machine using Ceph runs Linux, so we would like
> to use native RBD block devices instead).
>
> Ideally we would create some new pools and migrate the data from the
> iSCSI pools over to the new ones. However, due to the massive amount
> of data (close to 200 TB), we lack the physical resources necessary
> to copy it.
>
> Digging a bit into the pools utilized by ceph-iscsi, it appears that
> the iSCSI tooling uses standard RBD images on the backend:
>
> ~]# rbd info iscsi/pool-name
> rbd image 'pool-name':
>      size 200 TiB in 52428800 objects
>      order 22 (4 MiB objects)
>      snapshot_count: 0
>      id: 137b45a37ad84a
>      block_name_prefix: rbd_data.137b45a37ad84a
>      format: 2
>      features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
>      op_features:
>      flags: object map invalid, fast diff invalid
>      create_timestamp: Thu Nov 12 16:14:31 2020
>      access_timestamp: Tue Mar 16 16:13:41 2021
>      modify_timestamp: Tue Mar 16 16:15:36 2021
>
> And I can also see that, as with a standard RBD image, our first
> iSCSI gateway currently holds the exclusive lock on the image:
>
> ]# rbd lock ls --pool iscsi pool-name
> There is 1 exclusive lock on this image.
> Locker          ID              Address
> client.3618592  auto 259361792  10.101.12.61:0/1613659642
>
> Theoretically speaking, would I be able to simply stop and disable
> the tcmu-runner processes on all iSCSI gateways in our cluster, which
> would release the lock on the RBD image, and then create another user
> with rwx permissions on the iscsi pool? Would this work, or am I
> missing something that would come back to bite me later on?
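>
> For the new user, I assume something along these lines would be
> enough (client.rbd-user is just a placeholder name):
>
> ~]# ceph auth get-or-create client.rbd-user \
>         mon 'profile rbd' osd 'profile rbd pool=iscsi'
>
> One related thing I'm unsure of: the image has object-map, fast-diff,
> and deep-flatten enabled, so mapping it with krbd presumably needs a
> reasonably recent kernel, or those features disabled first via "rbd
> feature disable".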
>
> Looking for any advice on this topic. Thanks in advance for reading!
>
> --
>
> Justin Goetz
> Systems Engineer, TeraSwitch Inc.
> jgoetz@xxxxxxxxxxxxxx
> 412-945-7045 (NOC) | 412-459-7945 (Direct)
>
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


