Re: RBD image repurpose between iSCSI and QEMU VM, how to do properly ?

On Mon, Jul 2, 2018 at 5:23 AM Wladimir Mutel <mwg@xxxxxxxxx> wrote:
 Dear all,

 I am doing more experiments with the Ceph iSCSI gateway, and I am a bit confused about how to properly repurpose an RBD image from an iSCSI target into a QEMU virtual disk and back.

 First, I create an RBD image and set it up as an iSCSI backstore in gwcli, specifying its size exactly to avoid unwanted resizes.
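
 Roughly what I did, with placeholder pool/image names and an example size (the gwcli syntax is from memory, so treat it as a sketch):

     rbd create --size 2048G rbd/wintarget

     gwcli
     /> cd /disks
     /disks> create pool=rbd image=wintarget size=2048G
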
 Next, I connect Windows 2008 R2 to this image (enabling MPIO before connecting and selecting the 'Failover only' MPIO policy for the accessed device).
 Then, in Windows Disk Management, I initialize the physical disk with GPT, convert it into a dynamic disk, and create a simple NTFS volume in its free space.
 Then, in the same console, I put the disk 'offline', and in the iSCSI control panel I disconnect the session from the Windows side.
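
 (The offline step can also be done from diskpart; the disk number below is hypothetical:)

     diskpart
     DISKPART> list disk
     DISKPART> select disk 1
     DISKPART> offline disk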

 Then I attach the same RBD image to a QEMU/KVM virtual machine running Ubuntu 18.04, as a virtio/librbd storage drive.
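
 (On the QEMU command line, such an attachment looks roughly like this, reusing the placeholder image name from above:)

     qemu-system-x86_64 ... \
         -drive file=rbd:rbd/wintarget:conf=/etc/ceph/ceph.conf,format=raw,if=virtio
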
 Then I boot the Ubuntu 18.04 VM, find the NTFS filesystem using 'ldmtool create all', and during ntfsclone from an external disk I discover that the RBD image is mapped read-only.
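
 In outline (the device-mapper volume name is just an example of what ldmtool produces, and the source partition is hypothetical):

     # assemble the Windows dynamic-disk (LDM) volumes as device-mapper targets
     ldmtool create all
     # clone the source NTFS partition onto the assembled volume
     ntfsclone --overwrite /dev/mapper/ldm_vol_HOST-Dg0_Volume1 /dev/sdb1
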
 Ok, I stop the Ubuntu VM, do 'rbd lock rm' for this image (the lock is held by tcmu-runner, I suppose), restart Ubuntu, restart ntfsclone, and this time it goes well.
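
 That is, roughly (the lock id and client name below are made-up examples; 'rbd lock ls' prints the real ones):

     rbd lock ls rbd/wintarget
     rbd lock rm rbd/wintarget "auto 139871234567890" client.4567
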
 Btw, ntfsclone onto the device-mapper target created by ldmtool runs about 2x faster than directly onto the virtio disk (vdN), so it transferred my 1600+ GB in just 13+ hours instead of 27+.

 Ok, the external NTFS volume is cloned seemingly well; I shut down the Ubuntu VM (it properly removed the RBD lock on shutdown) and try to access the image from Windows over iSCSI again.
 And at this moment I stumble into trouble. First, I don't see the RBD image in 'Devices' on the iSCSI initiator control panel. I tried to resolve this by restarting tcmu-runner.
 After reconnecting from the Windows side, the RBD image became visible in devices (and the RBD lock was reacquired on the tcmu side),
 but its MPIO button was disabled, so I could not check or change the MPIO policy (I do, of course, enable MPIO in the 'Connect' dialog).
 I also tried restarting rbd-target-gw, but that did not help either. Restarting the Windows server did not improve the situation (the MPIO button is still disabled).
 What should I try restarting next, to avoid restarting the whole Ceph host? Maybe unload/reload some kernel modules?
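
 For reference, these are the restarts I have tried so far (rbd-target-api may also be relevant depending on the ceph-iscsi version, if I understand correctly):

     systemctl restart tcmu-runner
     systemctl restart rbd-target-gw
     systemctl restart rbd-target-api   # if present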

 Thanks in advance for your help. I hope I could track down and resolve the problem myself, but that would likely take more time than getting help from you.

This isn't really a use case that we support or intend to support. Your best bet would be to use an iSCSI initiator on your Linux host to connect to the same LUN that is being exported over iSCSI (just make sure the NTFS file system is quiesced / frozen first).
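
Something along these lines with open-iscsi on the Linux host (the portal address and target IQN below are examples, not your actual values):

    # discover and log in to the gateway from the Linux host
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260
    iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.168.1.10:3260 --login

    # when done, log back out before handing the LUN back
    iscsiadm -m node -T iqn.2003-01.com.redhat.iscsi-gw:ceph-igw -p 192.168.1.10:3260 --logout

(Taking the disk offline on the Windows side before switching, as you already do, covers the quiesce part.)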

--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
