Re: iSCSI to a Ceph node with 2 network adapters - how to?

	Hi everyone again,

	I am continuing to set up my test Ceph cluster (single-node so far).
	I changed 'chooseleaf' from 'host' to 'osd' in the CRUSH map
	to make it run healthy on a single node, and for the same
	purpose set 'minimum_gateways = 1' for the Ceph iSCSI gateway.
	I also upgraded the Ubuntu 18.04 kernel to mainline v4.17 to
	get the up-to-date iSCSI attribute support required by gwcli
	(qfull_time_out and probably something else).
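
	For reference, this is roughly how I did both changes (a sketch;
	the file names are just what I used here, and the config file
	location is how I read the ceph-iscsi settings):

	    # decompile the CRUSH map, relax the failure domain, re-inject
	    ceph osd getcrushmap -o crushmap.bin
	    crushtool -d crushmap.bin -o crushmap.txt
	    #   in the rule, change
	    #     step chooseleaf firstn 0 type host
	    #   to
	    #     step chooseleaf firstn 0 type osd
	    crushtool -c crushmap.txt -o crushmap.new
	    ceph osd setcrushmap -i crushmap.new

	    # /etc/ceph/iscsi-gateway.cfg, in the [config] section
	    minimum_gateways = 1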

	I was able to add client host IQNs and configure their CHAP
	authentication. I was able to add iSCSI LUNs referring to RBD
	images, and to assign LUNs to clients. 'gwcli ls /' and
	'targetcli ls /' show tidy trees with no signs of errors.
	iSCSI initiators on Windows 10 and 2008 R2 can log in to the
	portal with CHAP auth and list their assigned LUNs.
	Authenticated sessions also show up in the '*cli ls' output.
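
	In case it helps to reproduce, the gwcli side looked roughly
	like this (a sketch; the image name, size, IQNs and credentials
	are placeholders, not my exact values):

	    cd /disks
	    create pool=libvirt image=win-disk-1 size=50G
	    cd /iscsi-targets/<target-iqn>/hosts
	    create <client-iqn>
	    auth chap=<user>/<password>
	    disk add libvirt/win-disk-1

	(Depending on the ceph-iscsi version, 'disk add' may want the
	'libvirt.win-disk-1' dotted form instead.)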

	But:

	in Windows disk management, the mapped LUN is shown in the
	'offline' state. When I try to bring it online or to initialize
	the disk with an MBR or GPT partition table, I get messages like
	'device not ready' on Win10, or 'driver detected controller error
	on \device\harddisk\dr5', or the like.

	So, my usual question is: where should I look, and which logs
	should I enable, to find out what is going wrong?
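
	(The places I know to look at so far, in case that helps to
	narrow it down; I quote the log_level value from memory, the
	exact scale is documented in the comments inside tcmu.conf:)

	    # on the gateway: LIO/kernel messages and the iSCSI daemons
	    dmesg | tail
	    journalctl -u tcmu-runner -u rbd-target-api -u rbd-target-gw

	    # raise tcmu-runner verbosity in /etc/tcmu/tcmu.conf:
	    #   log_level = 4
	    # then restart it:
	    systemctl restart tcmu-runner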

	My setup specifics: I create my RBDs in a non-default pool
	('libvirt' instead of 'rbd'). Also, I create them with an
	erasure-coded data-pool (called 'jerasure21', as configured in
	the default erasure profile). Should I grant explicit access to
	these pools to some Ceph client I don't know about? I know that
	'gwcli' logs into Ceph as 'client.admin', but I am not sure
	about tcmu-runner and/or the user:rbd backstore provider.
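
	What I have in mind to check around that (a sketch; 'win-disk-1'
	is just an example image name):

	    # which pools the admin key (used by gwcli) may access
	    ceph auth get client.admin

	    # confirm the image really points at the EC data-pool
	    rbd info libvirt/win-disk-1

	    # an EC pool used as an RBD data-pool must allow overwrites
	    ceph osd pool get jerasure21 allow_ec_overwrites
	    # and if that returns false:
	    ceph osd pool set jerasure21 allow_ec_overwrites true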

	Thank you in advance for any directions that help me
	out of this problem.

Wladimir Mutel wrote:

Failed : disk create/update failed on p10s. LUN allocation failure

	Well, this was fixed by updating the kernel to v4.17 from the
	Ubuntu kernel/mainline PPA.
	Going on...