You are logging in to the same iSCSI server (IP address) with the iscsi commands, so both nodes are connected to the same shared storage. Just mount it from one node and create some files on it, then unmount it from that node, mount it from the other node, and see.
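The suggested test can be sketched as a small script. This is a dry run (the commands are only printed); the device and mount point names are the ones used elsewhere in this thread:

```shell
#!/bin/sh
# Dry-run sketch of the visibility test: remove the 'echo's to execute.
# /dev/sda1 and /newshare are the names used elsewhere in this thread.
DEV=/dev/sda1
MNT=/newshare

# --- on node 1: mount, create a file, unmount ---
echo "mount $DEV $MNT"
echo "touch $MNT/testfile"
echo "umount $MNT"

# --- on node 2: mount and look for the file ---
echo "mount $DEV $MNT"
echo "ls -l $MNT/testfile"
echo "umount $MNT"
```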
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

I rebooted all the machines and this time it seems to work. But I am stuck on something again. I can see:

# df -h
/dev/sda1             2.8G   37M  2.6G   2% /newshare

on both machines. But whenever I create a file on one initiator, it does not appear on the other. Why is that?
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of

When you mount the file system, check with the df command whether it is really mounted or not. Why don't you just stop the iscsi service on both nodes and restart it, to get a clean state? Please also search in some other forums, where the same information may already be available (Google whatever error messages you are getting).
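That df check can be scripted, assuming the /newshare mount point used in this thread. With `df -P`, every mounted file system gets exactly one output line, and the sixth column is the mount point:

```shell
#!/bin/sh
# Report whether a given mount point is really mounted, using df.
MNT=/newshare
if df -P 2>/dev/null | awk '{print $6}' | grep -qx "$MNT"; then
    STATE="mounted"
else
    STATE="not mounted"
fi
echo "$MNT is $STATE"
```

A clean restart of the initiator would then be `service iscsi stop && service iscsi start` on each node.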
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

Hi, I have successfully set up the iSCSI target and initiator. I am able to create a partition and file system on the previously raw partition. I mounted the partition as:

# mount /dev/sda1 /newshare

(the mount point given in Cluster Tool > Resources > Filesystem; I also provided e2label /dev/sda1 DATA). But when I tried to restart iscsi on the next cluster node it showed me:

Removing iscsi driver: ERROR: Module iscsi_sfnet is in use

What is this error all about? Now it shows up on both nodes.
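The "Module iscsi_sfnet is in use" error usually means something still holds the driver, typically a file system that is still mounted from the iSCSI disk. A dry-run sketch of the checks, using the names from this thread:

```shell
#!/bin/sh
# Dry run: remove the 'echo's to execute.
MNT=/newshare
MOD=iscsi_sfnet
echo "umount $MNT"          # release the mount first
echo "lsmod | grep $MOD"    # the 'Used by' count should drop to 0
echo "service iscsi stop"   # now the driver can be unloaded
```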
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of

To discover this volume from both nodes (hopefully you are aware of these iscsi commands), just giving examples:

1) First discover whether the volumes are visible:

# iscsiadm --mode discovery --type sendtargets --portal 10.1.40.222
(where 10.1.40.222 is the IP address of the iscsi target)
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov
10.1.40.2:3260,1 iqn.2007-06.com.unisys:prov-goldilocks1
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p3vmware
10.1.40.2:3260,1 iqn.2007-06.com.unisys:p2vmware

You can see it is showing the prov, prov-goldilocks1, p3vmware and p2vmware volumes (whichever are created).

2) Log in to the target:

# iscsiadm --mode node --targetname iqn.2007-06.com.unisys:prov --portal 10.1.40.222 --login

3) Run cat /proc/partitions; it should show you a /dev/sd* device.

4) Mount that /dev/sd* on either cluster node (it should allow you to mount from both nodes). Just read some iscsi manuals and do this without the GUI; afterwards you can add a new cluster resource which automatically mounts your shared device when the cluster manager is started. So better to configure it using the iscsi commands first and see whether you can mount it from both nodes; then you can add a resource for it.
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

Yes, I have now created the /newshare directory on both iSCSI initiator machines (the cluster nodes). I made the following entry through system-config-cluster:

Resources >> Add New Resource >> Filesystem
  Name: Sharedstorage
  Mount Point: /newshare
  Device: /dev/sda6
  Option:
  Filesystem type: ext3

I saved the file and sent it to the other cluster nodes. Now what next? How will I know whether the shared storage is seen by both cluster nodes?

Earlier I had a script called duoscript on both cluster nodes. What I had tested: I ran the script on both nodes, stopped a few processes on one node, and the other immediately took over responsibility. Now where should I put the script on the shared storage (target)? Pls Help
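One way to answer "is the shared storage seen by both cluster nodes" is to compare the kernel partition lists on each node: after a successful iSCSI login, the same sdX device should appear in /proc/partitions on both. A dry-run sketch (the node names are hypothetical placeholders, not from the thread):

```shell
#!/bin/sh
# Dry run: prints the check to run for each node. 'node1' and 'node2'
# are placeholder hostnames; substitute your cluster nodes.
for node in node1 node2; do
    echo "ssh $node 'grep sd /proc/partitions'"
done
```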
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of

Re: When I click on Resource >> File System in the Cluster Tool, it asks for Mount point, Device, Option, Name, fil…

Create one directory as the mount point, select whichever file system you want from the list (you can choose the default file system ID there), and the GUI will do the rest.
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

Anyway, I am successful in setting up the iSCSI initiator and target. What I did: I created a raw (unformatted) partition on the target machine and restarted both machines. I put:

Lun 0 path=/dev/sda6

and that did the job for me. Now I can easily see:

[root@BL01DL385 ~]# cat /proc/scsi/scsi
Attached devices:
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: IET      Model: VIRTUAL-DISK     Rev: 0
  Type:   Direct-Access                    ANSI SCSI revision: 04

The VIRTUAL-DISK entry confirms it. Now I am making the entry in system-config-cluster and want to know exactly what to enter here: when I click on Resource >> File System in the Cluster Tool, it asks for Mount point, Device, Option, Name, fil…

My machine address is 10.14.236.134. The path of the unformatted partition is /dev/sda6. As of now I only have an unformatted partition; do I need to format it? Pls Help
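On the formatting question: yes, an unformatted partition needs a file system before a Filesystem resource can mount it. This is a one-time step, run from ONE node only, sketched here as a dry run (/dev/sda6 and the DATA label are the names used earlier in this thread):

```shell
#!/bin/sh
# Dry run: remove the 'echo's to execute. Run from ONE node only.
DEV=/dev/sda6
echo "mkfs.ext3 $DEV"       # create the ext3 file system
echo "e2label $DEV DATA"    # optional label, matching the e2label used earlier
```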
From: Singh Raina, Ajeet

[root@BL02DL385 ~]# iscsi-ls
*******************************************************************************
SFNet iSCSI Driver Version ...4:0.1.11-6(03-Aug-2007)
*******************************************************************************
TARGET NAME     : iqn.2008-07.com.logica.bl04mpdsk:storage.lun1
TARGET ALIAS    :
HOST ID         : 0
BUS ID          : 0
TARGET ID       : 0
TARGET ADDRESS  : 10.14.236.134:3260,1
SESSION STATUS  : ESTABLISHED AT Wed Jul 9 12:22:50 IST 2008
SESSION ID      : ISID 00023d000001 TSIH 100
*******************************************************************************
[root@BL02DL385 ~]# chkconfig iscsi on
[root@BL02DL385 ~]#

I guess it worked; the iSCSI setup is finally done. What is the next step? Pls help
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

I followed what the doc said and it went this way:

[root@BL02DL385 ~]# rpm -ivh iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm
warning: iscsi-initiator-utils-4.0.3.0-6.x86_64.rpm: V3 DSA signature: NOKEY, key ID 9b3c94f4
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]
[root@BL02DL385 ~]# vi /etc/iscsi.conf

DiscoveryAddress=10.14.236.134
# OutgoingUsername=fred
# OutgoingPassword=uhyt6h
# and/or
# DiscoveryAddress=10.14.236.134
# IncomingUsername=mary
# IncomingPassword=kdhjkd9l

[root@BL02DL385 ~]# service iscsi start
Checking iscsi config:                                     [ OK ]
Loading iscsi driver:                                      [ OK ]
Starting iscsid:                                           [ OK ]
[root@BL02DL385 ~]# CD /proc/scsi/scsi
-bash: CD: command not found
[root@BL02DL385 ~]# vi /proc/scsi/scsi

It displays:

Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: IET      Model: VIRTUAL-DISK     Rev: 0
  Type:   Direct-Access                    ANSI SCSI revision: 04

Is it working fine? I will run the same command sequence on the other cluster node. Is everything fine up to this point? What next?
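The same sequence on the second node can be sketched as a dry run; note that /proc/scsi/scsi is read with cat (the transcript above accidentally used CD and vi):

```shell
#!/bin/sh
# Dry run of the verification sequence for the second node.
for cmd in "service iscsi start" "cat /proc/scsi/scsi" "iscsi-ls"; do
    echo "$cmd"
done
# cat /proc/scsi/scsi should show the IET VIRTUAL-DISK entry, and
# iscsi-ls should report an ESTABLISHED session.
```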
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

Great!!! I ran depmod and it all runs well now. Thanks for the link anyway.

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of P, Prakash

This is related to IET. Go through their mailing list to find the solution:
http://www.nabble.com/iSCSI-Enterprise-Target-f4401.html
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

I am facing this issue:

[root@vjs iscsitarget]# service iscsi-target restart
Stopping iSCSI target service:                             [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd : Connection refused                            [FAILED]

Logs (/var/log/messages):

Jul 10 15:25:24 vjs ietd: nl_open -1
Jul 10 15:25:24 vjs ietd: netlink fd
Jul 10 15:25:24 vjs ietd: : Connection refused
Jul 10 15:25:24 vjs iscsi-target: ietd startup failed

Any idea? I just did the following steps:

[root@vjs ~]# mkdir cluster_share
[root@vjs ~]# cd cluster_share/
[root@vjs cluster_share]# touch shared
[root@vjs cluster_share]# cd
[root@vjs ~]# mkdir /usr/src/iscsitarget
[root@vjs ~]# cd /usr/src/iscsitarget/
[root@vjs iscsitarget]# rpm -Uvh /usr/src/redhat/RPMS/x86_64/iscsitarget-0.4.12-6.x86_64.rpm /usr/src/redhat/RPMS/x86_64/iscsitarget-kernel-0.4.12-6_2.6.9_34.EL.x86_64.rpm
Preparing...                ########################################### [100%]
   1:iscsitarget-kernel     ########################################### [ 50%]
   2:iscsitarget            ########################################### [100%]
[root@vjs iscsitarget]# chkconfig --add iscsi-target
[root@vjs iscsitarget]# chkconfig --level 2345 iscsi-target on
[root@vjs iscsitarget]# vi /etc/ietd.conf

Target iqn.2008-07.com.logica.vjs:storage.lun1
        IncomingUser
        OutgoingUser
        Lun 0 Path=/root/cluster_share,Type=fileio
        Alias iDISK0

I had created the cluster_share folder earlier. (Is the problem because of the folder? Doubt??)

[root@vjs iscsitarget]# hostname
vjs
[root@vjs iscsitarget]# vi /etc/hosts
[root@vjs iscsitarget]# ping vjs
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.053 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.033 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=2 ttl=64 time=0.029 ms
--- vjs.logica.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.029/0.038/0.053/0.011 ms, pipe 2
[root@vjs iscsitarget]# ping vjs.logica.com
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=0 ttl=64 time=0.026 ms
64 bytes from vjs.logica.com (10.14.236.134): icmp_seq=1 ttl=64 time=0.030 ms
--- vjs.logica.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.026/0.028/0.030/0.002 ms, pipe 2
[root@vjs iscsitarget]# vi /etc/ietd.conf
[root@vjs iscsitarget]# service iscsi-target restart
Stopping iSCSI target service:                             [FAILED]
Starting iSCSI target service: FATAL: Module iscsi_trgt not found.
netlink fd : Connection refused                            [FAILED]
[root@vjs iscsitarget]#
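"FATAL: Module iscsi_trgt not found" means the kernel cannot locate the module that the iscsitarget-kernel RPM installed; a later message in this thread confirms that running depmod fixed it. The fix can be sketched as a dry run:

```shell
#!/bin/sh
# Dry run: remove the 'echo's to execute (as root, on the target machine).
MOD=iscsi_trgt
echo "depmod -a"                   # rebuild modules.dep so the module is found
echo "modprobe $MOD"               # load the target module
echo "service iscsi-target start"
```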
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

So I have the following entry in my ietd.conf file:

# iscsi target configuration
Target iqn.2008-10.com.logical.pe:storage.lun1
        IncomingUser
        OutgoingUser
        Lun 0 Path=/home/vjs/sharess,Type=fileio
        Alias iDISK0
        #MaxConnections  6

Is the above entry correct? My machine's hostname is pe.logical.com. I am a little confused about storage.lun1; what is that? I have not included any incoming or outgoing user, so it is open to all. What about the Alias entry?

After this entry is made, I also have confusion on the client side. The doc says you need to make an entry in the /etc/iscsi.conf file like:

# simple iscsi.conf
DiscoveryAddress=172.30.0.28
OutgoingUserName=gfs
OutgoingPassword=secretsecret
LoginTimeout=15
DiscoveryAddress=172.30.0.28

What does the above entry mean? An IP? For my setup, I am using an RHEL 4.0 machine with IP 10.14.236.134 as the target machine, and the two nodes 10.14.236.106 and 10.14.236.108 are already cluster nodes.

Thanks for helping me out. But you also need to help me with what entry I need to make in cluster.conf after these things are completed.
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of P, Prakash

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

Do I need to mention Lun 0? Is it needed?

Yes, of course it's needed.
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of P, Prakash

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Singh Raina, Ajeet

I want to set up iSCSI because I am running short of shared storage. One of the docs, http://mail.digicola.com/wiki/index.php?title=User:Martin:iSCSI, says:

[doc]
Install the Target
1. Install RHEL4; I used kickstart with just "@ base" for packages. Configure the system with two drives, sda and sdb, or create two logical volumes (LVM). The first disk is for the OS and the second for the iSCSI storage.
[/doc]

My hard disk partitioning says:

[code]
[root@vjs ~]# fdisk -l

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        9729    78043770   8e  Linux LVM
[/code]

[code]
# This file is edited by fstab-sync - see 'man fstab-sync' for details
/dev/VolGroup00/LogVol00  /             ext3    defaults                      1 1
LABEL=/boot               /boot         ext3    defaults                      1 2
/dev/VolGroup00/LogVol02  /data         ext3    defaults                      1 2
none                      /dev/pts      devpts  gid=5,mode=620                0 0
none                      /dev/shm      tmpfs   defaults                      0 0
none                      /proc         proc    defaults                      0 0
none                      /sys          sysfs   defaults                      0 0
#/dev/dvd                 /mnt/dvd      auto    defaults,exec,noauto,managed  0 0
/dev/hda                  /media/cdrom  auto    pamconsole,exec,noauto,managed 0 0
/dev/VolGroup00/LogVol01  swap          swap    defaults                      0 0
[/code]

Since I need to make an entry like this in /etc/ietd.conf:

# iscsi target configuration
Target iqn.2000-12.com.digicola:storage.lun1
        IncomingUser gfs secretsecret
        OutgoingUser
        Lun 0 Path=/dev/sdb,Type=fileio
        Alias iDISK0
        #MaxConnections  6

should I make a separate partition, or what should I mention in the Lun 0 Path=??? entry?

If you wish, you can create a separate partition. Otherwise create a file and give the full path of the file [e.g. Path=/home/test/target_file].

Pls Help
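The file-backed alternative from the reply can be sketched like this (the /tmp path and the 64 MB size are examples, not from the thread):

```shell
#!/bin/sh
# Create a 64 MB backing file for a file-backed LUN (example path/size).
BACKING=/tmp/target_file
dd if=/dev/zero of="$BACKING" bs=1M count=64 2>/dev/null
ls -l "$BACKING"
# Then point the LUN at it in /etc/ietd.conf:
#   Lun 0 Path=/tmp/target_file,Type=fileio
```

A dedicated partition (or LVM volume) avoids the extra file system layer, but a backing file is the quickest way to test.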
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster