required Vol 41, Issue 14

[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

 




-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx
[mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of
linux-cluster-request@xxxxxxxxxx
Sent: Wednesday, September 12, 2007 9:30 PM
To: linux-cluster@xxxxxxxxxx
Subject: Linux-cluster Digest, Vol 41, Issue 14

Send Linux-cluster mailing list submissions to
	linux-cluster@xxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
	https://www.redhat.com/mailman/listinfo/linux-cluster
or, via email, send a message with subject or body 'help' to
	linux-cluster-request@xxxxxxxxxx

You can reach the person managing the list at
	linux-cluster-owner@xxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Linux-cluster digest..."


Today's Topics:

   1. GNBD Problems loading module (notol Perc)
   2. fence_scsi agent on RHEL 4.5 (Sadek, Abdel)
   3. changing configuration (Joel Becker)
   4. RHEL4.5, GFS and selinux, are they playing nice? (Roger Pe?a)
   5. Re: RE: qdisk votes not in cman (Alain Richard)
   6. Services timeout (Jordi Prats)
   7. Re: DLM - Lock Value Block error (Patrick Caulfield)


----------------------------------------------------------------------

Message: 1
Date: Tue, 11 Sep 2007 19:32:07 +0000
From: "notol Perc" <furor_hater@xxxxxxxxxxx>
Subject:  GNBD Problems loading module
To: linux-cluster@xxxxxxxxxx
Message-ID: <BAY121-F37184D72844EF4310B3C3286C10@xxxxxxx>
Content-Type: text/plain; format=flowed

Using the latest CVS Cluster Source (09-11-2007) I have configured a cluster

on kernel 2.6.23-rc5 (running under Debian Etch)

I can get everything running short of importing GNBD due to the fact that I 
can not find the kernal module.



I can directly make cluster/gnbd-kernel/src/ I get the following:

make -C /usr/src/linux-2.6.23-rc5 M=/usr/src/cluster/gnbd-kernel/src 
symverfile=/usr/src/linux-2.6.23-rc5/Module.symvers modules USING_KBUILD=yes
make[1]: Entering directory `/usr/src/linux-2.6.23-rc5'
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Leaving directory `/usr/src/linux-2.6.23-rc5'

then make install

make -C /usr/src/linux-2.6.23-rc5 M=/usr/src/cluster/gnbd-kernel/src 
symverfile=/usr/src/linux-2.6.23-rc5/Module.symvers modules USING_KBUILD=yes
make[1]: Entering directory `/usr/src/linux-2.6.23-rc5'
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Leaving directory `/usr/src/linux-2.6.23-rc5'
install -d /usr/include/linux
install gnbd.h /usr/include/linux
install -d /lib/modules/`uname -r`/kernel/drivers/block/gnbd
install gnbd.ko /lib/modules/`uname -r`/kernel/drivers/block/gnbd

Ca some one pleas help be get this going?

_________________________________________________________________
Get a FREE small business Web site and more from Microsoft. Office Live! 
http://clk.atdmt.com/MRT/go/aub0930003811mrt/direct/01/



------------------------------

Message: 2
Date: Tue, 11 Sep 2007 15:27:16 -0600
From: "Sadek, Abdel" <Abdel.Sadek@xxxxxxx>
Subject:  fence_scsi agent on RHEL 4.5
To: <Linux-cluster@xxxxxxxxxx>
Message-ID:
	<C776378855970A4DADE4A476447F6391DEFB64@xxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset="us-ascii"

I am running a 2-node cluster with RHEL 4.5 Native cluster. I am using
scsi persistent reservation as my fencing device. I have noticed when I
shutdown one of the nodes, the fence_scsi agent on the surviving node
fails to fence the dying node. I get the following message:
Sep 11 16:18:13 troy fenced[3614]: agent "fence_scsi" reports: parse
error: unknown option "nodename=porsche"
Sep 11 16:18:13 troy fenced[3614]: fence "porsche" failed
 
it looks like the fence_scsi command is executed using with the nodename
parameter instead of the -n option.
when I run fence_scsi  -h I get the following (there is no nodename
parameter)
Usage
fence_scsi [options]
Options
  -n <node>        IP address or hostname of node to fence
  -h               usage
  -V               version
  -v               verbose
 
But the man page of the fence_scsi command talks about using both the
"-n" and "nodename=" options.
So, how do I make the fence_scsi run with the -n instead of the
nodename= option?
 
Thanks.
Abdel...
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
https://www.redhat.com/archives/linux-cluster/attachments/20070911/068386c2/
attachment.html

------------------------------

Message: 3
Date: Tue, 11 Sep 2007 16:46:08 -0700
From: Joel Becker <Joel.Becker@xxxxxxxxxx>
Subject:  changing configuration
To: linux-cluster@xxxxxxxxxx
Message-ID: <20070911234607.GD27482@xxxxxxxxxx>
Content-Type: text/plain; charset=us-ascii

Hey everyone,
	How do I update the IP addresses of existing nodes?
	I have a simple cluster.  I had two nodes on a private network
(10.x.x.x).  I decided to add two more nodes, but they are only on the
public network.  So I wanted to add them as well as change the existing
nodes to use the public network.
	I shut down cman/ccs on all nodes.  I edited cluster.conf.  I
started cman back on one node, and I ensured that cman_tool went to the
new version of the config via "cman_tool version -r N+1".
	The problem is that it still appears to be using the private
network addresses.  I see this in the log and with "cman_tool nodes -a".
	What can I do to fix this, short of hunting down all cman and
openais droppings and removing them?  I want the "right" way :-)

Joel

-- 

"To fall in love is to create a religion that has a fallible god."
        -Jorge Luis Borges

Joel Becker
Principal Software Developer
Oracle
E-mail: joel.becker@xxxxxxxxxx
Phone: (650) 506-8127



------------------------------

Message: 4
Date: Tue, 11 Sep 2007 18:42:32 -0700 (PDT)
From: Roger Pe?a <orkcu@xxxxxxxxx>
Subject:  RHEL4.5, GFS and selinux, are they playing
	nice?
To: RedHat Cluster Suit <Linux-cluster@xxxxxxxxxx>
Message-ID: <724236.51256.qm@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=iso-8859-1

Hello everybody ;-)

I keep working in making a web cluster play nice after
the upgrade from RHEL4.4 -> RHEL4.5 
with this upgrade, the relation httpd-selinux become
more strict, my first problem came when the RHGFS4.4
do not support xattr (our web content is in a gfs
filesystem) so I must update RHGFS and RHCS to 4.5
(from centos recompilation)

so now I have support to xattr in ours GFS filesystems
but, here is the problem:
the httpd do not want to start because some config
files (witch reside in another GFS filesystem) have a
forbidden context (httpd can not read file with that
context) (those files are included from the main
apache configuration)
even if I change the context and ls -Z show me that I
change the context for every parent and final dir in
the GFS filesystem.
here are the error from selinux:
{ search } for  pid=2289 comm="httpd" name="/"
dev=dm-7 ino=25  
scontext=root:system_r:httpd_t
tcontext=system_u:object_r:nfs_t  
tclass=dir

as you can see, selinux is dening access to httpd
process to make a search in / (root of the filesystem
in device dm-7), with inode 25 and that inode is a
directory, it deny access because the context of that
directory is system_u:object_r:nfs_t 
 am I right?

but, that directory is /opt/soft:
ll -di /opt/soft/
25 drwxr-xr-x  8 root root 3864 Sep 11  2007
/opt/soft/
^^ <--- this is the inode

and it context is system_u:object_r:httpd_config_t:
ll -dZ /opt/soft/
drwxr-xr-x  root     root    
system_u:object_r:httpd_config_t /opt/soft/

so, who is wrong? ls -Z or "global selinux kernel
module" ?
because ls -Z show that the context of that directory
is system_u:object_r:httpd_config_t

if I set selinux to be in permissive mode, then apache
can start, of course, but with some complains like
this:

Sep 11 14:18:08 blade26 kernel:
audit(1189534688.151:38): avc:  denied  { search } for
 pid=2333 comm="httpd" name="/" dev=dm-7 ino=25  
scontext=root:system_r:httpd_t
tcontext=system_u:object_r:nfs_t  tclass=dir

Sep 11 14:18:08 blade26 kernel:
audit(1189534688.155:39): avc:  denied  { getattr }
for  pid=2333 comm="httpd" name="apache" dev=dm-7
ino=31  
scontext=root:system_r:httpd_t
tcontext=system_u:object_r:nfs_t  tclass=dir

Sep 11 14:18:08 blade26 kernel:
audit(1189534688.155:40): avc:  denied  { read } for 
pid=2333 comm="httpd" name="apache" dev=dm-7 ino=31  
scontext=root:system_r:httpd_t
tcontext=system_u:object_r:nfs_t  tclass=dir

Sep 11 14:18:08 blade26 kernel:
audit(1189534688.158:41): avc:  denied  { getattr }
for  pid=2333 comm="httpd" name="httpd.conf" dev=dm-7 

ino=484983 scontext=root:system_r:httpd_t  
tcontext=system_u:object_r:nfs_t tclass=file

Sep 11 14:18:08 blade26 kernel:
audit(1189534688.158:42): avc:  denied  { read } for 
pid=2333 comm="httpd" name="httpd.conf" dev=dm-7  
ino=484983 scontext=root:system_r:httpd_t  
tcontext=system_u:object_r:nfs_t tclass=file

this mean:
access deny to do 
1- search in /opt/soft
2- getattr and read directory /opt/soft/conf/apache
3- getattr and read file httpd.conf

but:
all this files or directory has context 
system_u:object_r:httpd_config_t 

ll -dZ /opt/soft/conf/apache/
drwxr-xr-x  root root system_u:object_r:httpd_config_t
 
/opt/soft/conf/apache/

ll -di /opt/soft/conf/apache/
31 drwxr-xr-x  2 root root 3864 Sep 11 09:44
/opt/soft/conf/apache/


is this related to the fact that selinux policy stated
this:
genfscon gfs /                 system_u:object_r:nfs_t

what do you recomment to solve this complains of
selinux?
mount the gfs filesystem with the option fscontext ?

but that filesystem has other stuff, not related with
apache, so, what context should I use?


thanks
roger


__________________________________________
RedHat Certified ( RHCE )
Cisco Certified ( CCNA & CCDA )


 
____________________________________________________________________________
________
Don't let your dream ride pass you by. Make it a reality with Yahoo! Autos.
http://autos.yahoo.com/index.html
 




------------------------------

Message: 5
Date: Wed, 12 Sep 2007 07:05:43 +0200
From: Alain Richard <alain.richard@xxxxxxxxxxx>
Subject: Re:  RE: qdisk votes not in cman
To: linux clustering <linux-cluster@xxxxxxxxxx>
Message-ID: <CA0AA44E-8956-4826-8083-3FD0976D3D58@xxxxxxxxxxx>
Content-Type: text/plain; charset="iso-8859-1"


Le 4 sept. 07 ` 23:13, Lon Hohberger a icrit :

> On Fri, Aug 31, 2007 at 12:46:50PM +0200, Alain RICHARD wrote:
>> Perhaps a better error reporting is needed in qdiskd to shows that we
>> have hit this problem. Also using a generic name like "qdisk device"
>> when qdiskd is registering its node to cman is a better approach.
>
> What about using the label instead of the device name, and restricting
> the label to 16 chars when advertising to cman?
>
> -- Lon

Because when using multipath devices (for example a two paths  
device), all the paths and the multi-path device are recognized as  
having the same label, so qdisk fails to get the good device (the  
multi-path device).

Regards,

-- 
Alain RICHARD <mailto:alain.richard@xxxxxxxxxxx>
EQUATION SA <http://www.equation.fr/>
Tel : +33 477 79 48 00     Fax : +33 477 79 48 01
Applications client/serveur, inginierie riseau et Linux

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
https://www.redhat.com/archives/linux-cluster/attachments/20070912/6510728f/
attachment.html

------------------------------

Message: 6
Date: Wed, 12 Sep 2007 09:14:04 +0200
From: Jordi Prats <jprats@xxxxxxxx>
Subject:  Services timeout
To: linux-cluster@xxxxxxxxxx
Message-ID: <46E791BC.2090006@xxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi,
I have a NFS server with RedHat Cluster. Sometimes when is on heavy load 
it sets the service status to failed. There's no fs corruption and no 
daemon is down. I suspect this is caused by some timeout while is 
checking the fs is mounted. There is any way to define the check 
interval or the check timeout?

Thank you!
Jordi

-- 
......................................................................
         __
        / /          Jordi Prats
  C E / S / C A      Dept. de Sistemes
      /_/            Centre de Supercomputacis de Catalunya

  Gran Capit`, 2-4 (Edifici Nexus) 7 08034 Barcelona
  T. 93 205 6464 7 F.  93 205 6979 7 jprats@xxxxxxxx
...................................................................... 



------------------------------

Message: 7
Date: Wed, 12 Sep 2007 12:45:41 +0100
From: Patrick Caulfield <pcaulfie@xxxxxxxxxx>
Subject: Re:  DLM - Lock Value Block error
To: linux clustering <linux-cluster@xxxxxxxxxx>
Message-ID: <46E7D165.4040301@xxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1

Christos Triantafillou wrote:
> Hi,
>  
> I am using RHEL 4.5 and DLM 1.0.3 on a 4-node cluster.
>  
> I noticed the following regarding the LVB:
> 1. there are two processes: one that sets the LVB of a resource while
> holding an EX lock
> and another one that has a NL lock on the same resource and is blocked
> on a dlm_lock_wait
> for getting a CR lock and reading the LVB.
> 2. when the first process is interrupted with control-C or killed, the
> second process gets
> an invalid LVB error.
> 
> It seems that DLM falsely releases the resource after the first process
> is gone and then
> the second process reads an uninitialized LVB.
>  
> Can you please confirm this error and create a bug report if necessary?

I've just run the program on VMS and it exhibits exactly the same behaviour.

Therefore I suspect this is not a bug ;-)

-- 
Patrick




------------------------------

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

End of Linux-cluster Digest, Vol 41, Issue 14
*********************************************


DISCLAIMER
==========
This e-mail may contain privileged and confidential information which is the property of Persistent Systems Pvt. Ltd. It is intended only for the use of the individual or entity to which it is addressed. If you are not the intended recipient, you are not authorized to read, retain, copy, print, distribute or use this message. If you have received this communication in error, please notify the sender and delete all copies of this message. Persistent Systems Pvt. Ltd. does not accept any liability for virus infected mails.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

[Index of Archives]     [Corosync Cluster Engine]     [GFS]     [Linux Virtualization]     [Centos Virtualization]     [Centos]     [Linux RAID]     [Fedora Users]     [Fedora SELinux]     [Big List of Linux Books]     [Yosemite Camping]

  Powered by Linux