Re: conga bug or my mistake?

On Monday 18 August 2008 16:05, Grisha G. wrote:
> Show us the logs from your nodes

All right, here is the output from /var/lib/luci/log/event.log when I access
Cluster -> select my cluster (httpcluster) -> the Shared Fence Devices link:

Cluster: httpcluster
Agent type: Global Network Block Device
Name: gnbd_from_shds
Nodes using this device for fencing: 192.168.113.3

Now, when I click the 192.168.113.3 link, I get the following:

[root@rhclm ~]# tail -f /var/lib/luci/log/event.log
[some old output omitted]
...
2008-08-18T16:25:07 ERROR Zope.SiteErrorLog 
https://192.168.113.8:8084/luci/cluster/index_html
Traceback (innermost last):
  Module ZPublisher.Publish, line 115, in publish
  Module ZPublisher.mapply, line 88, in mapply
  Module ZPublisher.Publish, line 41, in call_object
  Module Shared.DC.Scripts.Bindings, line 311, in __call__
  Module Shared.DC.Scripts.Bindings, line 348, in _bindAndExec
  Module Products.PageTemplates.ZopePageTemplate, line 255, in _exec
  Module Products.PageTemplates.PageTemplate, line 104, in pt_render
   - <ZopePageTemplate at /luci/cluster/index_html>
  Module TAL.TALInterpreter, line 238, in __call__
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 728, in do_defineMacro
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 780, in do_defineSlot
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 728, in do_defineMacro
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 749, in do_useMacro
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 715, in do_condition
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 715, in do_condition
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 749, in do_useMacro
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 715, in do_condition
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 691, in do_loop_tal
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 715, in do_condition
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 691, in do_loop_tal
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 749, in do_useMacro
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 715, in do_condition
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 715, in do_condition
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 457, in do_optTag_tal
  Module TAL.TALInterpreter, line 442, in do_optTag
  Module TAL.TALInterpreter, line 437, in no_tag
  Module TAL.TALInterpreter, line 281, in interpret
  Module TAL.TALInterpreter, line 735, in do_useMacro
  Module Products.PageTemplates.TALES, line 221, in evaluate
   - URL: /luci/cluster/fence-macros
   - Line 2034, Column 2
   - Expression: standard:'here/fence-macros/macros/fence-instance-form-gnbd'
   - Names:
      {'container': <Folder at /luci/cluster>,
       'context': <Folder at /luci/cluster>,
       'default': <Products.PageTemplates.TALES.Default instance at 0xb75707ec>,
       'here': <Folder at /luci/cluster>,
       'loop': <Products.PageTemplates.TALES.SafeMapping object at 0xdcde8ac>,
       'modules': <Products.PageTemplates.ZRPythonExpr._SecureModuleImporter instance at 0xb75123ac>,
       'nothing': None,
       'options': {'args': ()},
       'repeat': <Products.PageTemplates.TALES.SafeMapping object at 0xdcde8ac>,
       'request': <HTTPRequest, URL=https://192.168.113.8:8084/luci/cluster/index_html>,
       'root': <Application at >,
       'template': <ZopePageTemplate at /luci/cluster/index_html>,
       'traverse_subpath': [],
       'user': <PropertiedUser 'admin'>}
  Module Products.PageTemplates.Expressions, line 185, in __call__
  Module Products.PageTemplates.Expressions, line 173, in _eval
  Module Products.PageTemplates.Expressions, line 127, in _eval
   - __traceback_info__: here
  Module Products.PageTemplates.Expressions, line 320, in restrictedTraverse
   - __traceback_info__: {'path': ['fence-macros', 'macros', 'fence-instance-form-gnbd'], 'TraversalRequestNameStack': []}
KeyError: 'fence-instance-form-gnbd'

What other logs do you need? Nothing related to this event appears in syslog 
(/var/log/messages)!
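
If it helps, here is roughly what I can run to pull more information together
(the grep pattern below is just my guess at what might be relevant; rs1 is one
of the client nodes):

# on the luci machine: watch the event log while reproducing the click
[root@rhclm ~]# tail -n 50 /var/lib/luci/log/event.log

# on each cluster node: look for anything from ricci or fencing
[root@rs1 ~]# grep -i -e ricci -e fence /var/log/messages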

Regards,
Alx

>
> On Mon, Aug 18, 2008 at 1:27 PM, Alex <linux@xxxxxxxxxxx> wrote:
> > Hello all,
> >
> > My current setup is similar to the one described here:
> > http://sources.redhat.com/cluster/gnbd/gnbd_usage.txt
> > except that I have 3 clients and 3 gnbd servers
> > (exporting block devices using gnbd).
> >
> > Our gnbd servers have the following IPs: 192.168.113.6 and 192.168.113.7.
> > Our gnbd clients have the following IPs: 192.168.113.3, 192.168.113.4,
> > and 192.168.113.5.
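> >
> > The exports and imports themselves are set up as in that document; roughly
> > like this (the device path and export name below are just placeholders,
> > not our real values):
> >
> > # on each gnbd server (192.168.113.6 / 192.168.113.7): export a block device
> > [root@gnbd-server ~]# gnbd_export -d /dev/sdb1 -e shd1
> >
> > # on each gnbd client: import what the server exports
> > [root@rs1 ~]# gnbd_import -i 192.168.113.6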
> >
> > On our management machine (separate from the gnbd clients and servers
> > above) we are running:
> > [root@rhclm ~]# rpm -q luci
> > luci-0.12.0-7.el5.centos.3
> > [root@rhclm ~]#
> >
> > On our gnbd clients we are running:
> > [root@rs1 ~]# rpm -q ricci
> > ricci-0.12.0-7.el5.centos.3
> > [root@rs1 ~]#
> >
> > Now, I'm trying to do the following operations using conga:
> > Cluster -> Shared Fence Devices -> Add Fence Device
> >
> > This was added successfully:
> >
> > Fence Type: GNBD
> > Name: gnbd_from_shds
> > Servers: 192.168.113.6 192.168.113.7
> >
> > This will add in our cluster.conf:
> > <fencedevices>
> >        <fencedevice agent="fence_gnbd" name="gnbd_from_shds"
> > servers="192.168.113.6 192.168.113.7"/>
> > </fencedevices>
> >
> > Let's try to use it: Cluster -> Nodes -> click 192.168.113.3 and select the
> > option "Manage Fencing for this Node" -> "Main Fencing Method" -> "Add a
> > fence device to this level" -> select gnbd_from_shds -> and hit "Update
> > main fence properties".
> >
> > It is not working; every time I get a JavaScript popup error saying the
> > following:
> >
> > [snip]
> > The following errors were found:
> > An unknown device type was given: "gnbd."
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > [end snip]
> >
> > As you can see, there is a dot after "gnbd", which I suppose is causing that error.
> >
> > How can this error be fixed?
> >
> > Now, I manually edited our cluster.conf as follows:
> > <clusternode name="192.168.113.3" nodeid="3" votes="1">
> >        <fence>
> >                <method name="1">
> >                        <device name="gnbd_from_shds"
> > nodename="192.168.113.3"/>
> >                </method>
> >        </fence>
> > </clusternode>
> >
> > First Question: In the docs I cannot find any explanation of the
> > name="value" attribute of the <method> tag. As you see, the value here is
> > "1": <method name="1">. Is this value meaningful only inside its
> > <clusternode> section, or does it have global significance in
> > cluster.conf? Can I name it, for example, "one" or
> > "first_fence_method_for_this_node"?
> >
> > After the manual edit above, I ran:
> > [root@rs1 ~]# ccs_tool update /etc/cluster/cluster.conf
> > Config file updated from version 28 to 29
> >
> > Update complete.
> > [root@rs1 ~]#
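> >
> > (To double-check that the new version propagated, I believe something like
> > this can be run on each node -- cman_tool should report config version 29:)
> >
> > [root@rs1 ~]# cman_tool version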
> >
> > Now, in conga's "Shared Fence Devices" section, I can see:
> >
> > Shared Fence Devices for Cluster: httpcluster
> > Agent type: Global Network Block Device
> > Name: gnbd_from_shds
> > Nodes using this device for fencing: 192.168.113.3
> >
> > but if I click the 192.168.113.3 link, I get another error:
> >
> > Site error
> >
> > This site encountered an error trying to fulfill your request. The errors
> > were:
> >
> > Error Type
> >    KeyError
> > Error Value
> >    'fence-instance-form-gnbd'
> > Request made at
> >    2008/08/18 12:42:45.164 GMT+3
> >
> > Any ideas how to fix it? Is it my mistake, or is it a bug in conga?
> >
> > Second Question: Is it correct to add and use the syntax below for the
> > rest of our client nodes?
> >
> > For: 192.168.113.4 and 192.168.113.5 client nodes:
> >
> > <clusternode name="192.168.113.4" nodeid="2" votes="1">
> >        <fence>
> >                <method name="1">
> >                        <device name="gnbd_from_shds"
> > nodename="192.168.113.4"/>
> >                </method>
> >        </fence>
> > </clusternode>
> >
> > and
> >
> > <clusternode name="192.168.113.5" nodeid="1" votes="1">
> >        <fence>
> >                <method name="1">
> >                        <device name="gnbd_from_shds"
> > nodename="192.168.113.5"/>
> >                </method>
> >        </fence>
> > </clusternode>
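> >
> > (If that syntax is right, I assume the procedure is the same as before:
> > bump config_version in the <cluster> tag, e.g. from 29 to 30, and push the
> > file out again:)
> >
> > [root@rs1 ~]# ccs_tool update /etc/cluster/cluster.conf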
> >
> > For reference, I am posting my current cluster.conf file below:
> >
> > <?xml version="1.0"?>
> > <cluster alias="httpcluster" config_version="29" name="httpcluster">
> >        <fence_daemon clean_start="0" post_fail_delay="0"
> > post_join_delay="3"/>
> >        <clusternodes>
> >                <clusternode name="192.168.113.5" nodeid="1" votes="1">
> >                        <fence/>
> >                </clusternode>
> >                <clusternode name="192.168.113.4" nodeid="2" votes="1">
> >                        <fence/>
> >                </clusternode>
> >                <clusternode name="192.168.113.3" nodeid="3" votes="1">
> >                        <fence>
> >                                <method name="1">
> >                                        <device name="gnbd_from_shds"
> > nodename="192.168.113.3"/>
> >                                </method>
> >                        </fence>
> >                </clusternode>
> >                <clusternode name="192.168.113.6" nodeid="4" votes="1">
> >                        <fence/>
> >                </clusternode>
> >                <clusternode name="192.168.113.7" nodeid="5" votes="1">
> >                        <fence/>
> >                </clusternode>
> >        </clusternodes>
> >        <cman/>
> >        <fencedevices>
> >                <fencedevice agent="fence_gnbd" name="gnbd_from_shds"
> > servers="192.168.113.6 192.168.113.7"/>
> >        </fencedevices>
> >        <rm>
> >                <failoverdomains/>
> >                <resources/>
> >        </rm>
> > </cluster>
> >
> > Regards,
> > Alx
> >

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
