Netmask bit set to 32 in service ip

Hi,

I have a question which may sound stupid:
why is the netmask for the service IP set to /32 when the IP belongs to a network with a different netmask?
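
For what it's worth, plain iproute2 also defaults to a /32 host mask when no prefix length is given on the command line, so something like the sketch below would produce exactly what I see (only an illustration, not necessarily what the cluster scripts actually run):

ip addr add 10.10.21.138 dev bond0      <<== no prefix given, appears as 10.10.21.138/32
ip addr add 10.10.21.138/26 dev bond0   <<== explicit prefix, appears as 10.10.21.138/26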

Regards,
Pavlos


2: bond0: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue
    link/ether 00:19:bb:3b:6c:b1 brd ff:ff:ff:ff:ff:ff
    inet 10.10.21.133/26 brd 10.10.21.191 scope global bond0
    inet 10.10.21.138/32 scope global bond0 <<================= service ip
3: bond1: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue
    link/ether 00:1a:4b:ff:9c:e8 brd ff:ff:ff:ff:ff:ff
    inet 10.10.21.69/27 brd 10.10.21.95 scope global bond1
    inet 10.10.21.71/32 scope global bond1 <<================= service ip
4: bond2: <BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue
    link/ether 00:1a:4b:ff:9c:ea brd ff:ff:ff:ff:ff:ff
    inet 10.10.21.228/27 brd 10.10.21.255 scope global bond2
5: eth0: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:19:bb:3b:6c:b1 brd ff:ff:ff:ff:ff:ff
6: eth1: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000
    link/ether 00:19:bb:3b:6c:b1 brd ff:ff:ff:ff:ff:ff
7: eth2: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond1 qlen 1000
    link/ether 00:1a:4b:ff:9c:e8 brd ff:ff:ff:ff:ff:ff
8: eth3: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond1 qlen 1000
    link/ether 00:1a:4b:ff:9c:e8 brd ff:ff:ff:ff:ff:ff
9: eth4: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond2 qlen 1000
    link/ether 00:1a:4b:ff:9c:ea brd ff:ff:ff:ff:ff:ff
10: eth5: <BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond2 qlen 1000
    link/ether 00:1a:4b:ff:9c:ea brd ff:ff:ff:ff:ff:ff
11: eth6: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether 00:19:bb:3b:18:38 brd ff:ff:ff:ff:ff:ff
12: eth7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
    link/ether 00:19:bb:3b:18:3a brd ff:ff:ff:ff:ff:ff
ocsi2# grep ' 10.10.21.138' /etc/cluster/cluster.conf
ocsi2# grep '10.10.21.138' /etc/cluster/cluster.conf
      <ip monitor_link="1" address="10.10.21.138"/>
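
So the <ip> resource carries no prefix length at all in cluster.conf. I do not know whether the ip.sh resource agent on this release accepts a CIDR suffix in the address attribute, but if it does, I would expect the entry to look something like this (only a guess at the syntax):

      <ip monitor_link="1" address="10.10.21.138/26"/>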



ocsi2# cat /etc/cluster/cluster.conf
<?xml version="1.0" encoding="UTF-8"?>
<cluster config_version="52" name="NGP-Cluster">
  <clusternodes>
    <clusternode votes="1" name="ocsi1-cluster">
      <fence>
        <method name="hardware">
          <device hostname="ocsi1-ilo" name="ilo"/>
        </method>
        <method name="last_resort">
          <device ipaddr="ocsi1-cluster" name="last_resort"/>
        </method>
      </fence>
    </clusternode>
    <clusternode votes="1" name="ocsi2-cluster">
      <fence>
        <method name="hardware">
          <device hostname="ocsi2-ilo" name="ilo"/>
        </method>
        <method name="last_resort">
          <device ipaddr="ocsi2-cluster" name="last_resort"/>
        </method>
      </fence>
    </clusternode>
    <clusternode votes="1" name="ocsi3-cluster">
      <fence>
        <method name="hardware">
          <device hostname="ocsi3-ilo" name="ilo"/>
        </method>
        <method name="last_resort">
          <device ipaddr="ocsi3-cluster" name="last_resort"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice passwd="admin123" action="off" login="admin" name="ilo" agent="fence_ilo"/>
    <fencedevice name="last_resort" agent="fence_manual"/>
  </fencedevices>
  <rm log_facility="local3" log_level="4">
    <failoverdomains>
      <failoverdomain restricted="1" ordered="1" name="FirstDomain">
        <failoverdomainnode priority="0" name="ocsi2-cluster"/>
        <failoverdomainnode priority="1" name="ocsi1-cluster"/>
      </failoverdomain>
      <failoverdomain restricted="1" ordered="1" name="SecondDomain">
        <failoverdomainnode priority="0" name="ocsi3-cluster"/>
        <failoverdomainnode priority="1" name="ocsi1-cluster"/>
      </failoverdomain>
    </failoverdomains>
    <resources/>
<service domain="SecondDomain" name="ppr2" autostart="1" recovery="relocate">
      <script name="ppr2" file="/usr/local/wsb/scripts/rhc_ppr2"/>
      <ip monitor_link="1" address="10.10.21.72"/>
<fs device="/dev/sdd1" mountpoint="/usr/omg_ppr2" fstype="ext3" force_unmount="1" name="/usr/omg_ppr2"/>
      <ip monitor_link="1" address="10.10.21.141"/>
    </service>
<service domain="FirstDomain" name="ppr1" autostart="1" recovery="relocate">
      <script name="ppr1" file="/usr/local/wsb/scripts/rhc_test1"/>
      <ip monitor_link="1" address="10.10.21.71"/>
<fs device="/dev/sdd1" mountpoint="/usr/omg_ppr" fstype="ext3" force_unmount="1" name="/usr/omg_ppr"/>
      <ip monitor_link="1" address="10.10.21.138"/>
    </service>
  </rm>
  <quorumd votes="2" log_level="4" tko="10" interval="1" label="priquorum" log_facility="local3" device="/dev/sdc"/>
  <fence_daemon clean_start="1"/>
</cluster>
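
In case it helps to reproduce this outside a running cluster, rgmanager's rg_test utility can start a service directly from cluster.conf, which should show what the ip resource does with the address. The invocation below is from memory and may need adjusting against rg_test(8):

ocsi2# rg_test test /etc/cluster/cluster.conf start service ppr1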


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
