[+Gluster-users@xxxxxxxxxxx]

glusterd throws the error "Initialization of volume 'management' failed, review your volfile again" when the service is started automatically after a reboot, but the same service starts successfully when the user starts it manually later. Can somebody from gluster-users please help with this?

glusterfs version: 3.5.1

Thanks,
Kanagaraj

----- Original Message -----
> From: "Punit Dambiwal" <hypunit@xxxxxxxxx>
> To: "Kanagaraj" <kmayilsa@xxxxxxxxxx>
> Cc: users@xxxxxxxxx
> Sent: Tuesday, November 25, 2014 7:24:45 PM
> Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...
>
> Hi Kanagaraj,
>
> Please check the attached log files. I didn't find anything special.
>
> On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@xxxxxxxxxx> wrote:
>
> > Do you see any errors in
> > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the
> > service is trying to start automatically after the reboot?
> >
> > Thanks,
> > Kanagaraj
> >
> > On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
> >
> > Hi Kanagaraj,
> >
> > Yes, once I start the gluster service and then vdsmd, the host can
> > connect to the cluster. But the question is why glusterd is not started
> > at boot even though it is enabled via chkconfig.
> >
> > I have tested this in a two-host cluster environment (CentOS 6.6 and
> > CentOS 7.0); on both hypervisor clusters the host fails to reconnect to
> > the cluster after a reboot.
> >
> > In both environments glusterd is enabled for the next boot, but it fails
> > with the same error. It seems to be a bug in either Gluster or oVirt.
> >
> > Please help me find a workaround if this cannot be resolved, because
> > without it the host machine cannot connect after a reboot. That means the
> > engine will consider the host down, and every time I need to manually
> > start the gluster service and vdsmd.
> >
> > Thanks,
> > Punit
> >
> > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@xxxxxxxxxx> wrote:
> >
> >> From vdsm.log: "error: Connection failed. Please check if gluster daemon
> >> is operational."
> >>
> >> Starting the glusterd service should fix this issue: 'service glusterd
> >> start'. But I am wondering why glusterd was not started automatically
> >> after the reboot.
> >>
> >> Thanks,
> >> Kanagaraj
> >>
> >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
> >>
> >> Hi Kanagaraj,
> >>
> >> Please find the attached VDSM logs :-
> >>
> >> ----------------
> >> Thread-13::DEBUG::2014-11-24 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> >> Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False
> >> Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing
> >> Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> >> Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
> >> Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {}
> >> Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished
> >> Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> >> Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> >> Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False
> >> Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {}
> >> Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
> >> Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
> >> Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
> >> Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31
Thread-13::DEBUG::2014-11-24 > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities > >> with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': > >> {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], > >> 'FC': > >> []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, > >> 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': > >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, > >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, > >> 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': > >> {}, > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': > >> 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': > >> ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', > >> 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': > >> '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', > >> 'bootproto4': > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': 
'', 'stp': > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': > >> {'topology_change_detected': '0', 'multicast_last_member_count': '2', > >> 'hash_elasticity': '4', 'multicast_query_response_interval': '999', > >> 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', > >> 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': > >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': > >> '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': > >> '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': > >> '0', > >> 'multicast_startup_query_count': '2', 'hello_time': '199', > >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', > >> 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > >> 'multicast_query_interval': '12498', 'multicast_last_member_interval': > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': > >> 'Bridge', > >> 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', > >> 'gateway': '', 'opts': {'topology_change_detected': '0', > >> 'multicast_last_member_count': '2', 'hash_elasticity': '4', > >> 'multicast_query_response_interval': '999', 'multicast_snooping': '1', > >> 'multicast_startup_query_interval': '3124', 'hello_timer': '131', > >> 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': > >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': > >> '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', > >> 'root_port': '0', 'multicast_querier': '0', > >> 'multicast_startup_query_count': '2', 'hello_time': '199', > >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', > >> 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > >> 'multicast_query_interval': '12498', 'multicast_last_member_interval': > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > >> ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', > >> 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': > >> 
{'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': > >> 'yes'}, > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', > >> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', > >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', > >> '3.3', > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', > >> 'eth1'], > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' > >> <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, > >> 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': > >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': > >> 'yes'}, > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': > >> '1500', > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', 'addr': > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': > >> '1500', > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': > >> '', 'cfg': 
{'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}}
> >> Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {}
> >> Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}}
> >> Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036]
> >> Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured
> >> Traceback (most recent call last):
> >>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
> >>     res = f(*args, **kwargs)
> >>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
> >>     rv = func(*args, **kwargs)
> >>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
> >>     return {'hosts': self.svdsmProxy.glusterPeerStatus()}
> >>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
> >>     return callMethod()
> >>   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
> >>     **kwargs)
> >>   File "<string>", line 2, in glusterPeerStatus
> >>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
> >>     raise convert_to_error(kind, result)
> >> GlusterCmdExecFailedException: Command execution failed
> >> error: Connection failed. Please check if gluster daemon is operational.
> >> return code: 1
> >> Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing
> >> Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> >> Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
> >> Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}
> >> Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished
> >> Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> >> Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> >> Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False
> >> -------------------------------
> >>
> >> [root@compute4 ~]# service glusterd status
> >> glusterd is stopped
> >> [root@compute4 ~]# chkconfig --list | grep glusterd
> >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
> >> [root@compute4 ~]#
> >>
> >> Thanks,
> >> Punit
> >>
> >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@xxxxxxxxxx> wrote:
> >>
> >>> Can you send the corresponding error in vdsm.log from the host?
> >>>
> >>> Also check if glusterd service is running.
> >>>
> >>> Thanks,
> >>> Kanagaraj
> >>>
> >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
> >>>
> >>> Hi,
> >>>
> >>> After a reboot my hypervisor host cannot activate again in the cluster
> >>> and fails with the following error :-
> >>>
> >>> Gluster command [<UNKNOWN>] failed on server...
> >>>
> >>> Engine logs :-
> >>>
> >>> 2014-11-24 18:05:28,397 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90
> >>> 2014-11-24 18:05:30,609 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0}, log id: 5f251c90
> >>> 2014-11-24 18:05:33,768 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-8) [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ]
> >>> 2014-11-24 18:05:33,795 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: ActivateVdsCommand internal: false.
Entities affected : ID: > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group > >>> MANIPULATE_HOST > >>> with role type ADMIN > >>> 2014-11-24 18:05:33,796 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring lock in > >>> order to prevent monitoring for host Compute5 from data-center SV_WTC > >>> 2014-11-24 18:05:33,797 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, from now a > >>> monitoring of host will be skipped for host Compute5 from data-center > >>> SV_WTC > >>> 2014-11-24 18:05:33,817 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] START, > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, > >>> nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 1cbc7311 > >>> 2014-11-24 18:05:33,820 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 > >>> 2014-11-24 18:05:34,086 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock released. > >>> Monitoring can run now for host Compute5 from data-center SV_WTC > >>> 2014-11-24 18:05:34,088 INFO > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, Job ID: > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom Event ID: > >>> -1, Message: Host Compute5 was activated by admin. > >>> 2014-11-24 18:05:34,090 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to object EngineLock > >>> [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS > >>> , sharedLocks= ] > >>> 2014-11-24 18:05:35,792 INFO > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 > >>> 2014-11-24 18:05:37,064 INFO > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) START, > >>> GetHardwareInfoVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log id: > >>> 6d560cc2 > >>> 2014-11-24 18:05:37,074 INFO > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) FINISH, GetHardwareInfoVDSCommand, log > >>> id: 6d560cc2 > >>> 2014-11-24 18:05:37,093 WARN > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is running with disabled > >>> SELinux. > >>> 2014-11-24 18:05:37,127 INFO > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. 
Entities > >>> affected > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> 2014-11-24 18:05:37,147 INFO > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, > >>> GlusterServersListVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 > >>> 2014-11-24 18:05:37,164 INFO > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, > >>> GlusterServersListVDSCommand, log id: 4faed87 > >>> 2014-11-24 18:05:37,189 INFO > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: > >>> SetNonOperationalVdsCommand internal: true. Entities affected : ID: > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> 2014-11-24 18:05:37,206 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), > >>> log id: fed5617 > >>> 2014-11-24 18:05:37,209 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, > >>> SetVdsStatusVDSCommand, log id: fed5617 > >>> 2014-11-24 18:05:37,223 ERROR > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: 4a84c4e5, > >>> Job > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, Custom Event > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed on server Compute5. > >>> 2014-11-24 18:05:37,243 INFO > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: null, Call > >>> Stack: null, Custom Event ID: -1, Message: Status of host Compute5 was > >>> set > >>> to NonOperational. > >>> 2014-11-24 18:05:37,272 INFO > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: > >>> HandleVdsVersionCommand internal: true. Entities affected : ID: > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> 2014-11-24 18:05:37,274 INFO > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in > >>> NonOperational status for reason GLUSTER_COMMAND_FAILED. > >>> SetNonOperationalVds command is skipped. 
> >>> 2014-11-24 18:05:38,065 INFO > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, > >>> GlusterVolumesListVDSCommand, return: > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1}, > >>> log id: 48a0c832 > >>> 2014-11-24 18:05:43,243 INFO > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-35) START, > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc > >>> ^C > >>> [root@ccr01 ~]# > >>> > >>> Thanks, > >>> Punit > >>> > >>> > >>> _______________________________________________ > >>> Users mailing > >>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > >>> > >>> > >>> > >> > >> > > > > >
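For anyone hitting the same state, the evidence gathered in this thread reduces to a quick post-reboot check. A minimal sketch for the CentOS 6.6 host, with the CentOS 7.0 systemctl equivalents as comments (treat these as the generic checks, not commands confirmed by the thread):

service glusterd status            # reported "glusterd is stopped" above, despite...
chkconfig --list glusterd          # ...autostart being on for runlevels 2-5
tail -50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # look for the init failure below
service glusterd start && service vdsmd restart              # manual workaround until root-caused
# CentOS 7.0: systemctl status glusterd; systemctl is-enabled glusterd;
#             systemctl start glusterd && systemctl restart vdsmd

The glusterd log (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log) from the failed automatic start follows :-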
[2014-11-25 06:46:08.125425] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.5.1 (/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
[2014-11-25 06:46:08.322938] I [glusterd.c:1122:init] 0-management: Using /var/lib/glusterd as working directory
[2014-11-25 06:46:08.419650] I [socket.c:3561:socket_init] 0-socket.management: SSL support is NOT enabled
[2014-11-25 06:46:08.419680] I [socket.c:3576:socket_init] 0-socket.management: using system polling thread
[2014-11-25 06:46:08.458021] W [rdma.c:4194:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-11-25 06:46:08.458038] E [rdma.c:4482:init] 0-rdma.management: Failed to initialize IB Device
[2014-11-25 06:46:08.458045] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2014-11-25 06:46:08.458082] W [rpcsvc.c:1521:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2014-11-25 06:46:08.458140] I [socket.c:3561:socket_init] 0-socket.management: SSL support is NOT enabled
[2014-11-25 06:46:08.458150] I [socket.c:3576:socket_init] 0-socket.management: using system polling thread
[2014-11-25 06:46:08.470489] I [glusterd.c:367:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2014-11-25 06:46:08.645088] I [glusterd-store.c:1421:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30501
[2014-11-25 06:46:08.754276] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2014-11-25 06:46:08.754303] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2014-11-25 06:46:08.754314] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2014-11-25 06:46:08.754324] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-3
[2014-11-25 06:46:08.754334] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-4
[2014-11-25 06:46:08.754343] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-5
[2014-11-25 06:46:08.754353] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-6
[2014-11-25 06:46:08.754362] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-7
[2014-11-25 06:46:09.771118] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:46:09.821174] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:46:09.871256] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:46:09.921321] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:46:09.921385] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:46:09.921441] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:46:09.921450] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:46:09.961630] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:46:09.961670] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:46:09.961679] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:46:09.964325] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:46:09.964361] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:46:09.964370] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:46:09.966951] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:46:09.966989] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:46:09.966998] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2014-11-25 06:46:09.972138] E [xlator.c:403:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2014-11-25 06:46:09.972147] E [graph.c:307:glusterfs_graph_init] 0-management: initializing translator failed
[2014-11-25 06:46:09.972153] E [graph.c:502:glusterfs_graph_activate] 0-graph: init failed
[2014-11-25 06:46:09.989082] W [glusterfsd.c:1095:cleanup_and_exit] (-->/usr/sbin/glusterd(main+0x54e) [0x4075ce] (-->/usr/sbin/glusterd(glusterfs_volumes_init+0xc0) [0x405240] (-->/usr/sbin/glusterd(glusterfs_process_volfp+0x106) [0x405146]))) 0-: received signum (0), shutting down
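The failure sequence in this boot-time log is self-contained: glusterd_resolve_all_bricks fails, so initialization of the 'management' volume fails, graph init fails, and glusterd exits. Note that the rdma/IB errors are not the differentiator; they appear identically in the successful manual start below. On glusterd 3.5.x, "resolve brick failed in restore" typically means a brick's hostname could not be resolved when glusterd restored the volume definitions, which would fit a boot-only failure: at boot the storage network (bonded VLAN interfaces here) and DNS may not be ready yet, while they are by the time of a later manual start. A minimal sketch to test that hypothesis on the affected host; the hostname and address are taken from the logs in this thread and should be treated as illustrative, not a confirmed diagnosis:

getent hosts compute4.3linux.com   # run early in boot (e.g. from rc.local); failure supports the theory
# If name resolution is the culprit, pinning the peer/brick names locally is a
# common workaround (storage-network IP taken from the vdsm capabilities above):
echo '10.10.10.6 compute4.3linux.com' >> /etc/hosts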
After manually restarting the gluster service :-

[2014-11-25 06:54:11.630034] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.5.1 (/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
[2014-11-25 06:54:11.635944] I [glusterd.c:1122:init] 0-management: Using /var/lib/glusterd as working directory
[2014-11-25 06:54:11.638156] I [socket.c:3561:socket_init] 0-socket.management: SSL support is NOT enabled
[2014-11-25 06:54:11.638177] I [socket.c:3576:socket_init] 0-socket.management: using system polling thread
[2014-11-25 06:54:11.641459] W [rdma.c:4194:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-11-25 06:54:11.641479] E [rdma.c:4482:init] 0-rdma.management: Failed to initialize IB Device
[2014-11-25 06:54:11.641501] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2014-11-25 06:54:11.641555] W [rpcsvc.c:1521:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2014-11-25 06:54:11.641626] I [socket.c:3561:socket_init] 0-socket.management: SSL support is NOT enabled
[2014-11-25 06:54:11.641641] I [socket.c:3576:socket_init] 0-socket.management: using system polling thread
[2014-11-25 06:54:11.648334] I [glusterd.c:367:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2014-11-25 06:54:11.798836] I [glusterd-store.c:1421:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30501
[2014-11-25 06:54:11.849045] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-0
[2014-11-25 06:54:11.849073] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-1
[2014-11-25 06:54:11.849086] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-2
[2014-11-25 06:54:11.849096] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-3
[2014-11-25 06:54:11.849106] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-4
[2014-11-25 06:54:11.849117] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-5
[2014-11-25 06:54:11.849128] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-6
[2014-11-25 06:54:11.849139] E [glusterd-store.c:1979:glusterd_store_retrieve_volume] 0-: Unknown key: brick-7
[2014-11-25 06:54:12.783026] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:54:12.833169] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:54:12.883216] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:54:12.933303] I [glusterd-handler.c:2912:glusterd_friend_add] 0-management: connect returned 0
[2014-11-25 06:54:12.933372] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:12.933437] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:12.933451] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:12.941439] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:12.941482] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:12.941497] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:12.944146] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:12.944187] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:12.944197] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:12.946780] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:12.946824] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:12.946834] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:12.949846] I [glusterd.c:138:glusterd_uuid_init] 0-management: retrieved UUID: 918cef1f-ed83-4570-90ad-39c5ceec753c
Final graph:
+------------------------------------------------------------------------------+
  1: volume management
  2:     type mgmt/glusterd
  3:     option rpc-auth.auth-glusterfs on
  4:     option rpc-auth.auth-unix on
  5:     option rpc-auth.auth-null on
  6:     option transport.socket.listen-backlog 128
  7:     option transport.socket.read-fail-log off
  8:     option transport.socket.keepalive-interval 2
  9:     option transport.socket.keepalive-time 10
 10:     option transport-type rdma
 11:     option working-directory /var/lib/glusterd
 12: end-volume
 13:
+------------------------------------------------------------------------------+
[2014-11-25 06:54:13.450703] I [glusterd-rpc-ops.c:356:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, host: compute3.3linux.com, port: 0
[2014-11-25 06:54:13.810377] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:13.810446] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:13.810462] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:13.812527] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:13.812617] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:13.812634] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:13.818535] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:13.818613] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:13.818627] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:14.761152] I [glusterd-handshake.c:563:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501
[2014-11-25 06:54:15.041594] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:15.041687] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:15.041702] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:20.824599] I [rpc-clnt.c:972:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-25 06:54:20.824696] I [socket.c:3561:socket_init] 0-management: SSL support is NOT enabled
[2014-11-25 06:54:20.824717] I [socket.c:3576:socket_init] 0-management: using system polling thread
[2014-11-25 06:54:21.240569] I [glusterd-rpc-ops.c:356:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, host: compute1.3linux.com, port: 0
[2014-11-25 06:54:21.574630] I [glusterd-handler.c:2212:__glusterd_handle_friend_update] 0-glusterd: Received friend
update from uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020 [2014-11-25 06:54:21.574675] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, hostname:compute3.3linux.com [2014-11-25 06:54:21.574695] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, hostname:compute2.3linux.com [2014-11-25 06:54:21.574710] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 918cef1f-ed83-4570-90ad-39c5ceec753c, hostname:compute4.3linux.com [2014-11-25 06:54:21.574725] I [glusterd-handler.c:2266:__glusterd_handle_friend_update] 0-: Received my uuid as Friend [2014-11-25 06:54:21.574734] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, hostname:compute5.3linux.com [2014-11-25 06:54:21.574788] I [glusterd-rpc-ops.c:356:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, host: compute2.3linux.com, port: 0 [2014-11-25 06:54:21.901130] I [glusterd-rpc-ops.c:356:__glusterd_friend_add_cbk] 0-glusterd: Received ACC from uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, host: compute5.3linux.com, port: 0 [2014-11-25 06:54:22.226551] I [socket.c:2238:socket_event_handler] 0-transport: disconnecting now [2014-11-25 06:54:22.226889] I [glusterd-handler.c:3713:__glusterd_brick_rpc_notify] 0-management: Disconnected from compute4.3linux.com:/brick7 [2014-11-25 06:54:22.227216] I [socket.c:2238:socket_event_handler] 0-transport: disconnecting now [2014-11-25 06:54:22.227541] I [glusterd-handler.c:3713:__glusterd_brick_rpc_notify] 0-management: Disconnected from compute4.3linux.com:/brick8 [2014-11-25 06:54:22.227566] I [socket.c:2238:socket_event_handler] 0-transport: disconnecting now [2014-11-25 06:54:22.227593] I [socket.c:2238:socket_event_handler] 0-transport: disconnecting now [2014-11-25 06:54:22.227664] I [glusterd-handler.c:2050:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020 [2014-11-25 06:54:22.227734] I [glusterd-handler.c:3085:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to compute1.3linux.com (0), ret: 0 [2014-11-25 06:54:22.551176] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, host: compute5.3linux.com [2014-11-25 06:54:22.551211] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, host: compute1.3linux.com [2014-11-25 06:54:22.551229] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, host: compute2.3linux.com [2014-11-25 06:54:22.551245] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, host: compute3.3linux.com [2014-11-25 06:54:22.893624] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7 [2014-11-25 06:54:22.893659] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020 [2014-11-25 06:54:22.893678] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f [2014-11-25 06:54:22.893695] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30 
[2014-11-25 06:54:22.893834] I [glusterd-handler.c:2212:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30 [2014-11-25 06:54:22.893861] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, hostname:compute2.3linux.com [2014-11-25 06:54:22.893873] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, hostname:compute5.3linux.com [2014-11-25 06:54:22.893882] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 918cef1f-ed83-4570-90ad-39c5ceec753c, hostname:compute4.3linux.com [2014-11-25 06:54:22.893889] I [glusterd-handler.c:2266:__glusterd_handle_friend_update] 0-: Received my uuid as Friend [2014-11-25 06:54:22.893898] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, hostname:compute1.3linux.com [2014-11-25 06:54:22.918527] I [glusterd-handler.c:2212:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f [2014-11-25 06:54:22.918554] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, hostname:compute5.3linux.com [2014-11-25 06:54:22.918566] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 918cef1f-ed83-4570-90ad-39c5ceec753c, hostname:compute4.3linux.com [2014-11-25 06:54:22.918573] I [glusterd-handler.c:2266:__glusterd_handle_friend_update] 0-: Received my uuid as Friend [2014-11-25 06:54:22.918582] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, hostname:compute3.3linux.com [2014-11-25 06:54:22.918591] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, hostname:compute1.3linux.com [2014-11-25 06:54:22.918717] I [glusterd-handshake.c:563:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501 [2014-11-25 06:54:23.353506] I [glusterd-handshake.c:563:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501 [2014-11-25 06:54:23.744838] I [glusterd-handler.c:2212:__glusterd_handle_friend_update] 0-glusterd: Received friend update from uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7 [2014-11-25 06:54:23.744889] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 918cef1f-ed83-4570-90ad-39c5ceec753c, hostname:compute4.3linux.com [2014-11-25 06:54:23.744905] I [glusterd-handler.c:2266:__glusterd_handle_friend_update] 0-: Received my uuid as Friend [2014-11-25 06:54:23.744920] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, hostname:compute3.3linux.com [2014-11-25 06:54:23.744936] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, hostname:compute2.3linux.com [2014-11-25 06:54:23.744951] I [glusterd-handler.c:2257:__glusterd_handle_friend_update] 0-: Received uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, hostname:compute1.3linux.com [2014-11-25 06:54:23.745067] I [glusterd-handler.c:2050:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30 [2014-11-25 06:54:23.745136] I [glusterd-handler.c:3085:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to compute3.3linux.com (0), 
ret: 0 [2014-11-25 06:54:24.437530] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, host: compute5.3linux.com [2014-11-25 06:54:24.437565] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, host: compute1.3linux.com [2014-11-25 06:54:24.437579] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, host: compute2.3linux.com [2014-11-25 06:54:24.437589] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, host: compute3.3linux.com [2014-11-25 06:54:24.871837] I [glusterd-handler.c:2050:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f [2014-11-25 06:54:24.871929] I [glusterd-handler.c:3085:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to compute2.3linux.com (0), ret: 0 [2014-11-25 06:54:25.239612] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, host: compute5.3linux.com [2014-11-25 06:54:25.239636] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, host: compute1.3linux.com [2014-11-25 06:54:25.239647] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, host: compute2.3linux.com [2014-11-25 06:54:25.239658] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, host: compute3.3linux.com [2014-11-25 06:54:25.565279] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /brick8 on port 49153 [2014-11-25 06:54:25.569930] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /brick7 on port 49152 [2014-11-25 06:54:25.572193] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f [2014-11-25 06:54:25.572226] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30 [2014-11-25 06:54:25.572244] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7 [2014-11-25 06:54:25.572260] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020 [2014-11-25 06:54:25.572299] I [glusterd-handshake.c:563:__glusterd_mgmt_hndsk_versions_ack] 0-management: using the op-version 30501 [2014-11-25 06:54:25.807473] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f [2014-11-25 06:54:25.807508] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30 [2014-11-25 06:54:25.807532] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7 [2014-11-25 06:54:25.807549] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020 [2014-11-25 06:54:25.808430] I [glusterd-handler.c:2050:__glusterd_handle_incoming_friend_req] 0-glusterd: Received probe from uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7 [2014-11-25 06:54:25.808526] I [glusterd-handler.c:3085:glusterd_xfer_friend_add_resp] 0-glusterd: Responded to 
compute5.3linux.com (0), ret: 0 [2014-11-25 06:54:26.266730] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7, host: compute5.3linux.com [2014-11-25 06:54:26.266757] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020, host: compute1.3linux.com [2014-11-25 06:54:26.266775] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f, host: compute2.3linux.com [2014-11-25 06:54:26.266785] I [glusterd-sm.c:495:glusterd_ac_send_friend_update] 0-: Added uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30, host: compute3.3linux.com [2014-11-25 06:54:26.592503] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 268139cb-e7b7-48e9-99cf-e26c523516e7 [2014-11-25 06:54:26.592532] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 69b19f4e-03c8-4fe1-a644-2701c41d7020 [2014-11-25 06:54:26.592550] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 2cf3428c-62be-42f4-86d3-ef67e7dfc97f [2014-11-25 06:54:26.592565] I [glusterd-rpc-ops.c:553:__glusterd_friend_update_cbk] 0-management: Received ACC from uuid: 0d94d896-e994-4deb-b1a1-d6f68be86c30 ^C
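Two details in this successful manual start are worth noting against the failed boot above: it gets past brick resolution (the UUID is retrieved, the management graph is built, and pmap registers /brick7 and /brick8 on ports 49152/49153), and it emits the very same rdma and "Unknown key: brick-N" messages, so those are not the cause. That is consistent with a boot-ordering problem rather than a configuration problem. A hedged sketch of the two usual remedies, one per init system in this two-host setup (the drop-in path and target are standard systemd mechanics, but treat the whole sketch as an assumption to verify, not a confirmed fix):

# CentOS 7.0 host (systemd): delay glusterd until the network is fully online
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/network-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload

# CentOS 6.6 host (SysV init): crude fallback, retry glusterd at the end of boot
echo 'service glusterd status >/dev/null 2>&1 || service glusterd start' >> /etc/rc.d/rc.local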
Thread-13::DEBUG::2014-11-25 14:55:06,359::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-25 14:55:06,359::task::993::Storage.TaskManager.Task::(_decref) Task=`24ef8c12-ef59-4ef4-9a67-561a3dbd189a`::ref 0 aborting False
Thread-13::DEBUG::2014-11-25 14:55:21,575::task::595::Storage.TaskManager.Task::(_updateState) Task=`07cec52f-6f3b-46ec-957e-d105353168b6`::moving from state init -> state preparing
Thread-13::INFO::2014-11-25 14:55:21,575::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-25 14:55:21,576::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-25 14:55:21,576::task::1191::Storage.TaskManager.Task::(prepare) Task=`07cec52f-6f3b-46ec-957e-d105353168b6`::finished: {}
Thread-13::DEBUG::2014-11-25 14:55:21,576::task::595::Storage.TaskManager.Task::(_updateState) Task=`07cec52f-6f3b-46ec-957e-d105353168b6`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-25 14:55:21,576::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-25 14:55:21,576::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-25 14:55:21,576::task::993::Storage.TaskManager.Task::(_decref) Task=`07cec52f-6f3b-46ec-957e-d105353168b6`::ref 0 aborting False
Thread-13::DEBUG::2014-11-25 14:55:36,788::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {} flowID [5279acc6]
Thread-13::DEBUG::2014-11-25 14:55:36,791::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
Thread-13::DEBUG::2014-11-25 14:55:36,798::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-13::DEBUG::2014-11-25 14:55:36,827::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,831::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,831::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,836::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,837::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,837::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,837::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,838::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
Thread-13::DEBUG::2014-11-25 14:55:36,839::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31
Thread-13::DEBUG::2014-11-25 14:55:36,842::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.252.176.16/24'], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.252.176.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '100', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '0', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '198', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '98', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.252.176.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.252.176.16', 'HOTPLUG': 'no', 'GATEWAY': '43.252.176.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.252.176.16/24']}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}}
Thread-13::DEBUG::2014-11-25 14:55:36,860::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} flowID [5279acc6]
Thread-13::DEBUG::2014-11-25 14:55:36,867::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}}
Thread-13::DEBUG::2014-11-25 14:55:36,968::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [39d35dd7]
Thread-13::DEBUG::2014-11-25 14:55:37,022::BindingXMLRPC::1139::vds::(wrapper) return hostsList with {'status': {'message': 'Done', 'code': 0}, 'hosts': [{'status': 'CONNECTED', 'hostname': '10.10.10.6', 'uuid': '918cef1f-ed83-4570-90ad-39c5ceec753c'}, {'status': 'CONNECTED', 'hostname': 'compute5.3linux.com', 'uuid': '268139cb-e7b7-48e9-99cf-e26c523516e7'}, {'status': 'CONNECTED', 'hostname': 'compute1.3linux.com', 'uuid': '69b19f4e-03c8-4fe1-a644-2701c41d7020'}, {'status': 'CONNECTED', 'hostname': 'compute2.3linux.com', 'uuid': '2cf3428c-62be-42f4-86d3-ef67e7dfc97f'}, {'status': 'CONNECTED', 'hostname': 'compute3.3linux.com', 'uuid': '0d94d896-e994-4deb-b1a1-d6f68be86c30'}]}
Thread-13::DEBUG::2014-11-25 14:55:37,106::BindingXMLRPC::318::vds::(wrapper) client [10.10.10.2] flowID [5d2c0061]
Thread-13::DEBUG::2014-11-25 14:55:37,107::task::595::Storage.TaskManager.Task::(_updateState) Task=`1a590712-cbec-4cdf-b34c-96578c07f282`::moving from state init -> state preparing
Thread-13::INFO::2014-11-25 14:55:37,107::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1, spUUID='f4966810-0923-4752-a054-2b4954584970', conList=[{'connection': '43.252.176.12:/var/lib/exports/iso', 'iqn': '', 'user': '', 'tpgt': '1', 'password': '******', 'id': 'df63624e-7605-4ee7-8262-f2023461baaa', 'port': ''}], options=None)
Thread-13::DEBUG::2014-11-25 14:55:37,109::fileUtils::142::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso
Thread-13::WARNING::2014-11-25 14:55:37,110::fileUtils::149::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso already exists
Thread-13::DEBUG::2014-11-25 14:55:37,110::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 43.252.176.12:/var/lib/exports/iso /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso (cwd None)
Thread-13::DEBUG::2014-11-25 14:55:37,376::__init__::159::IOProcessClient::(_run) Starting IOProcess...
Thread-14::DEBUG::2014-11-25 14:55:37,399::__init__::232::IOProcess::(_processLogs) Opening communication channels...
Thread-14::DEBUG::2014-11-25 14:55:37,400::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:37,400::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:37,400::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:37,400::__init__::232::IOProcess::(_processLogs) (1) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:37,402::__init__::232::IOProcess::(_processLogs) (1) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:37,417::hsm::2372::Storage.HSM::(__prefetchDomains) nfs local path: /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso
Thread-14::DEBUG::2014-11-25 14:55:37,417::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:37,418::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:37,418::__init__::232::IOProcess::(_processLogs) Extracting request information...
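The NFS mount that VDSM issues here can be sanity-checked by hand with the exact same options (the /mnt/iso-test target below is just a scratch directory for the test, not anything VDSM uses):

  # options and source copied verbatim from the mount line in the log above
  mkdir -p /mnt/iso-test
  mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 \
      43.252.176.12:/var/lib/exports/iso /mnt/iso-test
  umount /mnt/iso-test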
Thread-14::DEBUG::2014-11-25 14:55:37,418::__init__::232::IOProcess::(_processLogs) (2) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:37,418::__init__::232::IOProcess::(_processLogs) (2) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:37,418::hsm::2396::Storage.HSM::(__prefetchDomains) Found SD uuids: (u'9b2fdce9-a795-44a7-86ed-99e514d24399',)
Thread-13::DEBUG::2014-11-25 14:55:37,419::hsm::2452::Storage.HSM::(connectStorageServer) knownSDs: {9b2fdce9-a795-44a7-86ed-99e514d24399: storage.nfsSD.findDomain}
Thread-13::INFO::2014-11-25 14:55:37,419::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': 'df63624e-7605-4ee7-8262-f2023461baaa'}]}
Thread-13::DEBUG::2014-11-25 14:55:37,419::task::1191::Storage.TaskManager.Task::(prepare) Task=`1a590712-cbec-4cdf-b34c-96578c07f282`::finished: {'statuslist': [{'status': 0, 'id': 'df63624e-7605-4ee7-8262-f2023461baaa'}]}
Thread-13::DEBUG::2014-11-25 14:55:37,419::task::595::Storage.TaskManager.Task::(_updateState) Task=`1a590712-cbec-4cdf-b34c-96578c07f282`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-25 14:55:37,419::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-25 14:55:37,420::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-25 14:55:37,420::task::993::Storage.TaskManager.Task::(_decref) Task=`1a590712-cbec-4cdf-b34c-96578c07f282`::ref 0 aborting False
Thread-13::DEBUG::2014-11-25 14:55:37,481::BindingXMLRPC::318::vds::(wrapper) client [10.10.10.2] flowID [5d2c0061]
Thread-13::DEBUG::2014-11-25 14:55:37,481::task::595::Storage.TaskManager.Task::(_updateState) Task=`1c9c3722-aeef-41da-a695-80939cb4be9f`::moving from state init -> state preparing
Thread-13::INFO::2014-11-25 14:55:37,482::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=7, spUUID='f4966810-0923-4752-a054-2b4954584970', conList=[{'port': '', 'connection': '10.10.10.3:/data1', 'iqn': '', 'user': '', 'tpgt': '1', 'vfs_type': 'glusterfs', 'password': '******', 'id': '5fb19bf8-41a3-4f8d-9177-12d68e963624'}], options=None)
Thread-13::DEBUG::2014-11-25 14:55:37,483::fileUtils::142::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1
Thread-13::WARNING::2014-11-25 14:55:37,516::fileUtils::149::Storage.fileUtils::(createdir) Dir /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1 already exists
Thread-13::DEBUG::2014-11-25 14:55:37,516::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /bin/mount -t glusterfs 10.10.10.3:/data1 /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1 (cwd None)
Thread-14::DEBUG::2014-11-25 14:55:37,776::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:37,782::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:37,782::__init__::232::IOProcess::(_processLogs) Extracting request information...
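The gluster data-domain mount can be tested the same way; since a glusterfs mount has to fetch the volfile from a running glusterd on the server side, this is exactly the step that would fail while glusterd is still down after a reboot (/mnt/data1-test is again just a scratch directory):

  # source and fstype copied from the mount line in the log above
  mkdir -p /mnt/data1-test
  mount -t glusterfs 10.10.10.3:/data1 /mnt/data1-test
  umount /mnt/data1-test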
Thread-14::DEBUG::2014-11-25 14:55:37,783::__init__::232::IOProcess::(_processLogs) (3) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:37,783::__init__::232::IOProcess::(_processLogs) (3) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:37,792::hsm::2384::Storage.HSM::(__prefetchDomains) glusterDomPath: glusterSD/*
Thread-14::DEBUG::2014-11-25 14:55:37,793::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:37,794::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:37,794::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:37,794::__init__::232::IOProcess::(_processLogs) (4) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:37,808::__init__::232::IOProcess::(_processLogs) (4) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:37,809::hsm::2396::Storage.HSM::(__prefetchDomains) Found SD uuids: (u'ea499cc0-684f-4a2f-abef-43ff28d649cb',)
Thread-13::DEBUG::2014-11-25 14:55:37,809::hsm::2452::Storage.HSM::(connectStorageServer) knownSDs: {9b2fdce9-a795-44a7-86ed-99e514d24399: storage.nfsSD.findDomain, ea499cc0-684f-4a2f-abef-43ff28d649cb: storage.glusterSD.findDomain}
Thread-13::INFO::2014-11-25 14:55:37,809::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response: {'statuslist': [{'status': 0, 'id': '5fb19bf8-41a3-4f8d-9177-12d68e963624'}]}
Thread-13::DEBUG::2014-11-25 14:55:37,810::task::1191::Storage.TaskManager.Task::(prepare) Task=`1c9c3722-aeef-41da-a695-80939cb4be9f`::finished: {'statuslist': [{'status': 0, 'id': '5fb19bf8-41a3-4f8d-9177-12d68e963624'}]}
Thread-13::DEBUG::2014-11-25 14:55:37,810::task::595::Storage.TaskManager.Task::(_updateState) Task=`1c9c3722-aeef-41da-a695-80939cb4be9f`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-25 14:55:37,810::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-25 14:55:37,810::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-25 14:55:37,810::task::993::Storage.TaskManager.Task::(_decref) Task=`1c9c3722-aeef-41da-a695-80939cb4be9f`::ref 0 aborting False
Thread-13::DEBUG::2014-11-25 14:55:37,876::BindingXMLRPC::318::vds::(wrapper) client [10.10.10.2]
Thread-13::DEBUG::2014-11-25 14:55:37,876::task::595::Storage.TaskManager.Task::(_updateState) Task=`0bbd8080-ab4c-49a2-9708-49e6e5bd0b78`::moving from state init -> state preparing
Thread-13::INFO::2014-11-25 14:55:37,877::logUtils::44::dispatcher::(wrapper) Run and protect: connectStoragePool(spUUID='f4966810-0923-4752-a054-2b4954584970', hostID=1, msdUUID='ea499cc0-684f-4a2f-abef-43ff28d649cb', masterVersion=1, domainsMap={'9b2fdce9-a795-44a7-86ed-99e514d24399': 'active', 'ea499cc0-684f-4a2f-abef-43ff28d649cb': 'active'}, options=None)
Thread-13::DEBUG::2014-11-25 14:55:37,877::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.HsmDomainMonitorLock`ReqID=`bf59d66b-9c39-45c7-9009-497bffabb81e`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '1011' at 'connectStoragePool'
Thread-13::DEBUG::2014-11-25 14:55:37,877::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.HsmDomainMonitorLock' for lock type 'exclusive'
Thread-13::DEBUG::2014-11-25 14:55:37,877::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.HsmDomainMonitorLock' is free. Now locking as 'exclusive' (1 active user)
Thread-13::DEBUG::2014-11-25 14:55:37,878::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.HsmDomainMonitorLock`ReqID=`bf59d66b-9c39-45c7-9009-497bffabb81e`::Granted request
Thread-13::DEBUG::2014-11-25 14:55:37,878::resourceManager::198::Storage.ResourceManager.Request::(__init__) ResName=`Storage.f4966810-0923-4752-a054-2b4954584970`ReqID=`dee61c09-c6aa-4bae-aeba-8f628da900c5`::Request was made in '/usr/share/vdsm/storage/hsm.py' line '1059' at '_connectStoragePool'
Thread-13::DEBUG::2014-11-25 14:55:37,878::resourceManager::542::Storage.ResourceManager::(registerResource) Trying to register resource 'Storage.f4966810-0923-4752-a054-2b4954584970' for lock type 'exclusive'
Thread-13::DEBUG::2014-11-25 14:55:37,878::resourceManager::601::Storage.ResourceManager::(registerResource) Resource 'Storage.f4966810-0923-4752-a054-2b4954584970' is free. Now locking as 'exclusive' (1 active user)
Thread-13::DEBUG::2014-11-25 14:55:37,879::resourceManager::238::Storage.ResourceManager.Request::(grant) ResName=`Storage.f4966810-0923-4752-a054-2b4954584970`ReqID=`dee61c09-c6aa-4bae-aeba-8f628da900c5`::Granted request
Thread-13::DEBUG::2014-11-25 14:55:37,880::spbackends::485::Storage.StoragePoolMemoryBackend::(updateVersionAndDomains) updating domain version to 1 and domains map to {'9b2fdce9-a795-44a7-86ed-99e514d24399': 'active', 'ea499cc0-684f-4a2f-abef-43ff28d649cb': 'active'}
Thread-13::INFO::2014-11-25 14:55:37,880::spbackends::433::Storage.StoragePoolMemoryBackend::(setDomainsMap) new storage pool master version 1 and domains map {'9b2fdce9-a795-44a7-86ed-99e514d24399': 'Active', 'ea499cc0-684f-4a2f-abef-43ff28d649cb': 'Active'}
Thread-13::INFO::2014-11-25 14:55:37,880::sp::133::Storage.StoragePool::(setBackend) updating pool f4966810-0923-4752-a054-2b4954584970 backend from type NoneType instance 0x33bbd72210 to type StoragePoolMemoryBackend instance 0x7f43d0055158
Thread-13::INFO::2014-11-25 14:55:37,880::sp::631::Storage.StoragePool::(connect) Connect host #1 to the storage pool f4966810-0923-4752-a054-2b4954584970 with master domain: ea499cc0-684f-4a2f-abef-43ff28d649cb (ver = 1)
Thread-13::DEBUG::2014-11-25 14:55:37,880::lvm::489::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,881::lvm::491::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,881::lvm::500::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,881::lvm::502::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,881::lvm::520::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,881::lvm::522::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,882::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-13::DEBUG::2014-11-25 14:55:37,882::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-13::DEBUG::2014-11-25 14:55:37,882::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-13::DEBUG::2014-11-25 14:55:37,882::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-13::DEBUG::2014-11-25 14:55:37,882::iscsi::403::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-13::DEBUG::2014-11-25 14:55:37,882::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-13::DEBUG::2014-11-25 14:55:37,894::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-13::DEBUG::2014-11-25 14:55:37,896::multipath::110::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None)
Thread-13::DEBUG::2014-11-25 14:55:37,981::multipath::110::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-13::DEBUG::2014-11-25 14:55:37,983::lvm::489::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,983::lvm::491::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,983::lvm::500::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,984::lvm::502::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,984::lvm::520::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,984::lvm::522::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-13::DEBUG::2014-11-25 14:55:37,984::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-14::DEBUG::2014-11-25 14:55:37,986::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:37,986::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:37,986::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:37,986::__init__::232::IOProcess::(_processLogs) (5) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:37,997::__init__::232::IOProcess::(_processLogs) (5) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:37,999::fileSD::152::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb
Thread-14::DEBUG::2014-11-25 14:55:38,000::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,000::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,000::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,000::__init__::232::IOProcess::(_processLogs) (6) Got request for method 'touch'
Thread-14::DEBUG::2014-11-25 14:55:38,002::__init__::232::IOProcess::(_processLogs) (6) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:38,002::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-14::DEBUG::2014-11-25 14:55:38,003::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,003::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,004::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,004::__init__::232::IOProcess::(_processLogs) (7) Got request for method 'readfile'
Thread-14::DEBUG::2014-11-25 14:55:38,018::__init__::232::IOProcess::(_processLogs) (7) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:38,019::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data', 'DESCRIPTION=data', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=1', 'POOL_DESCRIPTION=SV_WTC', 'POOL_DOMAINS=9b2fdce9-a795-44a7-86ed-99e514d24399:Active,ea499cc0-684f-4a2f-abef-43ff28d649cb:Active,4b7c7f94-2cee-4fdc-9153-c2f8cd1f7c6b:Active', 'POOL_SPM_ID=4', 'POOL_SPM_LVER=6', 'POOL_UUID=f4966810-0923-4752-a054-2b4954584970', 'REMOTE_PATH=10.10.10.3:/data1', 'ROLE=Master', 'SDUUID=ea499cc0-684f-4a2f-abef-43ff28d649cb', 'TYPE=GLUSTERFS', 'VERSION=3', '_SHA_CKSUM=45470801303345e61389c0718708ed07c20ebe23']
Thread-14::DEBUG::2014-11-25 14:55:38,019::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,020::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,020::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,020::__init__::232::IOProcess::(_processLogs) (8) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:38,020::__init__::232::IOProcess::(_processLogs) (8) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,021::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,021::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,021::__init__::232::IOProcess::(_processLogs) Extracting request information...
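For what it's worth, the domain metadata that persistentDict reads back here is a plain text file on the mount, so it can be inspected directly if the pool map or checksum ever looks suspect (path assembled from the dd lines further down in this log):

  cat /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb/dom_md/metadata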
Thread-14::DEBUG::2014-11-25 14:55:38,021::__init__::232::IOProcess::(_processLogs) (9) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:38,032::__init__::232::IOProcess::(_processLogs) (9) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:38,033::fileSD::610::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-13::DEBUG::2014-11-25 14:55:38,033::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'ea499cc0-684f-4a2f-abef-43ff28d649cb_imageNS'
Thread-13::DEBUG::2014-11-25 14:55:38,033::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace 'ea499cc0-684f-4a2f-abef-43ff28d649cb_volumeNS'
Thread-13::DEBUG::2014-11-25 14:55:38,034::sp::1390::Storage.StoragePool::(setMasterDomain) Master domain ea499cc0-684f-4a2f-abef-43ff28d649cb verified, version 1
Thread-13::DEBUG::2014-11-25 14:55:38,034::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sp.updateMonitoringThreads)
Thread-13::DEBUG::2014-11-25 14:55:38,034::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-13::INFO::2014-11-25 14:55:38,034::domainMonitor::100::Storage.DomainMonitor::(startMonitoring) Start monitoring 9b2fdce9-a795-44a7-86ed-99e514d24399
Thread-13::DEBUG::2014-11-25 14:55:38,035::sp::1413::Storage.StoragePool::(updateMonitoringThreads) Storage Pool `f4966810-0923-4752-a054-2b4954584970` started monitoring domain `9b2fdce9-a795-44a7-86ed-99e514d24399`
Thread-17::DEBUG::2014-11-25 14:55:38,035::domainMonitor::182::Storage.DomainMonitorThread::(_monitorLoop) Starting domain monitor for 9b2fdce9-a795-44a7-86ed-99e514d24399
Thread-13::INFO::2014-11-25 14:55:38,036::domainMonitor::100::Storage.DomainMonitor::(startMonitoring) Start monitoring ea499cc0-684f-4a2f-abef-43ff28d649cb
Thread-18::DEBUG::2014-11-25 14:55:38,038::domainMonitor::182::Storage.DomainMonitorThread::(_monitorLoop) Starting domain monitor for ea499cc0-684f-4a2f-abef-43ff28d649cb
Thread-13::DEBUG::2014-11-25 14:55:38,038::sp::1413::Storage.StoragePool::(updateMonitoringThreads) Storage Pool `f4966810-0923-4752-a054-2b4954584970` started monitoring domain `ea499cc0-684f-4a2f-abef-43ff28d649cb`
Thread-13::DEBUG::2014-11-25 14:55:38,043::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-13::DEBUG::2014-11-25 14:55:38,044::fileUtils::142::Storage.fileUtils::(createdir) Creating directory: /rhev/data-center/f4966810-0923-4752-a054-2b4954584970
Thread-14::DEBUG::2014-11-25 14:55:38,044::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-13::WARNING::2014-11-25 14:55:38,045::fileUtils::149::Storage.fileUtils::(createdir) Dir /rhev/data-center/f4966810-0923-4752-a054-2b4954584970 already exists
Thread-14::DEBUG::2014-11-25 14:55:38,045::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,046::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,047::__init__::232::IOProcess::(_processLogs) (10) Got request for method 'glob'
Thread-13::DEBUG::2014-11-25 14:55:38,048::lvm::365::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-14::DEBUG::2014-11-25 14:55:38,049::__init__::232::IOProcess::(_processLogs) (10) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,051::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,053::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-13::DEBUG::2014-11-25 14:55:38,054::lvm::288::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/brick7|/dev/mapper/brick8|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name 9b2fdce9-a795-44a7-86ed-99e514d24399 (cwd None)
Thread-14::DEBUG::2014-11-25 14:55:38,056::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,063::__init__::232::IOProcess::(_processLogs) (11) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:38,064::__init__::232::IOProcess::(_processLogs) (11) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,067::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,069::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,070::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-17::DEBUG::2014-11-25 14:55:38,071::fileSD::152::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso/9b2fdce9-a795-44a7-86ed-99e514d24399
Thread-14::DEBUG::2014-11-25 14:55:38,071::__init__::232::IOProcess::(_processLogs) (12) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:38,073::__init__::232::IOProcess::(_processLogs) (12) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,074::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,075::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-18::DEBUG::2014-11-25 14:55:38,075::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /bin/dd if=/rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-14::DEBUG::2014-11-25 14:55:38,076::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,082::__init__::232::IOProcess::(_processLogs) (13) Got request for method 'touch'
Thread-18::DEBUG::2014-11-25 14:55:38,083::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n540 bytes (540 B) copied, 0.000480957 s, 1.1 MB/s\n'; <rc> = 0
Thread-14::DEBUG::2014-11-25 14:55:38,084::__init__::232::IOProcess::(_processLogs) (13) Queuing response
Thread-17::DEBUG::2014-11-25 14:55:38,087::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW backend
Thread-14::DEBUG::2014-11-25 14:55:38,088::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,089::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,089::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,089::__init__::232::IOProcess::(_processLogs) (14) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:38,089::__init__::232::IOProcess::(_processLogs) (14) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,090::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,091::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,091::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,091::__init__::232::IOProcess::(_processLogs) (15) Got request for method 'readfile'
Thread-14::DEBUG::2014-11-25 14:55:38,092::__init__::232::IOProcess::(_processLogs) (15) Queuing response
Thread-17::DEBUG::2014-11-25 14:55:38,092::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Iso', 'DESCRIPTION=ISO_DOMAIN', 'IOOPTIMEOUTSEC=1', 'LEASERETRIES=3', 'LEASETIMESEC=5', 'LOCKPOLICY=', 'LOCKRENEWALINTERVALSEC=5', 'MASTER_VERSION=0', 'POOL_UUID=00000002-0002-0002-0002-0000000003a1,9dadc868-d291-43fb-be72-ad4b1a5554f8,f4966810-0923-4752-a054-2b4954584970', 'REMOTE_PATH=cloud-mgmt.3linux.com:/var/lib/exports/iso', 'ROLE=Regular', 'SDUUID=9b2fdce9-a795-44a7-86ed-99e514d24399', 'TYPE=NFS', 'VERSION=0', '_SHA_CKSUM=e931f9ce795e9fbe179b80f741831be137b45fd6']
Thread-14::DEBUG::2014-11-25 14:55:38,093::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,094::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,095::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,095::__init__::232::IOProcess::(_processLogs) (16) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:38,095::__init__::232::IOProcess::(_processLogs) (16) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,096::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,097::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,097::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,097::__init__::232::IOProcess::(_processLogs) (17) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:38,097::__init__::232::IOProcess::(_processLogs) (17) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,098::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,098::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,098::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,099::__init__::232::IOProcess::(_processLogs) (18) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:38,099::__init__::232::IOProcess::(_processLogs) (18) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,099::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,100::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,100::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,100::__init__::232::IOProcess::(_processLogs) (19) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:38,100::__init__::232::IOProcess::(_processLogs) (19) Queuing response
Thread-17::DEBUG::2014-11-25 14:55:38,101::fileSD::610::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
Thread-14::DEBUG::2014-11-25 14:55:38,101::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,103::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-17::DEBUG::2014-11-25 14:55:38,103::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '9b2fdce9-a795-44a7-86ed-99e514d24399_imageNS'
Thread-14::DEBUG::2014-11-25 14:55:38,104::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-17::DEBUG::2014-11-25 14:55:38,105::resourceManager::421::Storage.ResourceManager::(registerNamespace) Registering namespace '9b2fdce9-a795-44a7-86ed-99e514d24399_volumeNS'
Thread-14::DEBUG::2014-11-25 14:55:38,105::__init__::232::IOProcess::(_processLogs) (20) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:38,107::__init__::232::IOProcess::(_processLogs) (20) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,108::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,108::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,109::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,109::__init__::232::IOProcess::(_processLogs) (21) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:38,109::__init__::232::IOProcess::(_processLogs) (21) Queuing response
Thread-18::DEBUG::2014-11-25 14:55:38,111::domainMonitor::264::Storage.DomainMonitorThread::(_monitorDomain) Domain ea499cc0-684f-4a2f-abef-43ff28d649cb changed its status to Valid
Thread-21::DEBUG::2014-11-25 14:55:38,113::misc::821::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event
Thread-18::INFO::2014-11-25 14:55:38,113::clusterlock::222::Storage.SANLock::(acquireHostId) Acquiring host id for domain ea499cc0-684f-4a2f-abef-43ff28d649cb (id: 1)
Thread-21::DEBUG::2014-11-25 14:55:38,115::misc::831::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Calling registered method `contEIOVms`
Thread-17::DEBUG::2014-11-25 14:55:38,115::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /bin/dd if=/rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso/9b2fdce9-a795-44a7-86ed-99e514d24399/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-18::DEBUG::2014-11-25 14:55:38,116::clusterlock::240::Storage.SANLock::(acquireHostId) Host id for domain ea499cc0-684f-4a2f-abef-43ff28d649cb successfully acquired (id: 1)
Thread-21::DEBUG::2014-11-25 14:55:38,122::misc::841::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Event emitted
Thread-22::INFO::2014-11-25 14:55:38,124::clientIF::136::vds::(contEIOVms) vmContainerLock acquired
Thread-17::DEBUG::2014-11-25 14:55:38,126::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n438 bytes (438 B) copied, 0.000245464 s, 1.8 MB/s\n'; <rc> = 0
Thread-14::DEBUG::2014-11-25 14:55:38,126::__init__::232::IOProcess::(_processLogs) Receiving request...
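The sanlock host id is acquired cleanly here; if that step ever fails after a reboot, the daemon's own view of the lockspaces can be dumped directly (assuming the sanlock CLI, which should be present on any VDSM host):

  sanlock client status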
Thread-14::DEBUG::2014-11-25 14:55:38,127::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,127::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,127::__init__::232::IOProcess::(_processLogs) (22) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:38,127::__init__::232::IOProcess::(_processLogs) (22) Queuing response
Thread-17::DEBUG::2014-11-25 14:55:38,128::domainMonitor::264::Storage.DomainMonitorThread::(_monitorDomain) Domain 9b2fdce9-a795-44a7-86ed-99e514d24399 changed its status to Valid
Thread-23::DEBUG::2014-11-25 14:55:38,128::misc::821::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Emitting event
Thread-23::DEBUG::2014-11-25 14:55:38,129::misc::831::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Calling registered method `contEIOVms`
Thread-23::DEBUG::2014-11-25 14:55:38,129::misc::841::Storage.Event.Storage.DomainMonitor.onDomainStateChange::(_emit) Event emitted
Thread-24::INFO::2014-11-25 14:55:38,130::clientIF::136::vds::(contEIOVms) vmContainerLock acquired
Thread-13::DEBUG::2014-11-25 14:55:38,336::lvm::288::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' Volume group "9b2fdce9-a795-44a7-86ed-99e514d24399" not found\n Skipping volume group 9b2fdce9-a795-44a7-86ed-99e514d24399\n'; <rc> = 5
Thread-13::WARNING::2014-11-25 14:55:38,337::lvm::370::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 [] [' Volume group "9b2fdce9-a795-44a7-86ed-99e514d24399" not found', ' Skipping volume group 9b2fdce9-a795-44a7-86ed-99e514d24399']
Thread-13::DEBUG::2014-11-25 14:55:38,337::lvm::407::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-14::DEBUG::2014-11-25 14:55:38,340::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,340::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,340::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:38,340::__init__::232::IOProcess::(_processLogs) (23) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:38,341::__init__::232::IOProcess::(_processLogs) (23) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:38,341::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:38,342::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:38,342::__init__::232::IOProcess::(_processLogs) Extracting request information...
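That 'Volume group ... not found' failure looks harmless: VDSM appears to probe every known storage-domain UUID as an LVM volume group, and 9b2fdce9-... is the NFS ISO domain, so no VG by that name exists. A simplified version of the probe it runs would be:

  # expected to fail with 'Volume group ... not found' for file-based domains
  vgs --noheadings -o uuid,name 9b2fdce9-a795-44a7-86ed-99e514d24399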
Thread-14::DEBUG::2014-11-25 14:55:38,342::__init__::232::IOProcess::(_processLogs) (24) Got request for method 'glob'
Thread-14::DEBUG::2014-11-25 14:55:38,348::__init__::232::IOProcess::(_processLogs) (24) Queuing response
Thread-13::INFO::2014-11-25 14:55:38,349::sp::1120::Storage.StoragePool::(_linkStorageDomain) Linking /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso/9b2fdce9-a795-44a7-86ed-99e514d24399 to /rhev/data-center/f4966810-0923-4752-a054-2b4954584970/9b2fdce9-a795-44a7-86ed-99e514d24399
Thread-13::DEBUG::2014-11-25 14:55:38,349::sp::1137::Storage.StoragePool::(_linkStorageDomain) Creating symlink from /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso/9b2fdce9-a795-44a7-86ed-99e514d24399 to /rhev/data-center/f4966810-0923-4752-a054-2b4954584970/9b2fdce9-a795-44a7-86ed-99e514d24399
Thread-13::INFO::2014-11-25 14:55:38,350::sp::1120::Storage.StoragePool::(_linkStorageDomain) Linking /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb to /rhev/data-center/f4966810-0923-4752-a054-2b4954584970/ea499cc0-684f-4a2f-abef-43ff28d649cb
Thread-13::DEBUG::2014-11-25 14:55:38,350::sp::1137::Storage.StoragePool::(_linkStorageDomain) Creating symlink from /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb to /rhev/data-center/f4966810-0923-4752-a054-2b4954584970/ea499cc0-684f-4a2f-abef-43ff28d649cb
Thread-13::INFO::2014-11-25 14:55:38,350::sp::1120::Storage.StoragePool::(_linkStorageDomain) Linking /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb to /rhev/data-center/f4966810-0923-4752-a054-2b4954584970/mastersd
Thread-13::DEBUG::2014-11-25 14:55:38,350::sp::1137::Storage.StoragePool::(_linkStorageDomain) Creating symlink from /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb to /rhev/data-center/f4966810-0923-4752-a054-2b4954584970/mastersd
Thread-13::DEBUG::2014-11-25 14:55:38,358::threadPool::35::Storage.ThreadPool::(__init__) Enter - numThreads: 5, waitTimeout: 3, maxTasks: 500
Thread-13::DEBUG::2014-11-25 14:55:38,361::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) /bin/dd if=/rhev/data-center/f4966810-0923-4752-a054-2b4954584970/mastersd/dom_md/outbox iflag=direct,fullblock bs=512 count=8 skip=8 (cwd None)
Thread-13::DEBUG::2014-11-25 14:55:38,373::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) SUCCESS: <err> = '8+0 records in\n8+0 records out\n4096 bytes (4.1 kB) copied, 0.00722556 s, 567 kB/s\n'; <rc> = 0
Thread-13::INFO::2014-11-25 14:55:38,373::storage_mailbox::385::Storage.MailBox.HsmMailMonitor::(_sendMail) HSM_MailMonitor sending mail to SPM - ['/bin/dd', 'of=/rhev/data-center/f4966810-0923-4752-a054-2b4954584970/mastersd/dom_md/inbox', 'iflag=fullblock', 'oflag=direct', 'conv=notrunc', 'bs=512', 'seek=8']
Thread-13::DEBUG::2014-11-25 14:55:38,374::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) /bin/dd of=/rhev/data-center/f4966810-0923-4752-a054-2b4954584970/mastersd/dom_md/inbox iflag=fullblock oflag=direct conv=notrunc bs=512 seek=8 (cwd None)
Thread-13::DEBUG::2014-11-25 14:55:38,381::storage_mailbox::84::Storage.Misc.excCmd::(_mboxExecCmd) SUCCESS: <err> = '8+0 records in\n8+0 records out\n4096 bytes (4.1 kB) copied, 0.00236082 s, 1.7 MB/s\n'; <rc> = 0
Thread-13::DEBUG::2014-11-25 14:55:38,382::storage_mailbox::207::Storage.Mailbox.HSM::(__init__) HSM_MailboxMonitor created for pool f4966810-0923-4752-a054-2b4954584970
Thread-13::DEBUG::2014-11-25 14:55:38,382::sp::462::Storage.StoragePool::(__createMailboxMonitor) HSM mailbox ready for pool f4966810-0923-4752-a054-2b4954584970 on master domain ea499cc0-684f-4a2f-abef-43ff28d649cb
Thread-13::DEBUG::2014-11-25 14:55:38,382::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.f4966810-0923-4752-a054-2b4954584970'
Thread-13::DEBUG::2014-11-25 14:55:38,382::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.f4966810-0923-4752-a054-2b4954584970' (0 active users)
Thread-13::DEBUG::2014-11-25 14:55:38,383::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.f4966810-0923-4752-a054-2b4954584970' is free, finding out if anyone is waiting for it.
Thread-13::DEBUG::2014-11-25 14:55:38,383::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.f4966810-0923-4752-a054-2b4954584970', Clearing records.
Thread-13::DEBUG::2014-11-25 14:55:38,383::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource 'Storage.HsmDomainMonitorLock'
Thread-13::DEBUG::2014-11-25 14:55:38,383::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource 'Storage.HsmDomainMonitorLock' (0 active users)
Thread-13::DEBUG::2014-11-25 14:55:38,383::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, finding out if anyone is waiting for it.
Thread-13::DEBUG::2014-11-25 14:55:38,383::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource 'Storage.HsmDomainMonitorLock', Clearing records.
Thread-13::INFO::2014-11-25 14:55:38,384::logUtils::47::dispatcher::(wrapper) Run and protect: connectStoragePool, Return response: True
Thread-13::DEBUG::2014-11-25 14:55:38,384::task::1191::Storage.TaskManager.Task::(prepare) Task=`0bbd8080-ab4c-49a2-9708-49e6e5bd0b78`::finished: True
Thread-13::DEBUG::2014-11-25 14:55:38,384::task::595::Storage.TaskManager.Task::(_updateState) Task=`0bbd8080-ab4c-49a2-9708-49e6e5bd0b78`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-25 14:55:38,384::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-25 14:55:38,384::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-25 14:55:38,385::task::993::Storage.TaskManager.Task::(_decref) Task=`0bbd8080-ab4c-49a2-9708-49e6e5bd0b78`::ref 0 aborting False
Thread-13::DEBUG::2014-11-25 14:55:38,435::task::595::Storage.TaskManager.Task::(_updateState) Task=`bcfa1276-fd54-4abc-9f99-b4f0d5a65bc1`::moving from state init -> state preparing
Thread-13::INFO::2014-11-25 14:55:38,435::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-25 14:55:38,435::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'9b2fdce9-a795-44a7-86ed-99e514d24399': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000245464', 'lastCheck': '0.3', 'valid': True}, 'ea499cc0-684f-4a2f-abef-43ff28d649cb': {'code': 0, 'version': 3, 'acquired': False, 'delay': '0.000480957', 'lastCheck': '0.3', 'valid': True}}
Thread-13::DEBUG::2014-11-25 14:55:38,435::task::1191::Storage.TaskManager.Task::(prepare) Task=`bcfa1276-fd54-4abc-9f99-b4f0d5a65bc1`::finished: {'9b2fdce9-a795-44a7-86ed-99e514d24399': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000245464', 'lastCheck': '0.3', 'valid': True}, 'ea499cc0-684f-4a2f-abef-43ff28d649cb': {'code': 0, 'version': 3, 'acquired': False, 'delay': '0.000480957', 'lastCheck': '0.3', 'valid': True}}
Thread-13::DEBUG::2014-11-25 14:55:38,435::task::595::Storage.TaskManager.Task::(_updateState) Task=`bcfa1276-fd54-4abc-9f99-b4f0d5a65bc1`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-25 14:55:38,435::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-25 14:55:38,436::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-25 14:55:38,436::task::993::Storage.TaskManager.Task::(_decref) Task=`bcfa1276-fd54-4abc-9f99-b4f0d5a65bc1`::ref 0 aborting False
Thread-13::DEBUG::2014-11-25 14:55:38,877::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call setMOMPolicyParameters with ({'balloonEnabled': False, 'ksmEnabled': False},) {} flowID [5d2c0061]
Thread-13::DEBUG::2014-11-25 14:55:38,951::BindingXMLRPC::1139::vds::(wrapper) return setMOMPolicyParameters with {'status': {'message': 'Done', 'code': 0}}
Thread-13::DEBUG::2014-11-25 14:55:39,011::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostUUIDGet with () {} flowID [5d2c0061]
Thread-13::DEBUG::2014-11-25 14:55:39,041::BindingXMLRPC::1139::vds::(wrapper) return hostUUIDGet with {'status': {'message': 'Done', 'code': 0}, 'uuid': '918cef1f-ed83-4570-90ad-39c5ceec753c'}
Thread-13::DEBUG::2014-11-25 14:55:41,730::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call volumesList with () {}
Thread-13::DEBUG::2014-11-25 14:55:41,740::BindingXMLRPC::1139::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'data1': {'transportType': ['TCP'], 'uuid': 'e6117925-79b1-417b-9d07-cfc31f68bc51', 'bricks': ['compute1.3linux.com:/brick1', 'compute2.3linux.com:/brick3', 'compute3.3linux.com:/brick5', 'compute4.3linux.com:/brick7', 'compute1.3linux.com:/brick2', 'compute2.3linux.com:/brick4', 'compute3.3linux.com:/brick6', 'compute4.3linux.com:/brick8'], 'volumeName': 'data1', 'volumeType': 'DISTRIBUTED_REPLICATE', 'replicaCount': '2', 'brickCount': '8', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [{'name': 'compute1.3linux.com:/brick1', 'hostUuid': '69b19f4e-03c8-4fe1-a644-2701c41d7020'}, {'name': 'compute2.3linux.com:/brick3', 'hostUuid': '2cf3428c-62be-42f4-86d3-ef67e7dfc97f'}, {'name': 'compute3.3linux.com:/brick5', 'hostUuid': '0d94d896-e994-4deb-b1a1-d6f68be86c30'}, {'name': 'compute4.3linux.com:/brick7', 'hostUuid': '918cef1f-ed83-4570-90ad-39c5ceec753c'}, {'name': 'compute1.3linux.com:/brick2', 'hostUuid': '69b19f4e-03c8-4fe1-a644-2701c41d7020'}, {'name': 'compute2.3linux.com:/brick4', 'hostUuid': '2cf3428c-62be-42f4-86d3-ef67e7dfc97f'}, {'name': 'compute3.3linux.com:/brick6', 'hostUuid': '0d94d896-e994-4deb-b1a1-d6f68be86c30'}, {'name': 'compute4.3linux.com:/brick8', 'hostUuid': '918cef1f-ed83-4570-90ad-39c5ceec753c'}], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'user.cifs': 'enable', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-13::DEBUG::2014-11-25 14:55:47,078::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call volumesList with () {}
Thread-13::DEBUG::2014-11-25 14:55:47,089::BindingXMLRPC::1139::vds::(wrapper) return volumesList with {'status': {'message': 'Done', 'code': 0}, 'volumes': {'data1': {'transportType': ['TCP'], 'uuid': 'e6117925-79b1-417b-9d07-cfc31f68bc51', 'bricks': ['compute1.3linux.com:/brick1', 'compute2.3linux.com:/brick3', 'compute3.3linux.com:/brick5', 'compute4.3linux.com:/brick7', 'compute1.3linux.com:/brick2', 'compute2.3linux.com:/brick4', 'compute3.3linux.com:/brick6', 'compute4.3linux.com:/brick8'], 'volumeName': 'data1', 'volumeType': 'DISTRIBUTED_REPLICATE', 'replicaCount': '2', 'brickCount': '8', 'distCount': '2', 'volumeStatus': 'ONLINE', 'stripeCount': '1', 'bricksInfo': [{'name': 'compute1.3linux.com:/brick1', 'hostUuid': '69b19f4e-03c8-4fe1-a644-2701c41d7020'}, {'name': 'compute2.3linux.com:/brick3', 'hostUuid': '2cf3428c-62be-42f4-86d3-ef67e7dfc97f'}, {'name': 'compute3.3linux.com:/brick5', 'hostUuid': '0d94d896-e994-4deb-b1a1-d6f68be86c30'}, {'name': 'compute4.3linux.com:/brick7', 'hostUuid': '918cef1f-ed83-4570-90ad-39c5ceec753c'}, {'name': 'compute1.3linux.com:/brick2', 'hostUuid': '69b19f4e-03c8-4fe1-a644-2701c41d7020'}, {'name': 'compute2.3linux.com:/brick4', 'hostUuid': '2cf3428c-62be-42f4-86d3-ef67e7dfc97f'}, {'name': 'compute3.3linux.com:/brick6', 'hostUuid': '0d94d896-e994-4deb-b1a1-d6f68be86c30'}, {'name': 'compute4.3linux.com:/brick8', 'hostUuid': '918cef1f-ed83-4570-90ad-39c5ceec753c'}], 'options': {'cluster.server-quorum-type': 'server', 'cluster.eager-lock': 'enable', 'performance.stat-prefetch': 'off', 'auth.allow': '*', 'cluster.quorum-type': 'auto', 'performance.quick-read': 'off', 'network.remote-dio': 'enable', 'nfs.disable': 'off', 'performance.io-cache': 'off', 'storage.owner-uid': '36', 'user.cifs': 'enable', 'performance.read-ahead': 'off', 'storage.owner-gid': '36'}}}}
Thread-14::DEBUG::2014-11-25 14:55:48,124::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:48,124::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:48,124::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:48,125::__init__::232::IOProcess::(_processLogs) (25) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:48,129::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:48,129::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:48,129::__init__::232::IOProcess::(_processLogs) Extracting request information...
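The engine keeps polling volumesList every few seconds here; the same information can be cross-checked straight from the gluster CLI on any peer, which is a quick way to confirm VDSM and glusterd agree on the volume state:

  gluster volume info data1
  gluster volume status data1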
Thread-14::DEBUG::2014-11-25 14:55:48,130::__init__::232::IOProcess::(_processLogs) (26) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:48,130::__init__::232::IOProcess::(_processLogs) (25) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:48,131::__init__::232::IOProcess::(_processLogs) (26) Queuing response
Thread-18::DEBUG::2014-11-25 14:55:48,133::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /bin/dd if=/rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-17::DEBUG::2014-11-25 14:55:48,139::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /bin/dd if=/rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso/9b2fdce9-a795-44a7-86ed-99e514d24399/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-18::DEBUG::2014-11-25 14:55:48,144::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n540 bytes (540 B) copied, 0.000493893 s, 1.1 MB/s\n'; <rc> = 0
Thread-17::DEBUG::2014-11-25 14:55:48,147::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n438 bytes (438 B) copied, 0.000305819 s, 1.4 MB/s\n'; <rc> = 0
Thread-14::DEBUG::2014-11-25 14:55:48,148::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:48,149::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:48,149::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:48,150::__init__::232::IOProcess::(_processLogs) (27) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:48,150::__init__::232::IOProcess::(_processLogs) (27) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:48,151::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:48,152::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:48,152::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:48,152::__init__::232::IOProcess::(_processLogs) (28) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:48,152::__init__::232::IOProcess::(_processLogs) (28) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:48,153::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:48,155::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:48,155::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:48,156::__init__::232::IOProcess::(_processLogs) (29) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:48,156::__init__::232::IOProcess::(_processLogs) (29) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:48,157::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:48,157::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:48,157::__init__::232::IOProcess::(_processLogs) Extracting request information...
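Those dd reads are VDSM's periodic read-delay probes against each domain's metadata file; the same one-liner can be run by hand to spot-check storage latency (copied verbatim from the log):

  dd if=/rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1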
Thread-14::DEBUG::2014-11-25 14:55:48,157::__init__::232::IOProcess::(_processLogs) (30) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:48,157::__init__::232::IOProcess::(_processLogs) (30) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:48,158::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:48,158::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:48,159::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:48,159::__init__::232::IOProcess::(_processLogs) (31) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:48,159::__init__::232::IOProcess::(_processLogs) (31) Queuing response
Thread-13::DEBUG::2014-11-25 14:55:52,416::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call volumesList with () {}
Thread-13::DEBUG::2014-11-25 14:55:52,429::BindingXMLRPC::1139::vds::(wrapper) return volumesList with [response for 'data1' identical to the 14:55:47,089 entry above]
Thread-13::DEBUG::2014-11-25 14:55:54,321::task::595::Storage.TaskManager.Task::(_updateState) Task=`6ff3a7bd-adb7-4730-b885-4940f9f85a9b`::moving from state init -> state preparing
Thread-13::INFO::2014-11-25 14:55:54,321::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-25 14:55:54,322::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {'9b2fdce9-a795-44a7-86ed-99e514d24399': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000305819', 'lastCheck': '6.2', 'valid': True}, 'ea499cc0-684f-4a2f-abef-43ff28d649cb': {'code': 0, 'version': 3, 'acquired': False, 'delay': '0.000493893', 'lastCheck': '6.2', 'valid': True}}
Thread-13::DEBUG::2014-11-25 14:55:54,322::task::1191::Storage.TaskManager.Task::(prepare) Task=`6ff3a7bd-adb7-4730-b885-4940f9f85a9b`::finished: {'9b2fdce9-a795-44a7-86ed-99e514d24399': {'code': 0, 'version': 0, 'acquired': True, 'delay': '0.000305819', 'lastCheck': '6.2', 'valid': True}, 'ea499cc0-684f-4a2f-abef-43ff28d649cb': {'code': 0, 'version': 3, 'acquired': False, 'delay': '0.000493893', 'lastCheck': '6.2', 'valid': True}}
Thread-13::DEBUG::2014-11-25 14:55:54,322::task::595::Storage.TaskManager.Task::(_updateState) Task=`6ff3a7bd-adb7-4730-b885-4940f9f85a9b`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-25 14:55:54,322::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-25 14:55:54,322::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-25 14:55:54,322::task::993::Storage.TaskManager.Task::(_decref) Task=`6ff3a7bd-adb7-4730-b885-4940f9f85a9b`::ref 0 aborting False
Thread-13::DEBUG::2014-11-25 14:55:57,749::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call volumesList with () {}
Thread-13::DEBUG::2014-11-25 14:55:57,761::BindingXMLRPC::1139::vds::(wrapper) return volumesList with [response for 'data1' identical to the 14:55:47,089 entry above]
Thread-14::DEBUG::2014-11-25 14:55:58,156::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:58,156::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:58,157::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:58,157::__init__::232::IOProcess::(_processLogs) (32) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:58,157::__init__::232::IOProcess::(_processLogs) (32) Queuing response
Thread-17::DEBUG::2014-11-25 14:55:58,158::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /bin/dd if=/rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso/9b2fdce9-a795-44a7-86ed-99e514d24399/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-17::DEBUG::2014-11-25 14:55:58,165::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n438 bytes (438 B) copied, 0.000302494 s, 1.4 MB/s\n'; <rc> = 0
Thread-14::DEBUG::2014-11-25 14:55:58,166::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:58,167::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:58,168::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:58,168::__init__::232::IOProcess::(_processLogs) (33) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:58,168::__init__::232::IOProcess::(_processLogs) (33) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:58,169::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-18::DEBUG::2014-11-25 14:55:58,170::fileSD::261::Storage.Misc.excCmd::(getReadDelay) /bin/dd if=/rhev/data-center/mnt/glusterSD/10.10.10.3:_data1/ea499cc0-684f-4a2f-abef-43ff28d649cb/dom_md/metadata iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
Thread-14::DEBUG::2014-11-25 14:55:58,170::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:58,176::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:58,176::__init__::232::IOProcess::(_processLogs) (34) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:58,176::__init__::232::IOProcess::(_processLogs) (34) Queuing response
Thread-18::DEBUG::2014-11-25 14:55:58,179::fileSD::261::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n540 bytes (540 B) copied, 0.00041774 s, 1.3 MB/s\n'; <rc> = 0
Thread-14::DEBUG::2014-11-25 14:55:58,179::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:58,179::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:58,180::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:58,180::__init__::232::IOProcess::(_processLogs) (35) Got request for method 'statvfs'
Thread-14::DEBUG::2014-11-25 14:55:58,180::__init__::232::IOProcess::(_processLogs) (35) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:58,181::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:58,181::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:58,181::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:58,181::__init__::232::IOProcess::(_processLogs) (36) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:58,182::__init__::232::IOProcess::(_processLogs) (36) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:58,182::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:58,183::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:58,183::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:58,183::__init__::232::IOProcess::(_processLogs) (37) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:58,183::__init__::232::IOProcess::(_processLogs) (37) Queuing response
Thread-14::DEBUG::2014-11-25 14:55:58,184::__init__::232::IOProcess::(_processLogs) Receiving request...
Thread-14::DEBUG::2014-11-25 14:55:58,184::__init__::232::IOProcess::(_processLogs) Queuing request in the thread pool...
Thread-14::DEBUG::2014-11-25 14:55:58,184::__init__::232::IOProcess::(_processLogs) Extracting request information...
Thread-14::DEBUG::2014-11-25 14:55:58,184::__init__::232::IOProcess::(_processLogs) (38) Got request for method 'access'
Thread-14::DEBUG::2014-11-25 14:55:58,185::__init__::232::IOProcess::(_processLogs) (38) Queuing response
VDSM Logs :-
Thread-13::DEBUG::2014-11-25 14:50:19,926::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {}
Thread-13::DEBUG::2014-11-25 14:50:19,928::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
Thread-13::DEBUG::2014-11-25 14:50:19,934::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-13::DEBUG::2014-11-25 14:50:19,961::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,965::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,966::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,970::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,971::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,971::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,971::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,972::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
Thread-13::DEBUG::2014-11-25 14:50:19,972::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31
Thread-13::DEBUG::2014-11-25 14:50:19,975::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.252.176.16/24'], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.252.176.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'},
'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '186', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '86', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '84', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '84', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': 
'', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.252.176.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': 
'43.252.176.16', 'HOTPLUG': 'no', 'GATEWAY': '43.252.176.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.252.176.16/24']}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}}
Thread-13::DEBUG::2014-11-25 14:50:19,993::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {}
Thread-13::DEBUG::2014-11-25 14:50:19,994::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}}
Thread-13::DEBUG::2014-11-25 14:50:20,096::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [7af0ef43]
Thread-13::ERROR::2014-11-25 14:50:23,104::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
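The GlusterCmdExecFailedException above is vdsm relaying the gluster CLI's own failure: hostsList asks the supervdsm proxy for glusterPeerStatus, which runs the gluster command, and that command prints "Connection failed. Please check if gluster daemon is operational." and exits with code 1 whenever the local glusterd is not running. Below is a rough standalone sketch of that probe, not vdsm's actual code; glusterd_operational is a made-up helper name, and the only assumption is that the gluster CLI is installed on the host:

    # Minimal sketch: reproduce the check that fails in the traceback above.
    # 'gluster peer status' exits non-zero while the local glusterd is down,
    # which matches the "return code: 1" vdsm reports.
    import subprocess

    def glusterd_operational():
        # Returns True only when the local glusterd answers the CLI.
        proc = subprocess.Popen(['gluster', 'peer', 'status'],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()
        return proc.returncode == 0

    if __name__ == '__main__':
        print('glusterd operational: %s' % glusterd_operational())

Running this on the host right after a reboot, before starting glusterd by hand, should print False and reproduce the same rc=1 the traceback reports; after 'service glusterd start' it should flip to True.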
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users