Re: Essex EPEL Testing

Hi David,
   see below

On 04/25/2012 11:04 PM, Brown, David M JR wrote:
> Fedora Devs,
> 
> I just spent the last couple of days fighting with Essex on RHEL6. It's been entertaining, and I'd like to share some of the oddities and experiences.
> 
> System configuration is the following.
> 
> Two nodes, each on their own /24, connected to each other by crossover cable on the second interface.
> The first node is the cloud controller and has tons of storage (11 TB), 32 GB of RAM, and 16 cores.
> The second node, which I would like to make an extra compute node, has 24 GB of RAM and 8 cores (still a work in progress).
> 
> Originally the cloud controller was running Diablo on RHEL6 and was working fine.
> 
> I couldn't find any 'upgrade' instructions for going from Diablo to Essex, and I wasn't too worried because usage of the cloud was limited to just a couple of guys, so I was satisfied with manually backing up all the data and rebuilding the cluster. I noticed that when I did the update things stopped working, and that following the install instructions blew away all local data in the cloud.
> 
> I was following the instructions found at the following URL.
> 
> http://fedoraproject.org/wiki/Getting_started_with_OpenStack_EPEL
> 
> I got the packages from
> 
> http://pbrady.fedorapeople.org/openstack-el6/
> 
> First issue. Wow, this is long. It's almost long enough that an uber script shipped in a common package somewhere would replace most of the manual commands. I'd suggest first pulling all the openstack-config-set commands out into a single script to run; see the sketch below. I'm not sure what to do about the swift documentation bits; that seems like a very manual set of configurations. Why aren't they part of the swift rpm? Another suggestion would be to split the page into a couple of documents: one describing installation and configuration, and the next describing putting data/users into it and starting things up. Thoughts?
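> 
> As a rough sketch (the keys and values here are just examples taken from my own nova.conf below; the real script would carry every openstack-config-set line from the wiki):
> 
> #!/bin/sh
> # gather the wiki's manual openstack-config-set calls in one place
> openstack-config-set /etc/nova/nova.conf DEFAULT auth_strategy keystone
> openstack-config-set /etc/nova/nova.conf DEFAULT rpc_backend nova.rpc.impl_qpid
> openstack-config-set /etc/nova/nova.conf DEFAULT network_manager nova.network.manager.FlatDHCPManager
> openstack-config-set /etc/nova/nova.conf DEFAULT sql_connection mysql://nova:nova@localhost/nova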
> 
> After I got everything set up and working, I noticed an issue with the dashboard: most of the static content wasn't showing up. I had to add a symlink:
> /usr/share/openstack-dashboard/static -> openstack_dashboard/static
> Then the dashboard picked up the right content and it worked.
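> 
> i.e. something like this (from memory, so double-check the paths on your install):
> 
> cd /usr/share/openstack-dashboard
> ln -s openstack_dashboard/static static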
> 
> There are some consistency issues, and I'm not sure whether this is an OpenStack issue in general. The euca tools, configured as documented with keystone, only seem to work with your personal instances and configuration, whereas the dashboard shows users everything associated with the project. For example, floating IPs I allocate from the website won't show up when I run euca-describe-addresses, and conversely an IP allocated with euca-allocate-address won't show up in the dashboard. I've looked at the database: project ids are used when going through the dashboard, and user ids are used when going through the euca tools. I think the euca tools could be set up to see everything the dashboard sees, but the documentation doesn't explain how.
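> 
> You can see the mismatch directly in the database with something like this (assuming the stock Essex schema, where floating_ips carries a project_id column):
> 
> mysql -u nova -p nova -e 'select address, project_id from floating_ips;'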
> 
> There also seem to be some serious functionality faults that I can't work around. I can't attach a user to multiple projects, and I'm not sure how to do that. Also, there's a lot of "huh, that doesn't seem to be implemented yet." This seems like a general OpenStack issue, though: the documentation says X, but that doesn't work yet, or doesn't work anymore.
> 
> I'm having a serious issue getting the second compute node working: `nova-manage service list' doesn't show ':-)' for the compute and network services running on that node. I've followed the instructions to the letter and tried to get things working, but it's not going.
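> 
> Roughly what I see (output reconstructed from memory, hostnames and timestamps are placeholders):
> 
> Binary           Host      Zone   Status    State  Updated_At
> nova-compute     node01    nova   enabled   :-)    2012-04-25 ...
> nova-compute     node02    nova   enabled   XXX    2012-04-25 ...
> nova-network     node02    nova   enabled   XXX    2012-04-25 ...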
> 
> nova.conf for the controller:
> 
> [DEFAULT]
> logdir = /var/log/nova
> state_path = /var/lib/nova
> lock_path = /var/lib/nova/tmp
> dhcpbridge = /usr/bin/nova-dhcpbridge
> dhcpbridge_flagfile = /etc/nova/nova.conf
> force_dhcp_release = False
> injected_network_template = /usr/share/nova/interfaces.template
> libvirt_xml_template = /usr/share/nova/libvirt.xml.template
> libvirt_nonblocking = True
> vpn_client_template = /usr/share/nova/client.ovpn.template
> credentials_template = /usr/share/nova/novarc.template
> network_manager = nova.network.manager.FlatDHCPManager
> iscsi_helper = tgtadm
> sql_connection = mysql://nova:nova@localhost/nova
> connection_type = libvirt
> firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
> rpc_backend = nova.rpc.impl_qpid
> root_helper = sudo nova-rootwrap
> auth_strategy = keystone
> public_interface = eth0
> quota_floating_ips = 100
> 
> nova.conf on the compute node:
> 
> [DEFAULT]
> logdir = /var/log/nova
> state_path = /var/lib/nova
> lock_path = /var/lib/nova/tmp
> dhcpbridge = /usr/bin/nova-dhcpbridge
> dhcpbridge_flagfile = /etc/nova/nova.conf
> force_dhcp_release = True
> injected_network_template = /usr/share/nova/interfaces.template
> libvirt_xml_template = /usr/share/nova/libvirt.xml.template
> libvirt_nonblocking = True
> vpn_client_template = /usr/share/nova/client.ovpn.template
> credentials_template = /usr/share/nova/novarc.template
> network_manager = nova.network.manager.FlatDHCPManager
> iscsi_helper = tgtadm
> sql_connection = mysql://nova:nova@CC_NAME/nova
> connection_type = libvirt
> firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
> rpc_backend = nova.rpc.impl_qpid
> root_helper = sudo nova-rootwrap
> rabbit_host = CC_NAME
> glance_api_servers = CC_NAME:9292
> iscsi_ip_prefix = CC_ADDR
> public_interface = eth2
> verbose = True
> s3_host = CC_NAME
> ec2_api = CC_NAME
> ec2_url = http://CC_NAME:8773/services/Cloud
> fixed_range = 10.0.0.0/24
> network_size = 256
> 
> Any help would be appreciated.

It looks to me like you're missing qpid_hostname on the compute node.
Because you're using qpid as the rpc backend, I think rabbit_host is
ignored; try qpid_hostname = <ipaddr>.
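
For example, in /etc/nova/nova.conf on the compute node (using the same
CC_NAME placeholder as in your config; the controller's IP works too):

[DEFAULT]
rpc_backend = nova.rpc.impl_qpid
qpid_hostname = CC_NAME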

That config param on the wiki needs to be updated; I'll sort that out
now. I also noticed it says to start the network service on the compute
node. I don't think this is required (in fact it might cause problems),
so I would stop and disable it there, as below. I'll run through this
section of the wiki and see if anything else needs updating. If you
notice anything else yourself, feel free to update the document or post
here.
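
On RHEL6 that would be something like this (service name as packaged in
the openstack-el6 repo):

service openstack-nova-network stop
chkconfig openstack-nova-network off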

Hope this helps,
Thanks,
Derek.
_______________________________________________
cloud mailing list
cloud@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/cloud


