Showing headlines posted by dba477


Two Real Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

Two boxes have been set up, each with two NICs (p37p1, p4p1), to serve as the Controller and Compute nodes. Before running `packstack --answer-file=TwoRealNodeOVS&GRE.txt`, SELinux was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs (192.168.0.127, 192.168.0.137) and set to promiscuous mode. The firewalld and NetworkManager services were disabled; the IPv4 iptables firewall and the network service were enabled and left running.
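
A minimal sketch of that per-node preparation (Controller shown; the Compute node would use 192.168.0.137, and details vary by Fedora/packstack release):

    setenforce 0                                   # SELinux permissive for the running system
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
    systemctl disable firewalld NetworkManager ; systemctl stop firewalld NetworkManager
    systemctl enable iptables network ; systemctl start iptables network
    ip addr add 192.168.0.127/24 dev p4p1          # this node's GRE endpoint address
    ip link set p4p1 promisc on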

Two Real Node (Controller+Compute) IceHouse Neutron OVS&VLAN Cluster on Fedora 20 Setup

Two boxes, each with two NICs (p37p1, p4p1), have been set up as the (Controller + Neutron Server) and Compute nodes. Setup configuration: the Controller node runs Nova, Keystone, Cinder, Glance, Neutron (using the Open vSwitch plugin with VLANs), and Nova (nova-compute); the Compute node runs Nova (nova-compute) and Neutron (openvswitch-agent).
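
For reference, the VLAN-specific fragment of such a packstack answer file might look as follows; the key names follow IceHouse-era packstack conventions, and the VLAN range, physnet, and bridge names here are hypothetical:

    CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vlan
    CONFIG_NEUTRON_OVS_VLAN_RANGES=physnet1:100:200
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-p4p1
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-p4p1:p4p1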

Two Node (Controller+Compute) IceHouse Neutron OVS&VLAN Cluster on Fedora 20

Two KVM guests have been created on a non-default libvirt subnet, each with two virtual NICs (eth0, eth1), to serve as the Controller and Compute nodes. Before running `packstack --answer-file=twoNode-answer.txt`, SELinux was set to permissive on both nodes. The eth1 interface on both nodes was set to promiscuous mode (e.g. the HWADDR line was commented out). The Controller (F20) still has to run `ifdown br-ex ; ifup br-ex` at boot. The RDO install went as smoothly as on CentOS 6.5.
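
One way to script the br-ex workaround is a boot-time hook; a sketch via rc.local (which must be made executable with `chmod +x /etc/rc.d/rc.local` on F20):

    #!/bin/bash
    # /etc/rc.d/rc.local on the F20 Controller: re-plumb br-ex after boot
    ifdown br-ex ; ifup br-ex
    exit 0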

Two Node (Controller+Compute) IceHouse Neutron OVS&VLAN Cluster on CentOS 6.5

Two KVM guests have been created, each with two virtual NICs (eth0, eth1), to serve as the Controller and Compute nodes. Before running `packstack --answer-file=twoNode-answer.txt`, SELinux was set to permissive on both nodes. The NetworkManager service was disabled and the network service enabled. The eth1 interface on both nodes was set to promiscuous mode.
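
On CentOS 6.5 that preparation amounts to a few service and interface commands; a sketch (the promisc setting is runtime-only unless persisted in ifcfg-eth1):

    chkconfig NetworkManager off ; service NetworkManager stop
    chkconfig network on ; service network restart
    ip link set eth1 promisc on        # not persistent across reboots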

Ubuntu Trusty && Cirros Cloud Instances (IceHouse) without floating IP working on the Net

This post demonstrates that the Neutron DHCP, metadata, and L3 agents (services) and the OVS plugin, properly configured in RDO IceHouse, provide outbound connectivity for a cloud instance upon creation without assigning the instance a floating IP.
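
Outbound traffic without a floating IP works because the L3 agent SNATs tenant traffic out through the router's gateway port; a quick way to confirm it (the router UUID placeholder is hypothetical):

    ip netns                                                    # find the qrouter-<UUID> namespace
    ip netns exec qrouter-<UUID> iptables -t nat -S | grep SNAT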

Setup Horizon Dashboard-2014.1 on F20 Havana Controller

A recent upgrade of Firefox to 29.0-5 on Fedora 20 causes login to the Dashboard console to fail on a Havana F20 Controller set up per 'VNC Console in Dashboard on Two Node Neutron GRE+OVS F20 Cluster' @LXer.com. The procedure below backports the F21 packages python-django-horizon-2014.1-1, python-django-openstack-auth-1.1.5-1, and python-pbr-0.7.0-2 by manually installing the corresponding SRC.RPMs and invoking the rpmbuild utility to produce F20 packages. The hard part is knowing which packages to backport.
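
The backport itself is the usual SRC.RPM round trip; a sketch for one of the three packages (the file and spec names here are approximate):

    rpm -ivh python-django-horizon-2014.1-1.fc21.src.rpm       # unpack SRPM into ~/rpmbuild
    rpmbuild -ba ~/rpmbuild/SPECS/python-django-horizon.spec   # build F20 binary RPMs
    yum localinstall ~/rpmbuild/RPMS/noarch/python-django-horizon-*.rpm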

Basic troubleshooting AIO RDO Havana Instance on CentOS 6.5

This HowTo shows some internals that are usually unknown to people just starting with an AIO (--allinone) Havana setup on CentOS 6.5.
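
A first step in that kind of troubleshooting is simply seeing what packstack left running; on RDO the openstack-utils package provides a one-shot summary (a sketch, not the article's exact sequence):

    openstack-status                                           # per-service status summary
    grep -i error /var/log/nova/*.log /var/log/neutron/*.log   # scan for obvious failures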

RDO Havana Neutron&GRE L3 Layer Troubleshooting

This post follows up http://lxer.com/module/newswire/view/200975/index.html. The external and internal interfaces of qrouter namespaces are mapped to `ovs-vsctl show` output, as are the interfaces of qdhcp namespaces. Traffic is monitored via tcpdump on the external and internal network interfaces of the qrouter namespace associated with the network attached, via the external gateway, to the router whose private-subnet interface serves an Ubuntu Trusty Server instance running `apt-get install` commands.
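
The mapping exercise boils down to comparing the interface lists from both sides; a sketch with hypothetical UUID placeholders:

    ip netns                                       # lists qrouter-<UUID> and qdhcp-<UUID>
    ip netns exec qrouter-<UUID> ip addr           # qg-xxxx = external, qr-xxxx = internal
    ovs-vsctl show                                 # locate the same qg-/qr- ports on br-ex/br-int
    ip netns exec qrouter-<UUID> tcpdump -ln -i qg-xxxxxxxx-xx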

RDO Havana Neutron Namespaces Troubleshooting for OVS&VLAN (GRE) Config

The OpenStack Networking components are deployed on the Controller, Compute, and Network nodes in the following configuration . . .

Identifying and Troubleshooting Neutron Namespaces (OpenStack)

I believe this short note explains complicated concepts in fairly simple words. It's really hard to find this kind of HowTo on the Net.
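
In practice, identifying them takes one command; each DHCP server and each router gets its own namespace, named after the network or router UUID:

    ip netns
    # qdhcp-<network-UUID>   one namespace per network with DHCP enabled
    # qrouter-<router-UUID>  one namespace per neutron router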

HowTo access metadata from RDO Havana Instance on Fedora 20

Reproducing "Dirrect access to Nova metadata" I was able to get only list of EC2 metadata available, but not the values. However, the major concern is getting values of metadata obtained in review "Direct access to Nova metadata" and also at /openstack location. The last ones seem to me important not less then present in EC2. This metadata are also not provided by this list.

Attempt to reproduce Direct access to Nova metadata per Lars Kellogg-Stedman

Quoting http://blog.oddbit.com/2014/01/14/direct-access-to-nova-meta... : in an environment running Neutron, a request from your instance must traverse a number of steps: from the instance to a router, through a NAT rule in the router namespace, to an instance of neutron-ns-metadata-proxy, and finally to the actual Nova metadata service.
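
The NAT step in that chain is visible inside the router namespace, where requests to 169.254.169.254:80 are redirected to the local neutron-ns-metadata-proxy (9697 is the conventional port; the UUID placeholder is hypothetical):

    ip netns exec qrouter-<UUID> iptables -t nat -S | grep 169.254
    # expect something like: -d 169.254.169.254/32 -p tcp --dport 80 -j REDIRECT --to-ports 9697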

Launching Instances via image, simultaneously creating a bootable cinder volume, on Two Node GRE+OVS F20 Cluster

This article is about getting the kind of `nova boot .. ` command mentioned above working via CLI or Dashboard: launching instances from an image while simultaneously creating bootable cinder volumes (on glusterfs).
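
The combined operation is a single `nova boot` with a block-device mapping; a sketch, with flavor, image ID, size, and instance name as placeholders (the exact --block-device syntax depends on the novaclient version):

    nova boot --flavor m1.small \
      --block-device source=image,id=<IMAGE_ID>,dest=volume,size=5,shutdown=preserve,bootindex=0 \
      MyInstance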

RDO Hangout: Multinode OpenStack with Packstack (February 27, 2014)

I am just wondering what will happen when, by the end of 2014, a similar "packstack multinode setup via standard prepared answer-files" runs on RHEL 7 & CentOS 7 (F19-based) server landscapes. These slides, http://oddbit.com/rdo-hangout-multinode-packstack-slides/, would work for almost any sysadmin, even one who has no idea what OS really sits under OpenStack or what Juju is.

Setup Neutron&Dnsmasq on Two Node GRE+OVS F20 Cluster to launch VMs with MTU 1454 automatically

When you first boot a cloud instance (RDO Havana) via `nova boot ..` or via the Dashboard, associated with an ssh keypair, the default MTU of the instance's eth0 is 1500. With GRE tunnelling, that value makes attempts to connect to the instance via ssh useless.
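
The usual fix is to have the DHCP agent push MTU 1454 to instances through a custom dnsmasq configuration (DHCP option 26 is interface-mtu; restart the neutron dhcp agent afterwards):

    # /etc/neutron/dhcp_agent.ini
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
    # /etc/neutron/dnsmasq-neutron.conf
    dhcp-option-force=26,1454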

VNC Console in Dashboard on Two Node Neutron GRE+OVS F20 Cluster (Revised)

  • http://bderzhavets.blogspot.com; By Boris Derzhavets (Posted by dba477 on Mar 16, 2014 1:14 AM EDT)
  • Story Type: Tutorial; Groups: Virtualization
This post follows up http://lxer.com/module/newswire/view/197613/index.html. In particular, it can be performed after the basic setup to make system management more comfortable than CLI alone. For instance, assigning a floating IP becomes just a mouse click instead of a shell script.
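
For comparison, the shell version of that one mouse click is roughly the following (the network name and ID placeholders are hypothetical):

    neutron floatingip-create ext                         # allocate from the external network
    neutron port-list | grep <instance-fixed-ip>          # find the instance's port ID
    neutron floatingip-associate <FLOATINGIP_ID> <PORT_ID>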

Setup Gluster 3.4.2 on Two Node GRE+OVS F20 Cluster

  • http://bderzhavets.blogspot.com; By Boris Derzhavets (Posted by dba477 on Mar 9, 2014 8:34 PM EDT)
  • Story Type: Tutorial; Groups: Virtualization
This post is an update to http://bderzhavets.blogspot.com/2014/01/setting-up-two-physi... . It focuses on the Gluster 3.4.2 implementation, including tuning the /etc/sysconfig/iptables files on the Controller and Compute nodes, copying the ssh key from the master node to the compute node, step-by-step verification of gluster volume replica 2 functionality, and switching the RDO Havana cinder service to work with the gluster volume created to store instances' bootable cinder volumes for a performance improvement.
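
The gluster side of the setup reduces to a handful of commands; a sketch with hypothetical host and brick names:

    gluster peer probe <compute-node>
    gluster volume create cinder-volumes replica 2 \
        <controller>:/<brick-path> <compute-node>:/<brick-path>
    gluster volume start cinder-volumes
    gluster volume info cinder-volumes        # verify Type: Replicate, 1 x 2 = 2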

Up-to-date procedure for creating cinder ThinLVM-based cloud instances on F20 Havana Compute Node

  • http://bderzhavets.blogspot.com; By Boris Derzhavets (Posted by dba477 on Mar 7, 2014 7:13 PM EDT)
  • Story Type: Tutorial; Groups: Virtualization
The schema below avoids running two separate commands: one creating a cinder volume, and a second booting an instance from the volume created. It allows creating the volume on the F20 Controller node and booting the instance from that volume on the F20 Compute node in one step.
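
For contrast, the two commands being collapsed would be roughly (IDs and names are placeholders):

    cinder create --image-id <IMAGE_ID> --display-name MyVolume 5      # step 1: volume from image
    nova boot --flavor m1.small --boot-volume <VOLUME_ID> MyInstance   # step 2: boot from it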

USB Redirection hack

  • http://bderzhavets.blogspot.com; By Boris Derzhavets (Posted by dba477 on Feb 28, 2014 2:38 PM EDT)
  • Story Type: Tutorial; Groups: Virtualization
I clearly understand that only an incomplete Havana RDO setup allows me to activate spice USB redirection when communicating with cloud instances. There is no Dashboard (administrative web console) on the cluster. All information about nova instance status and neutron subnets, routers, and ports has to be obtained via CLI, and managing instances, subnets, routers, ports, and rules is likewise done via CLI, carefully sourcing the "keystonerc_user" file so as to work in the environment of a particular user of a particular tenant.
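
A keystonerc_user file is just a set of exported credentials; sourcing it scopes every subsequent CLI call to that user's tenant (all values below are hypothetical):

    # keystonerc_demo
    export OS_USERNAME=demo
    export OS_TENANT_NAME=demotenant
    export OS_PASSWORD=secret
    export OS_AUTH_URL=http://192.168.0.127:5000/v2.0/
    # . keystonerc_demo && nova list && neutron router-list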

Ongoing testing

  • Xen Virtualization on Linux and Solaris; By Boris Derzhavets (Posted by dba477 on Feb 21, 2014 12:14 PM EDT)
  • Story Type: Tutorial; Groups: Red Hat
The following was accepted as the correct answer on 02/18/2014 at https://ask.openstack.org/en/question/9365/trouble-installin... The OpenStack Nova software determined that my Compute node may run no more than 5 cloud instances at a time. So, to create a new one, I just need `nova list` to show no more than four entries; then I will be able to create a new instance for sure. This has been tested on two "Two Node Neutron GRE+OVS" systems, watching `tail -f /var/log/nova/scheduler.log` in another window for AMQP server connection ERRORs. It looks like the AMQP server eventually breaks its connection to the nova and neutron-server services, which may require restarting the qpidd and openstack-nova-scheduler services.
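
Concretely, the watch-and-restart loop described above looks like this (SysV-style service names, as in the original):

    tail -f /var/log/nova/scheduler.log | grep -i amqp     # watch for connection ERRORs
    service qpidd restart                                  # if the AMQP link has dropped
    service openstack-nova-scheduler restart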
