Showing headlines posted by dba477


LVMiSCSI cinder backend for RDO Juno on CentOS 7

This post follows up http://lxer.com/module/newswire/view/207415/index.html. RDO Juno has been installed on the Controller and Compute nodes via packstack as described in the link @lxer.com. The iSCSI target implementation on CentOS 7 differs significantly from CentOS 6.5 and is based on the CLI utility targetcli and the target service. With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software runs as part of the standard systemd structure. Consequently, there will be significant changes in the multi-backend Cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 and utilizing LVM-based iSCSI targets.
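
As a companion to the description above, here is a minimal sketch of what the CentOS 7 side of an LVM/iSCSI Cinder backend can look like, assuming a volume group named cinder-volumes and a backend label LVM_iSCSI (both illustrative, not taken from the post):

    # CentOS 7: the LIO target service managed via targetcli replaces tgtd from CentOS 6.5
    systemctl enable target
    systemctl start target

    # Fragment of /etc/cinder/cinder.conf for one LVM/iSCSI backend (names are placeholders)
    # [DEFAULT]
    # enabled_backends = lvm
    #
    # [lvm]
    # volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
    # volume_group = cinder-volumes
    # iscsi_helper = lioadm
    # volume_backend_name = LVM_iSCSI

    # After creating a volume, the resulting LIO configuration can be inspected with
    targetcli ls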

RDO Juno Set up Two Real Node (Controller+Compute) Gluster 3.5.2 Cluster ML2&OVS&VXLAN on CentOS 7

The post below follows up http://cloudssky.com/en/blog/RDO-OpenStack-Juno-ML2-VXLAN-2-...; however, the answer file provided here allows creating the Controller && Compute Node in a single run. Based on the RDO Juno release as of 10/27/2014, it doesn't require creating the OVS bridge br-ex and the OVS port enp2s0 on the Compute Node. It also doesn't install the nova-compute service on the Controller. The Gluster 3.5.2 setup is also performed in a way that differs from the similar procedure on the IceHouse && Havana RDO releases.
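
For illustration, a fragment of a Juno packstack answer file producing such a two-node (Controller + Compute) ML2/OVS/VXLAN layout might look as follows; the IP addresses and the tunnel interface name are placeholders, not values from the post:

    CONFIG_CONTROLLER_HOST=192.168.1.127
    CONFIG_COMPUTE_HOSTS=192.168.1.137
    CONFIG_NETWORK_HOSTS=192.168.1.127
    CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
    CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
    CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
    CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1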

Understanding Packet Flows in OpenStack Neutron

A Neutron setup is composed of numerous interfaces, such as br-int, br-tun, br-ex, and eth1/2/3. For beginners it's usually hard to understand what route packets will take through these devices and hosts, so let's take a closer look.
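
A quick sketch of the commands typically used to trace such a path on a compute or network node (the router UUID is a placeholder):

    ovs-vsctl show                                 # bridges (br-int, br-tun, br-ex) and their ports
    ovs-ofctl dump-flows br-tun                    # flows carrying tunnelled tenant traffic between nodes
    ip netns list                                  # qrouter-*/qdhcp-* namespaces on the network node
    ip netns exec qrouter-<router-uuid> ip route   # routing as seen by a particular virtual router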

Setup QCOW2 standard CentOS 7 cloud image to work with 2 VLANs on IceHouse ML2&OVS&GRE System

Notice that the same schema would work for any F20 or Ubuntu QCOW2 cloud image via a qemu-nbd mount, increasing the number of NIC interface files to 2, 3, ... The approach suggested here is universal: any Cinder volume built up on the updated Glance image (2 NICs ready) would be 2 NICs ready as well.
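
A minimal sketch of the qemu-nbd approach mentioned above, assuming a CentOS 7 GenericCloud image whose root filesystem sits on the first partition (file name and partition are illustrative):

    modprobe nbd max_part=8
    qemu-nbd -c /dev/nbd0 CentOS-7-x86_64-GenericCloud.qcow2
    mount /dev/nbd0p1 /mnt
    # add a config file for the second NIC
    printf 'DEVICE=eth1\nONBOOT=yes\nBOOTPROTO=dhcp\nTYPE=Ethernet\n' \
        > /mnt/etc/sysconfig/network-scripts/ifcfg-eth1
    umount /mnt
    qemu-nbd -d /dev/nbd0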

Setup CentOS 7 cloud instance on IceHouse Neutron ML2&OVS&GRE System

A CentOS 7.0 qcow2 image for Glance is now available at http://openstack.redhat.com/Image_resources. Even though dhcp-option 26,1454 is set up on the system, the current image loads with MTU 1500. The workaround for now is to launch the instance with no SSH keypair and with a post-installation script.
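
One way such a post-installation script could look; the script name, password and network ID below are hypothetical, not values from the post:

    #!/bin/bash
    # fix-mtu.sh - hypothetical user-data script
    ip link set dev eth0 mtu 1454                  # match dhcp-option 26,1454
    echo 'centos:password123' | chpasswd           # placeholder password for console/ssh login
    sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
    systemctl restart sshd

The instance is then launched with the script passed as user-data instead of an SSH keypair:

    nova boot --flavor m1.small --image "CentOS 7 x86_64" \
        --nic net-id=<private-net-id> --user-data ./fix-mtu.sh CentOS7RS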

Setup Gluster 3.5.2 on Two Node Controller&Compute Neutron ML2&VXLAN&OVS CentOS 7 Cluster

This post is an update of the previous one - RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7, http://bderzhavets.blogspot.com/2014/07/rdo-setup-two-real-n... It focuses on the Gluster 3.5.2 implementation, including tuning the /etc/sysconfig/iptables files on the CentOS 7 Controller and Compute Nodes.
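
For reference, the iptables additions for Gluster traffic typically look like the fragment below; the brick port range depends on how many bricks each node exports, so treat the numbers as illustrative. The rules belong in /etc/sysconfig/iptables before the final REJECT rule, followed by an iptables service restart on both nodes:

    -A INPUT -p tcp -m tcp --dport 24007:24008 -j ACCEPT   # glusterd / management
    -A INPUT -p tcp -m tcp --dport 49152:49156 -j ACCEPT   # brick ports, one per brick
    -A INPUT -p tcp -m tcp --dport 111 -j ACCEPT           # portmapper
    -A INPUT -p udp -m udp --dport 111 -j ACCEPT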

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster on CentOS 7

Two boxes have been set up, each one having 2 NICs (enp2s0, enp5s1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=TwoNodeVXLAN.txt`, SELINUX was set to permissive on both nodes. Both enp5s1 interfaces were assigned IPs (192.168.0.127, 192.168.0.137) and set to promiscuous mode. The firewalld and NetworkManager services were disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface enp2s0, 192.169.1.127; the Compute Node is 192.169.1.137 (view the answer file).
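
The pre-packstack preparation described above boils down to a handful of commands on each node; a sketch, using the interface name from this setup:

    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
    systemctl stop firewalld NetworkManager
    systemctl disable firewalld NetworkManager
    yum -y install iptables-services
    systemctl enable iptables network
    systemctl start iptables network
    ip link set dev enp5s1 promisc on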

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VLAN Cluster on CentOS 7

As of 07/14/2014 the bug https://ask.openstack.org/en/question/35705/attempt-of-rdo-a... is still pending, and the workaround suggested there should be applied during a two-node RDO packstack installation. Successful implementation of a Neutron ML2&&OVS&&VLAN multi-node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack.
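
As a rough illustration of what a working ml2_conf.ini for an ML2/OVS/VLAN setup contains (the physical network name, bridge and VLAN range are placeholders; depending on the RDO layout, the [ovs] agent options may live in this file or in ovs_neutron_plugin.ini):

    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch

    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:200

    [securitygroup]
    enable_security_group = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

    [ovs]
    bridge_mappings = physnet1:br-eth1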

Setup Light Weight X Windows environment (Enlightenment) on Fedora 20 Cloud instance

Needless to say, setting up a lightweight X environment on Fedora 20 cloud instances is very important for comfortable work inside the VM; on an Ubuntu Trusty cloud server, for instance, a single command installs the E17 environment: `apt-get install xorg e17 firefox`. For some reason E17 was dropped from the official F20 repos and may be functional only on top of a prior MATE Desktop setup on the VM.

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VLAN Cluster on F20

Successful implementation of a Neutron ML2&OVS&VLAN multi-node setup requires a correct version of plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack. Several days of playing with plugin.ini allowed me to build a properly working system.

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&GRE Cluster on F20

Finally, I've designed an answer file creating ml2_conf.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini.

RDO IceHouse Setup Two Node Neutron ML2&OVS&GRE Cluster on Fedora 20

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Nodes setup.

Two Real Node (Controller+Compute) IceHouse Neutron OVS&GRE Cluster on Fedora 20

Two boxes have been set up, each one having 2 NICs (p37p1, p4p1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=TwoRealNodeOVS&GRE.txt`, SELINUX was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs (192.168.0.127, 192.168.0.137) and set to promiscuous mode. The firewalld and NetworkManager services were disabled; the IPv4 firewall with iptables and the network service are enabled and running.

Two Real Node (Controller+Compute) IceHouse Neutron OVS&VLAN Cluster on Fedora 20 Setup

Two boxes, each one having 2 NICs (p37p1, p4p1), have been set up for the (Controller+NeutronServer) && Compute Nodes. Setup configuration - Controller node: Nova, Keystone, Cinder, Glance, Neutron (using the Open vSwitch plugin && VLAN), Nova (nova-compute); Compute node: Nova (nova-compute), Neutron (openvswitch-agent).

Two Node (Controller+Compute) IceHouse Neutron OVS&VLAN Cluster on Fedora 20

Two KVMs have been created on a non-default libvirt subnet, each one having 2 virtual NICs (eth0, eth1), for the Controller and Compute Nodes setup. Before running `packstack --answer-file=twoNode-answer.txt`, SELINUX was set to permissive on both nodes. Interfaces eth1 on both nodes were set to promiscuous mode (e.g. HWADDRESS was commented out). The Controller (F20) still has to run `ifdown br-ex ; ifup br-ex` at boot up. The RDO install went smoothly, like on CentOS 6.5.
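
One possible way to make the br-ex bounce happen at boot is a classic rc.local entry; a sketch, assuming /etc/rc.d/rc.local does not yet exist on F20:

    printf '#!/bin/bash\nifdown br-ex ; ifup br-ex\nexit 0\n' > /etc/rc.d/rc.local
    chmod +x /etc/rc.d/rc.local    # systemd's rc-local.service runs this file when it is executable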

Two Node (Controller+Compute) IceHouse Neutron OVS&VLAN Cluster on CentOS 6.5

Two KVMs have been created, each one having 2 virtual NICs (eth0, eth1), for the Controller && Compute Nodes setup. Before running `packstack --answer-file=twoNode-answer.txt`, SELINUX was set to permissive on both nodes. The NetworkManager service was disabled and the network service enabled. Interfaces eth1 on both nodes were set to promiscuous mode.

Ubuntu Trusty && Cirros Cloud Instances (IceHouse) without floating IP working on the Net

This post is supposed to demonstrate that the Neutron DHCP, Metadata, and L3 agents (services) and the OVS plugin, properly configured in RDO IceHouse, provide outbound connectivity for a cloud instance upon creation without assigning this instance a floating IP.
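
The mechanism behind this is the default SNAT performed by the L3 agent in the router namespace; it can be verified with the commands below (the router UUID is a placeholder):

    ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep SNAT
    # and from inside the freshly booted instance:
    ping -c 3 8.8.8.8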

Setup Horizon Dashboard-2014.1 on F20 Havana Controller

A recent upgrade of firefox to 29.0-5 on Fedora 20 causes login to the Dashboard Console to fail for a Havana F20 Controller set up per 'VNC Console in Dashboard on Two Node Neutron GRE+OVS F20 Cluster' @Lxer.com. The procedure below actually backports the F21 packages python-django-horizon-2014.1-1, python-django-openstack-auth-1.1.5-1, and python-pbr-0.7.0-2 via a manual install of the corresponding SRC.RPMs and invocation of the rpmbuild utility to produce F20 packages. The hard part is knowing which packages to backport.
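
The general shape of such a backport, sketched with the horizon package as an example (the exact SRPM and spec file names below are inferred from the package list above and may differ):

    yum -y install rpm-build
    rpm -ivh python-django-horizon-2014.1-1.fc21.src.rpm
    cd ~/rpmbuild/SPECS
    rpmbuild -ba python-django-horizon.spec        # install any missing BuildRequires first
    yum localinstall ~/rpmbuild/RPMS/noarch/python-django-horizon-*.noarch.rpm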

Basic troubleshooting AIO RDO Havana Instance on CentOS 6.5

This Howto is supposed to show some internals which are usually unknown to people just starting with an AIO (--allinone) Havana setup on CentOS 6.5.
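
A few first-look commands of the kind such troubleshooting usually starts with (openstack-status comes from the openstack-utils package):

    openstack-status                     # summary of all OpenStack service states
    nova service-list                    # nova services and their state per host
    neutron agent-list                   # DHCP/L3/OVS agents and whether they are alive
    grep -i error /var/log/nova/*.log /var/log/neutron/*.log | less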

RDO Havana Neutron&GRE L3 Layer Troubleshooting

This post follows up http://lxer.com/module/newswire/view/200975/index.html. The external and internal interfaces of the qrouter namespaces are mapped to the `ovs-vsctl show` output; the interfaces of the qdhcp namespaces are mapped to the `ovs-vsctl show` output as well. Traffic control is done via tcpdump on the external and internal network interfaces of the qrouter namespace associated with the network attached, via an external gateway, to the router whose interface for the private subnet serves an Ubuntu Trusty Server instance running `apt-get install` commands.
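
A sketch of the namespace/tcpdump workflow described above (UUIDs and OVS port names are placeholders):

    ip netns list
    ip netns exec qrouter-<router-uuid> ip addr       # locate the qg-* (external) and qr-* (internal) ports
    ip netns exec qrouter-<router-uuid> tcpdump -ln -i qg-xxxxxxxx-xx
    ip netns exec qrouter-<router-uuid> tcpdump -ln -i qr-xxxxxxxx-xx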
