Showing headlines posted by dba477


DVR setup on RDO Liberty with separate Controller && Network Nodes

The setup below was carefully tested against Mitaka Milestone 1, which should make it possible to verify the fix for Bug #1365473, "Unable to create a router that's both HA and distributed." The Delorean repos are now supposed to be rebuilt and ready for testing via RDO deployment within a week after each Mitaka milestone.
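For reference, the failure this bug tracks can be reproduced with the Liberty-era neutron CLI in one line (the router name is arbitrary):

    # Attempt to create a router that is both HA and distributed;
    # per Bug #1365473 this is rejected until the fix lands
    neutron router-create --ha True --distributed True RouterDemo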

Storage Node (LVMiSCSI) deployment for RDO Liberty on CentOS 7.1

The posting below, based on a straightforward RDO Liberty deployment, demonstrates that the Storage Node can work as a traditional iSCSI target server while each Compute Node acts as an iSCSI initiator client. This functionality is provided by tuning the Cinder && Glance services running on the Storage Node.
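As a rough sketch of the tuning involved (the volume group name and the Storage Node IP below are illustrative, not taken from the post):

    # Storage Node: serve LVM volumes over iSCSI via Cinder
    openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm
    openstack-config --set /etc/cinder/cinder.conf lvm volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
    openstack-config --set /etc/cinder/cinder.conf lvm volume_group cinder-volumes
    openstack-config --set /etc/cinder/cinder.conf lvm iscsi_helper lioadm
    openstack-config --set /etc/cinder/cinder.conf lvm iscsi_ip_address 192.168.1.137
    systemctl restart openstack-cinder-volume
    # Compute Nodes then attach volumes as plain iSCSI initiators
    iscsiadm -m session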

VRRP four nodes setup on RDO Liberty (CentOS 7.1)

The sample below demonstrates uninterrupted access, provided via an HA Neutron router, to cloud VMs running on the Compute Node while the two Network Nodes swap MASTER and BACKUP roles (as members of a keepalived pair).
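A quick way to watch the pair at work (the router name and IDs are placeholders):

    # Create an HA router and see which L3 agent holds the MASTER role
    neutron router-create --ha True RouterHA
    neutron l3-agent-list-hosting-router RouterHA
    # On either Network Node, keepalived's VRRP interface lives in the namespace
    ip netns exec qrouter-<router-id> ip addr | grep ha-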

RDO Liberty Setup for three Nodes Controller&Network&Compute (ML2&OVS&VXLAN) on CentOS 7.1

As advertised officially, in addition to the comprehensive OpenStack services, libraries, and clients, this release also provides Packstack, a simple installer for proof-of-concept installations as small as a single all-in-one box, and RDO Manager, an OpenStack deployment and management tool for production environments based on the OpenStack TripleO project.
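The proof-of-concept path really is that short; a minimal sketch on a fresh CentOS 7.1 box:

    # All-in-one PoC install
    sudo yum -y install openstack-packstack
    packstack --allinone
    # For multi-node layouts, generate an answer file and edit it instead
    packstack --gen-answer-file=answer.txt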

RDO Liberty (RC2) DVR Neutron workflow on CentOS 7.1

This post focuses on verifying the classic schema of Neutron workflow between the qrouter and fip namespaces on Compute Nodes with distributed routers. The OpenStack release tested was RDO Liberty RC2.
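Verification largely comes down to inspecting the namespaces Neutron builds on each Compute Node (IDs below are placeholders):

    # Both namespaces appear on a Compute Node hosting VMs with floating IPs
    ip netns
    # qrouter-<router-id>
    # fip-<external-net-id>
    # Policy routing inside qrouter hands floating-IP traffic to the fip namespace
    ip netns exec qrouter-<router-id> ip rule show
    ip netns exec fip-<external-net-id> ip route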

Testing multiple external networks with a single L3 agent on RDO Liberty, per Lars Kellogg-Stedman

The following is intended to test, in a multi-node environment, "Multiple external networks with a single L3 agent" by Lars Kellogg-Stedman. The current post also attempts to analyze and understand how traffic to/from an external network flows through br-int when provider external networks are involved.
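Lars's recipe boils down to un-tying the L3 agent from a single external bridge; a hedged sketch with illustrative physnet and bridge names:

    # Liberty-era l3_agent.ini: no fixed external bridge or network
    openstack-config --set /etc/neutron/l3_agent.ini DEFAULT external_network_bridge ""
    openstack-config --set /etc/neutron/l3_agent.ini DEFAULT gateway_external_network_id ""
    # Map each flat provider network to its own bridge for the OVS agent
    openstack-config --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs bridge_mappings physnet1:br-ex1,physnet2:br-ex2
    systemctl restart neutron-l3-agent neutron-openvswitch-agent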

Can neutron-metadata-proxy co-exist in the qrouter and qdhcp namespaces at the same time?

This posting addresses a question asked at ask.openstack.org: can metadata co-exist in the qrouter and qdhcp namespaces at the same time, so that LANs with no routers involved can still access metadata?
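The short answer hinges on one DHCP-agent option; a minimal sketch (namespace IDs are placeholders):

    # Let qdhcp serve metadata to isolated LANs with no router attached
    openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
    systemctl restart neutron-dhcp-agent
    # Both proxies can then be seen listening in their own namespaces
    ip netns exec qdhcp-<net-id> netstat -lntp | grep ':80'
    ip netns exec qrouter-<router-id> netstat -lntp | grep 9697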

RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute on CentOS 7.1

The schema suggested by Benjamin Schmaus for RDO Juno in http://schmaustech.blogspot.com/2014/12/configuring-dvr-in-o... generates several errors on Liberty, which can be tracked down in reasonable time and finally brought to success, but it silently fails on RDO Kilo, taking away the VXLAN tunnels for good. The posting below is intended to address this issue.
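For context, these are the DVR knobs such a schema has to touch; file paths follow Kilo-era RDO packaging and may differ on other releases:

    # Controller: new routers are distributed by default
    openstack-config --set /etc/neutron/neutron.conf DEFAULT router_distributed True
    # Network Node L3 agent: central SNAT for VMs without floating IPs
    openstack-config --set /etc/neutron/l3_agent.ini DEFAULT agent_mode dvr_snat
    # Compute Node L3 agents: local routing
    openstack-config --set /etc/neutron/l3_agent.ini DEFAULT agent_mode dvr
    # Every OVS agent: DVR requires l2population and distributed routing
    openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent enable_distributed_routing True
    openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini agent l2_population True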

RDO Liberty DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

Neutron DVR implements the so-called fip-namespace on every Compute Node where VMs are running, so that VMs with floating IPs can forward traffic to the external network without sending and receiving external data through the VXLAN or GRE tunnels connecting them to the Network Node (North-South routing), which is a single point of failure for L3 routing in the traditional configuration.
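The bypass is visible on any Compute Node: the fg- port of the fip namespace plugs straight into the external bridge, so floating-IP traffic never enters a tunnel (IDs are placeholders):

    # The fip namespace's fg- interface carries an external subnet address
    ip netns exec fip-<external-net-id> ip addr show
    # ...and is wired into br-ex, not br-tun
    ovs-vsctl show | grep -A 3 br-ex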

RDO Liberty (beta) Setup for three VM Nodes (Controller+Network+Compute) on CentOS 7.1

RDO Liberty (beta) passed a three-node deployment test: Controller+Network+Compute, ML2&OVS&VXLAN configuration. RH, however, is mainly focused on RDO-Manager-based Liberty deployments. I don't have a desktop able to run six VMs at a time; I truly believe that requires an 8-core Intel CPU like the Intel® Xeon® Processor E5-2690. On a 4-core CPU like an i7 it's just impossible . . .

RDO Juno DVR Deployment (Controller/Network)&Compute&Compute on CentOS 7.1

Neutron DVR implements the fip-namespace on every Compute Node where the VMs are running, so VMs with floating IPs can forward traffic to the external network without routing it via the Network Node. It also implements L3 routers across the Compute Nodes, so tenants' intra-VM communication occurs without Network Node involvement. Neutron DVR provides the legacy SNAT behavior . . .
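The legacy SNAT piece stays centralized: on the Network Node it shows up as a dedicated namespace (a quick check, IDs are placeholders):

    # One snat- namespace per distributed router handles default SNAT
    ip netns | grep snat
    ip netns exec snat-<router-id> iptables -t nat -S | grep SNAT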

Once again: RDO Kilo Setup for 3 Fedora 22 Nodes Controller+Network+Compute (ML2&OVS&VXLAN)

After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure for installing RDO Kilo on F22 changed significantly. The steps required on the Controller follow below.

CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22

The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and... on RDO Kilo installed on Fedora 22. After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure for installing RDO Kilo on F22 changed significantly. Details follow below.
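The pinning itself is driven by nova.conf plus flavor extra specs; a minimal sketch where the reserved pCPU range and flavor sizing are illustrative:

    # Compute node: hand pCPUs 4-7 to pinned guests
    openstack-config --set /etc/nova/nova.conf DEFAULT vcpu_pin_set 4-7
    systemctl restart openstack-nova-compute
    # Flavor whose instances get dedicated pCPUs on a single NUMA node
    # (NUMATopologyFilter must be present in scheduler_default_filters)
    nova flavor-create m1.pinned auto 2048 20 2
    nova flavor-key m1.pinned set hw:cpu_policy=dedicated hw:numa_nodes=1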

CPU Pinning and NUMA Topology on RDO Kilo upgraded via qemu-kvm-ev-2.1.2 on CentOS 7.1

The recent CentOS 7.X build qemu-kvm-ev-2.1.2-23.el7.1 enables CPU pinning and NUMA topology for RDO Kilo on CentOS 7.1. The qemu-kvm upgrade is supposed to be done as a post-installation procedure. The final target is to launch a Nova instance with CPU pinning on an i7 4790 Haswell CPU box.
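A quick post-upgrade sanity check (a sketch of the obvious probes):

    # Confirm the upgraded hypervisor bits are in place
    rpm -qa | grep qemu-kvm-ev
    systemctl restart libvirtd openstack-nova-compute
    # Inspect the NUMA topology libvirt will report to Nova
    virsh capabilities | grep -E 'cell id|cpu id' | head
    numactl --hardware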

Set up Nova-Docker Driver with RDO Kilo on F22

The current post describes in detail a sequence of steps for working with a Nova-Docker driver built from the top commit of the master nova-docker branch; the stable/kilo branch is not supposed to be checked out before running `python setup.py install`. This requires several additional efforts . . .
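The build-and-switch sequence looks roughly like this (the driver path is the one nova-docker itself documents):

    # Build from the top of master - no stable/kilo checkout here
    git clone http://github.com/stackforge/nova-docker.git
    cd nova-docker
    python setup.py install
    # Switch nova-compute over to the Docker driver
    openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver
    systemctl restart openstack-nova-compute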

Set up Nova-Docker Driver with RDO Kilo on Fedora 22

The hackery below was tested multiple times for AIO installs via packstack; it provides completely functional Neutron services and allows creating Neutron routers, tenant networks, and external networks on a single box or virtual machine.
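Part of that hackery is teaching Glance to store Docker images; a hedged sketch (the sample image is arbitrary):

    # Allow the 'docker' container format in Glance
    openstack-config --set /etc/glance/glance-api.conf DEFAULT container_formats ami,ari,aki,bare,ovf,docker
    systemctl restart openstack-glance-api
    # Push any Docker image into Glance for Nova-Docker to boot
    docker pull rastasheep/ubuntu-sshd:14.04
    docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public True \
        --container-format docker --disk-format raw --name rastasheep/ubuntu-sshd:14.04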

Switching to Dashboard Spice Console in RDO Kilo on Fedora 22

This post follows up http://lxer.com/module/newswire/view/214893/index.html. The most recent `yum update` on F22 significantly improved network performance on cloud VMs (L2). Watching movies on a cloud F22 VM (with the MATE Desktop installed and functioning pretty smoothly), albeit without sound, refreshes Spice memories.
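Switching the dashboard console to Spice comes down to a handful of nova.conf settings; a sketch for a Kilo box where the proxy IP is illustrative:

    # Disable VNC, enable the Spice HTML5 console
    openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled False
    openstack-config --set /etc/nova/nova.conf spice enabled True
    openstack-config --set /etc/nova/nova.conf spice agent_enabled True
    openstack-config --set /etc/nova/nova.conf spice html5proxy_base_url http://192.168.1.47:6082/spice_auto.html
    openstack-config --set /etc/nova/nova.conf spice server_listen 0.0.0.0
    systemctl enable openstack-nova-spicehtml5proxy
    systemctl start openstack-nova-spicehtml5proxy
    systemctl restart openstack-nova-compute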

RDO Kilo Setup for three F22 VM Nodes Controller&Network&Compute (ML2&OVS&VXLAN)

Actually, a straightforward RDO Kilo install on Fedora 22 crashes due to a relatively simple puppet mistake. A workaround for this issue was recently suggested by Javier Pena. Start packstack for a multinode deployment as normal to get the files that require updates. After the first packstack crash, update /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb to include "F22" (in a fairly obvious place) on all deployment nodes.
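The edit itself is a one-liner per node; a hedged sed sketch (verify the exact release list in systemd.rb first, as it varies by openstack-puppet-modules build):

    # Add Fedora 22 to the releases the puppet systemd provider accepts
    sed -i 's/"20", "21"/"20", "21", "22"/' \
        /usr/share/ruby/vendor_ruby/puppet/provider/service/systemd.rb
    # Then re-run packstack with the same answer file
    packstack --answer-file=answer.txt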

How VMs access metadata via the qrouter namespace in OpenStack Kilo

This is actually an update, for Neutron on Kilo, of the original blog entry http://techbackground.blogspot.ie/2013/06/metadata-via-quant... which covered the Quantum implementation on Grizzly. From my standpoint, how VMs launched via Nova reach the nova-api metadata service (and get a proper response from it) is a frequent source of problems, due to a lack of understanding of Neutron's core architecture and concepts.
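The path under discussion can be probed end to end (IDs are placeholders):

    # From inside the VM: the well-known metadata address
    curl http://169.254.169.254/latest/meta-data/instance-id
    # On the node hosting the qrouter namespace: the NAT redirect and the proxy
    ip netns exec qrouter-<router-id> iptables -t nat -S | grep 169.254.169.254
    ip netns exec qrouter-<router-id> netstat -lntp | grep 9697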

Setup Nova-Docker Driver with RDO Kilo on Fedora 21

Set up RDO Kilo on Fedora 21 per https://www.rdoproject.org/Quickstart. The next step is to upgrade several Python packages via Fedora Rawhide, build the Nova-Docker driver, and switch openstack-nova-compute to run the Nova-Docker driver built from the stable/kilo branch of http://github.com/stackforge/nova-docker.git
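The stable/kilo build differs from the master-branch route above only in the checkout step; a sketch:

    # Build the Nova-Docker driver from stable/kilo this time
    git clone http://github.com/stackforge/nova-docker.git
    cd nova-docker
    git checkout -b kilo origin/stable/kilo
    python setup.py install
    systemctl restart openstack-nova-compute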
