Showing headlines posted by dba477

( 1 2 3 4 5 6 ... 19 ) Next »

RDO Ocata packstack Multi Node deployment via the most recent tested Ocata trunk

Finally I decided to verify a two-node ML2&OVS&VXLAN deployment via the "current-passed-ci" Ocata Delorean trunks: build the external and private networks and a neutron router, then launch an F24 cloud VM (nested KVM on the Compute node). I have to note that the answer file is not quite the same as it was in Newton times.
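The network setup mentioned above can be sketched with the python-openstackclient CLI roughly as follows; the names, physical network label and CIDRs are illustrative placeholders, not values from the post:

```shell
# External (provider) network and subnet -- adjust extnet/CIDR to your answer file
openstack network create --external --provider-network-type flat \
    --provider-physical-network extnet public
openstack subnet create --network public --no-dhcp \
    --allocation-pool start=192.169.142.150,end=192.169.142.200 \
    --gateway 192.169.142.1 --subnet-range 192.169.142.0/24 public_subnet

# Tenant private network and subnet
openstack network create private
openstack subnet create --network private --subnet-range 50.0.0.0/24 private_subnet

# Neutron router: gateway on the external net, interface on the private subnet
openstack router create RouterDemo
openstack router set --external-gateway public RouterDemo
openstack router add subnet RouterDemo private_subnet
```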

RDO Ocata packstack AIO deployment via the most recent tested trunk

The post below is just a proof of concept that packstack can be used for an RDO Ocata deployment from the same "current-passed-ci" Delorean Ocata trunk that TripleO QuickStart uses on a regular basis. Packstack appears to serve mostly as part of the testing pipeline that promotes trunk repositories to "current-passed-ci".
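A minimal sketch of such an AIO install, assuming the usual Delorean repo layout under trunk.rdoproject.org (verify the exact URLs before use):

```shell
# Point yum at the promoted Ocata trunk and its deps repo
sudo curl -o /etc/yum.repos.d/delorean.repo \
    https://trunk.rdoproject.org/centos7-ocata/current-passed-ci/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo \
    https://trunk.rdoproject.org/centos7-ocata/delorean-deps.repo

# Install packstack from the trunk and run an all-in-one deployment
sudo yum -y install openstack-packstack
packstack --allinone
```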

Switching to newly created project's private network running Fedora 24 && CentOS 7.3 Cloud VMs on RDO Ocata

This post is an immediate follow-up to the previous one, "Adding Second VNIC to running Fedora 24 && CentOS 7.3 Cloud VMs on RDO Ocata". I was forced to use the Nova && Neutron CLIs because python-openstackclient does not yet seem ready to replace the required CLI commands, for instance `nova interface-attach` and `nova interface-list`.
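The two Nova commands named above work roughly as sketched below; the network and VM names are placeholders:

```shell
# Look up the id of the target private network
NET_ID=$(neutron net-show private2 -f value -c id)

# Attach a new vNIC on that network to a running instance, then list its interfaces
nova interface-attach --net-id $NET_ID VF24Devs01
nova interface-list VF24Devs01
```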

Adding Second VNIC to Fedora 24 VM running in overcloud on RDO Ocata

The post below addresses one of the currently pending questions at ask.openstack.org: adding a second vNIC to a running cloud guest, where the new private IP and the vNIC's associated neutron port are located on a different private subnet than the one selected when this guest was originally booted.
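One way to do this with the Nova && Neutron CLIs is to pre-create a port on the second network and attach the instance to that port; `private2` and `VF24Devs01` are hypothetical names:

```shell
# Create a neutron port on the second private network
neutron port-create private2

# Note the port id printed above, then attach it to the running guest
nova interface-attach --port-id <port-id> VF24Devs01
```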

TripleO QuickStart Ocata branch Deployment with feature sets and nodes configuration (topology) separated

In general, an Ocata overcloud deployment consumes more memory than Newton; minimal memory requirements are highlighted below. A minor troubleshooting step was also taken several times right after overcloud deployment: the command `pcs resource cleanup` was issued after detecting resources that failed to start once the original deployment completed. This problem would disappear if the VIRTHOST (48 GB) allowed 8192 MB to be allocated to each PCS cluster controller. The sshuttle command line was also modified to provide access to the control plane and the external network from the workstation at the same time.
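The cleanup step mentioned above looks like this on a controller node (run as root or via sudo from the heat-admin account):

```shell
# Inspect which Pacemaker resources failed to start
sudo pcs status

# Reset failcounts and re-probe all resources
sudo pcs resource cleanup

# Verify everything comes back to Started state
sudo pcs status
```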

Remote ssh connection via sshuttle to Overcloud TripleO QuickStart VMs (Master branch)

The post below is an immediate follow-up to the recent "TripleO QuickStart Deployment with feature sets (topology) and nodes configuration separated". Just include the control plane 192.168.24.0/24 in the sshuttle command line that provides the connection to the external network. I also presume that the TripleO QS deployment has already been completed on the VIRTHOST (as described in the link mentioned above).
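The resulting sshuttle invocation looks roughly like this; the user name and external CIDR are assumptions, and 192.168.24.0/24 is the control plane from the post:

```shell
# Tunnel both the control plane and the overcloud external network
# through an ssh connection to the VIRTHOST
sshuttle -r stack@VIRTHOST 192.168.24.0/24 10.0.0.0/24
```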

TripleO QuickStart Deployment with feature sets (topology) and nodes configuration separated

Quoting the currently posted release notes: configuration files in general_config were separated into feature sets (specified with the --config argument) and nodes configuration (specified with the --nodes argument).
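With that split, a quickstart invocation takes the following shape; the particular YAML file names are examples and should be checked against your tripleo-quickstart checkout:

```shell
# Feature set and node topology are now passed separately
bash quickstart.sh \
    --config config/general_config/ha.yml \
    --nodes config/nodes/3ctlr_1comp.yml \
    $VIRTHOST
```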

TripleO QuickStart HA deployment of Master branch && python-openstackclient CLI ( vs Neutron CLI )

The post below attempts a TripleO QuickStart deployment based on the current pre-built images (as of 02/10/2017), creating the required OpenStack environment with the python-openstackclient CLI instead of the nova && neutron CLIs for setting up networks, routers, flavors, keypairs and security rules in the overcloud.
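Beyond networks and routers, the flavor, keypair and security-rule steps translate to python-openstackclient roughly as follows (all names are illustrative):

```shell
# Flavor for test instances
openstack flavor create --ram 2048 --disk 10 --vcpus 1 m1.small2

# Keypair for ssh access; save the private key locally
openstack keypair create oskeydev > oskeydev.pem
chmod 600 oskeydev.pem

# Allow ICMP and ssh in the default security group
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default
```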

TripleO QuickStart HA&&CEPH Deployment on Fedora 25 VIRTHOST

The most recent commits in https://github.com/openstack/tripleo-quickstart allow using a Fedora 25 Workstation (32 GB) as the target VIRTHOST for TripleO QuickStart HA deployments, benefiting from the most recent KVM virtualization features of QEMU (2.7.0) && libvirt (2.2.0) shipped with the latest Fedora release.

TripleO QuickStart functionality and recent commit move the undercloud deploy role to quickstart-extras for composability

This post covers TripleO QuickStart functionality after the recent commit Merge "move the undercloud deploy role to quickstart-extras for composability".

RDO Newton Instack-virt-setup deployment with routable control plane on CentOS 7.3

The following is an instack-virt-setup deployment that creates a routable control plane via a modified ~stack/undercloud.conf, setting 192.168.24.0/24 to serve this purpose. It also utilizes the RDO Newton "current-passed-ci" trunk and the corresponding TripleO QuickStart pre-built images, which stay in sync with the trunk as soon as they are built during CI.
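The undercloud.conf changes for the routable control plane look roughly like the fragment below; the option names are standard Newton undercloud settings, while the specific addresses (apart from the 192.168.24.0/24 range itself) and the interface name are assumptions:

```ini
# ~stack/undercloud.conf fragment -- routable ctlplane on 192.168.24.0/24
[DEFAULT]
local_ip = 192.168.24.1/24
network_gateway = 192.168.24.1
undercloud_public_vip = 192.168.24.2
undercloud_admin_vip = 192.168.24.3
local_interface = eth1
network_cidr = 192.168.24.0/24
masquerade_network = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
```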

RDO Newton Set up for three Nodes on CentOS 7.3

In the post below I intend to test packstack on RDO Newton performing a classic three-node deployment. If packstack succeeds, post-installation actions like VRRP or DVR setups might be committed as well. Please be advised that packstack on RDO Newton won't allow you to split the Controller and Storage nodes (unlike Mitaka).

Instack-virt-setup deployment via TripleO Quickstart prebuilt images on CentOS 7.2

Initiate a TripleO QuickStart deployment, log into the undercloud and pick up the files /etc/yum.repos.d/delorean.repo and /etc/yum.repos.d/delorean-deps.repo, which are associated with the RDO Newton trunk currently holding "current-passed-ci" status. Set the same Delorean repos on the VIRTHOST and the INSTACK VM.

Packstack install RDO Newton with Keystone API V2 on CentOS 7.2

The post below addresses multiple questions raised at ask.openstack.org regarding packstack AIO or multi-node RDO Newton installations on CentOS 7.2. The instructions explain how to use the RDO Newton "current-passed-ci" trunk for a packstack installation.

RDO Newton Overcloud HA&&CEPH deployment via instack-virt-setup on CentOS 7.2 VIRTHOST

Overcloud deployment with Ceph nodes via instack-virt-setup requires some additional repos to be installed on the VIRTHOST (32 GB) and on the INSTACK VM, as well as exporting DIB_YUM_REPO_CONF, referencing the Newton "current-passed-ci" Delorean trunks and CentOS-Ceph-Jewel.repo, in the stack user's shell on the INSTACK VM before building overcloud images prior to overcloud deployment. I am aware of the official tripleo-quickstart release 1.0.0 for RDO Newton . . .
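The export described above looks roughly like this in the stack user's shell; the repo file paths match the post, the image-build command is the standard Newton tripleoclient one:

```shell
# Make diskimage-builder pick up the trunk and Ceph Jewel repos
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo \
 /etc/yum.repos.d/delorean-deps.repo \
 /etc/yum.repos.d/CentOS-Ceph-Jewel.repo"

# Build the overcloud images before deploying
openstack overcloud image build --all
```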

Verifying the switch to routable ctlplane 192.168.24.0/24 on TripleO Quickstart

Create a new VM, centos72vm, on the VIRTHOST with one vNIC. Set up the vNIC by attaching it to the "overcloud" libvirt subnet (bridge brovc). No attachment to the external 192.168.23.0/24 (br-ext) subnet is supposed to be made, as was required with the non-routable 192.0.2.0/24.

Hackery Overcloud HA&&Ceph Deployment RDO Newton via TripleO QuickStart

The hack of a standard tripleo-heat-templates file shown below was performed to avoid hitting the "fewer OSDs than replicas" cluster status. The corresponding template was discovered and updated before running the overcloud deployment.

RDO Newton Overcloud deployment utilizing Ceph Cluster as Glance && Cinder backend via TripleO QuickStart

The following is a sample overcloud deployment of 1xController + 2xCompute + 3xCeph nodes (cluster) via TripleO QuickStart. The number of Ceph nodes is supposed to be at least 3, because the automatically generated file /etc/ceph/ceph.conf contains the entry "osd_pool_default_size = 3"; fewer nodes would hit the "fewer OSDs than replicas" cluster status.
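The replication setting above can be checked on any deployed Ceph node; these are standard Ceph commands, run from a node with admin access to the cluster:

```shell
# Confirm the generated default pool replication size
grep osd_pool_default_size /etc/ceph/ceph.conf   # expect: osd_pool_default_size = 3

# Overall cluster health -- HEALTH_OK requires enough OSDs for the replica count
sudo ceph -s
```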

RDO Newton Overcloud HA deployment via TripleO QuickStart

Finally, the overcloud deployment starts initializing the Mistral workflow in the QuickStart environment. Allocating 7 GB for the PCS HA controllers and 6.7 GB for each overcloud compute node (1 VCPU by default for any node running in the overcloud) seems to be safe for passing phases 5.X of the overcloud deployment with QuickStart.

KSM as instack-virt-setup accelerator for HA Overcloud Deployment (RDO Newton) with two Compute Nodes

Because I was regularly getting errors in /var/log/mistral/executor.log when attempting to deploy the overcloud via TripleO QuickStart, I was wondering how much KSM && KSMTUNED could help achieve the goal of running 5 overcloud VMs (6.7 GB, 1 VCPU each) plus the "instack VM" undercloud (4 VCPUs, 8 GB RAM, 4 GB swap file) on a VIRTHOST with 32 GB RAM total.
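Enabling KSM on the VIRTHOST is a couple of systemd commands (standard Fedora/CentOS service names); the sysfs counter shows whether pages are actually being merged:

```shell
# Turn on kernel samepage merging and its tuning daemon
sudo systemctl enable --now ksm ksmtuned

# Non-zero once KSM starts deduplicating guest memory pages
cat /sys/kernel/mm/ksm/pages_sharing
```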
