Perform an HA overcloud deployment from the TripleO QuickStart Master (Pike) branch via `bash quickstart.sh -R master --config config/general_config/pacemaker.yml --nodes config/nodes/3ctlr_2comp.yml $VIRTHOST`, which avoids waiting for a commit to appear in the Delorean RDO trunk for Master.
This test was inspired by the recent message "[rdo-list] OVS 2.7 on rdo"; however, it was done on the stable Ocata branch 15.0.6 (versus master in the thread mentioned above). After the OVS upgrade it is still possible to launch a completely functional VM in the overcloud; obviously, no ovn* packages were installed. The same step on Master awaits fully functional TripleO QuickStart deployment of the Master branch.
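A minimal sketch of the per-node upgrade step, assuming the OVS 2.7 package is already available from an enabled repo (the exact package source is not shown here and will vary):

```bash
# Hedged sketch: assumes an enabled repo already provides openvswitch 2.7
sudo systemctl stop openvswitch
sudo yum -y upgrade openvswitch
sudo systemctl start openvswitch
ovs-vsctl show            # verify bridges/ports survived the upgrade
ovs-vsctl --version       # confirm the running version is 2.7.x
```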
Finally, I decided to verify a two-node ML2&OVS&VXLAN deployment via the "current-passed-ci" Ocata Delorean trunk: build the external and private networks and a neutron router, then launch an F24 cloud VM (nested KVM on the Compute node). Note that the answer file is not quite the same as it was in Newton times.
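A hedged sketch of the network setup with the neutron CLI of that era (names, CIDRs, and the physical network label are illustrative, not taken from the actual answer file):

```bash
# External (provider) network and subnet -- values are assumptions
neutron net-create external --router:external \
    --provider:network_type flat --provider:physical_network extnet
neutron subnet-create external 192.168.1.0/24 --name public_subnet \
    --disable-dhcp --gateway 192.168.1.1 \
    --allocation-pool start=192.168.1.100,end=192.168.1.150
# Tenant network, subnet, and router
neutron net-create private
neutron subnet-create private 50.0.0.0/24 --name private_subnet
neutron router-create RouterDemo
neutron router-gateway-set RouterDemo external
neutron router-interface-add RouterDemo private_subnet
```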
The post following below is just a proof of concept that packstack can be used for an RDO Ocata deployment using the same current-passed-ci Delorean Ocata trunk that TripleO QuickStart uses on a regular basis. Packstack appears to serve mostly as part of the testing pipeline that promotes trunk repositories to "current-passed-ci".
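A minimal sketch of the repo setup, assuming the usual Delorean trunk layout (verify the URLs against trunk.rdoproject.org before use):

```bash
# Point yum at the promoted Ocata trunk (URLs assumed from the standard layout)
sudo curl -o /etc/yum.repos.d/delorean.repo \
    https://trunk.rdoproject.org/centos7-ocata/current-passed-ci/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo \
    https://trunk.rdoproject.org/centos7-ocata/delorean-deps.repo
sudo yum -y install openstack-packstack
```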
This post is an immediate follow-up to the previous one, "Adding Second VNIC to running Fedora 24 && CentOS 7.3 Cloud VMs on RDO Ocata". I was forced to use the Nova && Neutron CLI because python-openstackclient does not yet seem ready to replace the required commands, for instance `nova interface-attach` and `nova interface-list`.
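For reference, the attach-and-verify sequence looks roughly like this (the network and server names are placeholders):

```bash
# Create a port on the second private network, then attach it to the running VM
neutron port-create private2                     # 'private2' is a placeholder name
nova interface-attach --port-id <PORT_ID> <SERVER_NAME_OR_ID>
nova interface-list <SERVER_NAME_OR_ID>          # confirm both VNICs are attached
```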
The posting below addresses one of the currently pending questions at ask.openstack.org regarding adding a second VNIC to a running cloud guest, where the private IP and the VNIC associated with the neutron port are located on a different private subnet from the one selected when the guest was originally booted.
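Inside the guest, the new interface still has to be brought up; a hedged sketch, assuming the second VNIC appears as eth1:

```bash
# Run inside the guest after the attach; device name eth1 is an assumption
sudo dhclient eth1
ip addr show eth1      # should now carry an IP from the second private subnet
ip route               # check that the default route was not taken over by eth1
```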
In general, an Ocata overcloud deployment consumes more memory than Newton; minimal memory requirements are highlighted below. A minor troubleshooting step was also undertaken several times right after overcloud deployment: the command `pcs resource cleanup` was issued after detecting resources that failed to start once the original deployment completed. This problem would go away if the VIRTHOST (48 GB) allowed allocating 8192 MB to each of the PCS cluster's controllers. The sshuttle command line was also modified to provide access to the control plane and external network from the workstation at the same time.
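The troubleshooting step itself is short and is run on any controller:

```bash
# Inspect the cluster, then clear failed-resource state so pacemaker retries
sudo pcs status
sudo pcs resource cleanup
sudo pcs status        # re-check: previously failed resources should recover
```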
The posting below is an immediate follow-up to the recent "TripleO QuickStart Deployment with feature sets (topology) and nodes configuration separated". Just include the control plane 192.168.24.0/24 in the sshuttle command line that provides the connection to the external network. I also presume that the TripleO QS deployment has already been committed on the VIRTHOST (as described in the link mentioned above).
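A hedged sketch of the modified sshuttle invocation (the external CIDR 192.168.23.0/24 is taken from later in this series; your VIRTHOST address will differ):

```bash
# Tunnel both the external network and the control plane through the VIRTHOST
sshuttle -r root@$VIRTHOST 192.168.23.0/24 192.168.24.0/24 -v
```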
Quoting the currently posted release notes: configuration files in general_config were separated into feature sets (to be specified with the --config argument) and nodes configuration (to be specified with the --nodes argument).
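In practice the two arguments combine as in the HA command at the top of this section:

```bash
bash quickstart.sh --config config/general_config/pacemaker.yml \
                   --nodes config/nodes/3ctlr_2comp.yml $VIRTHOST
```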
The post following below attempts a TripleO QuickStart deployment based on the current pre-built images (as of 02/10/2017) and creates the required OpenStack environment using the python-openstackclient CLI instead of the nova && neutron CLIs for setting up networks, routers, flavors, keypairs, and security rules in the overcloud.
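A hedged sketch of the openstackclient equivalents (names, CIDRs, the external network "public", and port numbers are all illustrative):

```bash
# Networks and router via python-openstackclient instead of the neutron CLI
openstack network create private
openstack subnet create private_subnet --network private --subnet-range 50.0.0.0/24
openstack router create RouterDemo
openstack router set RouterDemo --external-gateway public
openstack router add subnet RouterDemo private_subnet
# Flavor, keypair, and security rules
openstack flavor create --ram 2048 --disk 20 --vcpus 1 m1.small2
openstack keypair create oskeydemo > oskeydemo.pem && chmod 600 oskeydemo.pem
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default
```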
The most recent commits in https://github.com/openstack/tripleo-quickstart make it possible to use a Fedora 25 Workstation (32 GB) as the target VIRTHOST for TripleO QuickStart HA deployments and to benefit from the most recent KVM virtualization features of QEMU (2.7.0) && Libvirt (2.2.0) shipped with the latest Fedora release.
A look at TripleO QuickStart functionality following the recent commit Merge "move the undercloud deploy role to quickstart-extras for composability".
Following below is an instack-virt-setup deployment creating a routable control plane via a modified ~stack/undercloud.conf, with 192.168.24.0/24 set to serve this purpose. It also utilizes the RDO Newton "current-passed-ci" trunk and the corresponding TripleO QuickStart pre-built images, which are in sync with the trunk as soon as they are built during CI.
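A hedged sketch of the relevant ~stack/undercloud.conf entries (all values are assumptions following the 192.168.24.0/24 plan; the interface name in particular will differ per setup):

```ini
[DEFAULT]
local_ip = 192.168.24.1/24
network_gateway = 192.168.24.1
undercloud_public_vip = 192.168.24.2
undercloud_admin_vip = 192.168.24.3
local_interface = eth1
masquerade_network = 192.168.24.0/24
network_cidr = 192.168.24.0/24
dhcp_start = 192.168.24.10
dhcp_end = 192.168.24.30
inspection_iprange = 192.168.24.100,192.168.24.120
```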
In the posting below I intend to test packstack on RDO Newton with a classic three-node deployment. If packstack succeeds, post-installation actions like VRRP or DVR setups might be committed as well. Please be advised that packstack on RDO Newton won't allow you to split the Controller and Storage nodes (unlike Mitaka).
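A hedged sketch of driving the three-node split through the answer file (host IPs are placeholders):

```bash
packstack --gen-answer-file=answer-3node.txt
# Edit the generated file; the relevant keys look like:
#   CONFIG_CONTROLLER_HOST=192.169.142.127
#   CONFIG_COMPUTE_HOSTS=192.169.142.137
#   CONFIG_NETWORK_HOSTS=192.169.142.147
packstack --answer-file=answer-3node.txt
```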
Initiate a TripleO QuickStart deployment, log into the undercloud, and pick up the files /etc/yum.repos.d/delorean.repo and /etc/yum.repos.d/delorean-deps.repo, which are associated with the RDO Newton trunk currently at status "current-passed-ci". Set up the same Delorean repos on the VIRTHOST and the INSTACK VM.
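A minimal sketch of propagating the repos (the hostname "undercloud" is a placeholder for your undercloud address):

```bash
# On the VIRTHOST and on the INSTACK VM: copy the promoted repos from the undercloud
sudo scp stack@undercloud:/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/
sudo scp stack@undercloud:/etc/yum.repos.d/delorean-deps.repo /etc/yum.repos.d/
sudo yum clean all && sudo yum repolist   # confirm the delorean repos are active
```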
The posting below addresses multiple questions raised at ask.openstack.org regarding packstack AIO or multi-node RDO Newton installations on CentOS 7.2. The instructions below explain how to use the RDO Newton "current-passed-ci" trunk for a packstack installation.
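The repo setup mirrors the Ocata case shown earlier, only with the Newton trunk path (URLs assumed from the standard layout); the AIO run itself is then a single command:

```bash
sudo curl -o /etc/yum.repos.d/delorean.repo \
    https://trunk.rdoproject.org/centos7-newton/current-passed-ci/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo \
    https://trunk.rdoproject.org/centos7-newton/delorean-deps.repo
sudo yum -y install openstack-packstack
packstack --allinone
```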
Overcloud deployment with Ceph nodes via instack-virt-setup requires some additional repos to be installed on the VIRTHOST (32 GB) and on the INSTACK VM, as well as exporting DIB_YUM_REPO_CONF, referencing the Delorean Newton "current-passed-ci" trunk and CentOS-Ceph-Jewel.repo, in the stack user's shell on the INSTACK VM before building overcloud images prior to overcloud deployment. I am aware of the official tripleo-quickstart release 1.0.0 for RDO Newton . . .
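The export itself, run in the stack user's shell before image building (repo file names are assumed to match those set up earlier):

```bash
# Make diskimage-builder bake the same repos into the overcloud images
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo /etc/yum.repos.d/delorean-deps.repo /etc/yum.repos.d/CentOS-Ceph-Jewel.repo"
openstack overcloud image build --all
```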
Create a new VM centos72vm on the VIRTHOST with one VNIC. Set up the VNIC by attaching it to the "overcloud" libvirt subnet (bridge brovc). No attachment to the external 192.168.23.0/24 (br-ext) subnet is supposed to be made, as was required with the non-routable 192.0.2.0/24.
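A hedged virt-install sketch (disk size, RAM, and the install source are assumptions):

```bash
# One VNIC only, on the 'overcloud' libvirt network (bridge brovc)
sudo virt-install --name centos72vm --ram 4096 --vcpus 2 \
    --disk path=/var/lib/libvirt/images/centos72vm.qcow2,size=20 \
    --network network=overcloud \
    --location http://mirror.centos.org/centos/7/os/x86_64/ \
    --os-variant rhel7 --graphics vnc
```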
The hack of a standard tripleo-heat-templates file below was performed to avoid hitting the "OSDs fewer than Replicas" cluster status. The corresponding template was discovered and updated before running the overcloud deployment.
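Rather than patching the template in place, the same effect can usually be achieved with an extra environment file; a hedged sketch (file name hypothetical), lowering the replica count to match a small OSD count:

```yaml
# storage-tuning.yaml (hypothetical name) -- pass to the deploy command with -e
parameter_defaults:
  ExtraConfig:
    ceph::profile::params::osd_pool_default_size: 2
```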
The following is a sample overcloud deployment of 1xController + 2xCompute + 3xCeph nodes (cluster) via TripleO QuickStart. The number of Ceph nodes is supposed to be at least 3, because the automatically generated file /etc/ceph/ceph.conf contains the entry "osd_pool_default_size = 3"; fewer nodes would hit the "Fewer OSDs than Replicas" cluster status.
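After deployment, the constraint can be verified on any Ceph node:

```bash
# Replica count baked into the generated config
grep osd_pool_default_size /etc/ceph/ceph.conf
# Cluster health; with 3 OSD nodes this should reach HEALTH_OK
sudo ceph -s
```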