In general, an Ocata overcloud deployment consumes more memory than Newton; minimal memory requirements are highlighted below. A minor troubleshooting step was also undertaken several times right after overcloud deployment: the command `pcs resource cleanup` was issued after detecting resources that failed to start once the original deployment completed. The problem would be gone if the VIRTHOST (48 GB) allowed allocating 8192 MB to each Controller in the PCS cluster. The sshuttle command line was also modified to provide access to the control plane and external networks from the workstation at the same time.
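The cleanup step mentioned above can be sketched as follows; run it as root on one of the controllers (the resource name in the last line is a hypothetical example):

```shell
# Inspect the cluster and look for resources in Stopped/FAILED state
pcs status
# Clear fail counts and re-probe all resources so pacemaker retries them
pcs resource cleanup
# Cleanup can also target a single resource, e.g. a hypothetical clone:
pcs resource cleanup rabbitmq-clone
```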
The posting below is an immediate follow-up to the recent "TripleO QuickStart Deployment with feature sets (topology) and nodes configuration separated". Just include the control plane 192.168.24.0/24 in the sshuttle command line that provides the connection to the external network. I also presume that the TripleO QS deployment has been committed on the VIRTHOST (as described in the link mentioned above).
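A minimal sketch of the resulting sshuttle invocation, assuming the undercloud is reachable as stack@undercloud and the external network is 10.0.0.0/24 (both are assumptions; adjust to your environment):

```shell
# Route traffic to both the external and the control-plane networks
# through the undercloud node in a single sshuttle session
sshuttle -r stack@undercloud 10.0.0.0/24 192.168.24.0/24
```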
Quoting the currently posted release notes: configuration files in general_config were separated into feature sets (to be specified with the --config argument) and nodes configuration (to be specified with the --nodes argument).
The post below attempts a TripleO QuickStart deployment based on the current pre-built images (as of 02/10/2017), creating the required OpenStack environment with the python-openstackclient CLI instead of the nova and neutron CLIs for setting up networks, routers, flavors, keypairs, and security rules in the overcloud.
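The python-openstackclient equivalents of the old nova/neutron calls could look like this sketch (all names, CIDRs, and flavor sizes are illustrative; source overcloudrc first):

```shell
# Tenant network, subnet, and router
openstack network create private
openstack subnet create private_subnet --network private \
    --subnet-range 10.10.10.0/24
openstack router create RouterDemo
openstack router add subnet RouterDemo private_subnet
# Flavor and keypair
openstack flavor create --ram 2048 --disk 20 --vcpus 1 m1.small2
openstack keypair create oskeydemo > oskeydemo.pem
# Security group rules: allow ping and ssh
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default
```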
The most recent commits to https://github.com/openstack/tripleo-quickstart allow using a Fedora 25 Workstation (32 GB) as the target VIRTHOST for TripleO QuickStart HA deployments, benefiting from the most recent KVM virtualization features of QEMU (2.7.0) and libvirt (2.2.0) coming with the latest Fedora release.
This post covers TripleO QuickStart functionality and the recent commit Merge "move the undercloud deploy role to quickstart-extras for composability".
The following is an instack-virt-setup deployment creating a routable control plane via a modified ~stack/undercloud.conf, setting 192.168.24.0/24 to serve this purpose. It also utilizes the RDO Newton "current-passed-ci" trunk and the corresponding TripleO QuickStart pre-built images, which stay in sync with the trunk as soon as they are built during CI.
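The relevant ~stack/undercloud.conf fragment could look like the sketch below; option names follow the Newton-era undercloud configuration, and the exact addresses within 192.168.24.0/24 are an example:

```
[DEFAULT]
local_ip = 192.168.24.1/24
network_gateway = 192.168.24.1
undercloud_public_vip = 192.168.24.2
undercloud_admin_vip = 192.168.24.3
local_interface = eth1
network_cidr = 192.168.24.0/24
masquerade_network = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
```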
In the post below I intend to test packstack on RDO Newton to perform a classic three-node deployment. If packstack succeeds, post-installation actions such as VRRP or DVR setups might be committed as well. Please be advised that packstack on RDO Newton won't allow you to split the Controller and Storage nodes (unlike Mitaka).
Initiate the TripleO QuickStart deployment, log into the undercloud, and pick up the files /etc/yum.repos.d/delorean.repo and /etc/yum.repos.d/delorean-deps.repo, which are associated with the RDO Newton trunk currently at status "current-passed-ci". Set up the same delorean repos on the VIRTHOST and on the INSTACK VM.
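Propagating the repos can be sketched as below (run on the undercloud; the hostname VIRTHOST is a placeholder for your actual host):

```shell
# Copy the delorean repo files picked up on the undercloud to the VIRTHOST,
# so both machines track the same "current-passed-ci" trunk
scp /etc/yum.repos.d/delorean.repo \
    /etc/yum.repos.d/delorean-deps.repo \
    root@VIRTHOST:/etc/yum.repos.d/
```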
The posting below addresses multiple questions raised at ask.openstack.org regarding packstack AIO or multi-node RDO Newton installations on CentOS 7.2. The instructions below explain how to use the RDO Newton "current-passed-ci" trunk for a packstack installation.
Overcloud deployment of Ceph nodes via instack-virt-setup requires some additional repos to be installed on the VIRTHOST (32 GB) and on the INSTACK VM, as well as exporting DIB_YUM_REPO_CONF referencing the delorean Newton "current-passed-ci" trunk and CentOS-Ceph-Jewel.repo in the stack user's shell on the INSTACK VM before building overcloud images prior to overcloud deployment. I am aware of the official tripleo-quickstart release 1.0.0 for RDO Newton . . .
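The export in the stack user's shell could look like this sketch (the repo paths are the conventional locations; verify them on your system before running `openstack overcloud image build --all`):

```shell
# Point diskimage-builder at the same repos used on the undercloud
export DIB_YUM_REPO_CONF="/etc/yum.repos.d/delorean.repo \
/etc/yum.repos.d/delorean-deps.repo \
/etc/yum.repos.d/CentOS-Ceph-Jewel.repo"
```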
Create a new VM centos72vm on the VIRTHOST with one VNIC. Set up the VNIC by attaching it to the "overcloud" libvirt subnet (bridge brovc). No attachment to the external 192.168.23.0/24 (br-ext) subnet needs to be made, as was required with the non-routable 192.0.2.0/24.
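Attaching the VNIC can be sketched with virsh as below (assuming "overcloud" is the libvirt network name backed by bridge brovc, as in the instack-virt-setup layout):

```shell
# Attach centos72vm's NIC to the "overcloud" libvirt network (bridge brovc);
# --config makes the change persistent across VM restarts
virsh attach-interface centos72vm network overcloud --model virtio --config
```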
The hack of a standard tripleo-heat-templates file below was performed to avoid hitting the "OSDs fewer than Replicas" cluster status. The corresponding template was discovered and updated before running the overcloud deployment.
The following is a sample overcloud deployment of 1 Controller + 2 Compute + 3 Ceph nodes (cluster) via TripleO QuickStart. The number of Ceph nodes should be at least 3, because the automatically generated file /etc/ceph/ceph.conf contains the entry "osd_pool_default_size = 3"; fewer nodes would hit the "Fewer OSDs than Replicas" cluster status.
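The generated /etc/ceph/ceph.conf carries the replica count that drives this requirement; the fragment below shows only the relevant entry under its usual section:

```
[global]
osd_pool_default_size = 3
```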
Finally, the overcloud deployment starts initializing the mistral workflow in the QuickStart environment. A memory allocation of 7 GB for the PCS HA Controllers and 6.7 GB for each Compute overcloud node (1 VCPU by default for any node running in the overcloud) seems to be safe for passing phases 5.X of an overcloud deployment with QuickStart.
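In tripleo-quickstart terms these allocations map to the usual memory variables; the excerpt below is a sketch (variable names follow tripleo-quickstart conventions, values taken from the text above):

```yaml
# Excerpt of a QuickStart configuration file
control_memory: 7168   # 7 GB per PCS HA controller
compute_memory: 6860   # ~6.7 GB per compute node
```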
Due to getting errors in /var/log/mistral/executor.log on a regular basis when attempting to deploy the overcloud via TripleO QuickStart, I was wondering how much KSM and KSMTUNED could assist in achieving the goal of running 5 VMs (6.7 GB, 1 VCPU each) in the overcloud plus the "instack" undercloud VM (4 VCPUs, 8 GB RAM, and a 4 GB swap file) on a VIRTHOST with 32 GB RAM total.
The posting below is supposed to demonstrate a KSM implementation on QuickStart, providing significant relief on a 32 GB VIRTHOST versus much the same deployment described in the previous draft.
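Enabling KSM on the VIRTHOST can be sketched as below; the sysfs paths are the standard kernel interface, while the service names assume the qemu-kvm packaging that ships ksm/ksmtuned units:

```shell
# Start KSM and the adaptive tuning daemon on the VIRTHOST
sudo systemctl enable --now ksm ksmtuned
# Verify KSM is active and inspect how much guest RAM is being merged
cat /sys/kernel/mm/ksm/run            # 1 when KSM is running
cat /sys/kernel/mm/ksm/pages_sharing  # pages deduplicated across guests
```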
The draft below may be considered a POC awaiting a release of TripleO QuickStart with flexible templates managed by Ansible and KSM patching.
The following is a verification of the status of the Newton DLRN consistent trunks for TripleO undercloud/overcloud deployment, based on packages currently being built from upstream git branches.
Set up F24 as a workstation for "TripleO instack-virt-setup overcloud/undercloud deployment to VIRTHOST" via a trusted ssh connection. This setup works much more stably than configuring FoxyProxy on the VIRTHOST running "instack" (actually the undercloud VM), which hosts the heat stack "overcloud" and several overcloud Controller and Compute VMs.
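The trusted connection from the workstation can be sketched as below (VIRTHOST is a placeholder hostname; skip key generation if ~/.ssh/id_rsa already exists):

```shell
# On the F24 workstation: set up key-based (trusted) ssh to the VIRTHOST
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@VIRTHOST
# Log in with X forwarding, for example, instead of running FoxyProxy there
ssh -X root@VIRTHOST
```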