Right after setting up an Ubuntu 20.04 (Focal Fossa) instance on bare metal, run the following commands to set up bridge br1 linked to the physical interface enp2s0, which served as the normal connection to the office LAN during installation.
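On Ubuntu 20.04 the bridge is defined via netplan rather than /etc/network/interfaces. A minimal sketch, assuming the interface name enp2s0 from above and DHCP on the office LAN; the file name and scratch path here are illustrative, and on a real host the file belongs under /etc/netplan/ (root required):

```shell
# Sketch of a netplan bridge definition for br1 over enp2s0.
# Written to a scratch file for illustration; on a real host place it
# under /etc/netplan/ and run "sudo netplan apply".
cat > ./01-br1.yaml <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: false
  bridges:
    br1:
      interfaces: [enp2s0]
      dhcp4: true
EOF
```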
The sequence of steps for bridge network configuration on a native Debian Buster 10.3 host seemed to me a bit different from the manuals currently available on the Net. Specifically, I undertook some additional steps to fix an error with Radeon kernel mode setting; configuring the bridge to the physical LAN is also supposed to be done differently from how it works on LXDE 4.
We would start with the installs proposed for Ubuntu Focal Fossa and create the file /etc/network/interfaces exactly as recommended in the official Debian Buster manual, to be able to launch guests reachable across the whole office LAN.
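The /etc/network/interfaces bridge stanza from the Debian Buster manual can be sketched as follows; the interface name enp2s0 is carried over from the setup above, and the stanza is written to a scratch file here since touching /etc/network/interfaces requires root:

```shell
# Bridge stanza per the Debian Buster handbook (requires the bridge-utils package).
# Scratch path used for illustration; the real target is /etc/network/interfaces.
cat > ./interfaces.sample <<'EOF'
auto br1
iface br1 inet dhcp
    bridge_ports enp2s0
    bridge_stp off
    bridge_fd 0
EOF
```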
We intend to test the F32 KVM hypervisor on a Penryn box with 8 GB RAM. After installing KVM on the F32 Workstation, we were able to launch Manjaro GNOME 19.02 as a KVM guest on the F32 Workstation virtualization host, as well as a CentOS 8.1 guest, with no issues at all.
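On Fedora 32 the hypervisor stack is typically pulled in via the virtualization package group; a short sketch (requires root and network access, so it is shown here as a procedure rather than something to run blindly):

```shell
# Install the KVM/libvirt stack on Fedora 32 and start the libvirt daemon.
sudo dnf install -y @virtualization
sudo systemctl enable --now libvirtd
```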
Performance appraisal was done by attempting to set up Ubuntu 20.04 as a KVM virtualization host, installing QEMU and libvirt. Please be aware that a successful KVM install on Ubuntu 20.04 (bare metal) currently might require at least a Haswell-generation CPU.
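A sketch of the package set commonly used for KVM on Ubuntu 20.04 (this is an assumption about the post's procedure, not its exact command list; root and network access required):

```shell
# Install QEMU/libvirt and management tools on Ubuntu 20.04.
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager
# Verify hardware virtualization support (vmx = Intel VT-x, svm = AMD-V);
# a count of 0 means KVM full virtualization will not work on this CPU.
egrep -c '(vmx|svm)' /proc/cpuinfo
# Allow the current user to manage VMs without root.
sudo adduser "$USER" libvirt
```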
Performance appraisal was done by attempting to set up a Manjaro 19.0 guest as a KVM virtualization host, installing QEMU and libvirt via the native `pacman -S` command.
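The pacman invocation can be sketched as follows; the exact package names are assumptions based on the Arch/Manjaro repositories and may drift between releases:

```shell
# Assumed package set for the QEMU/libvirt stack on Manjaro 19.0.
sudo pacman -S qemu libvirt virt-manager dnsmasq ebtables
# Start the libvirt daemon so virt-manager can connect.
sudo systemctl enable --now libvirtd
```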
At the moment, the only option that brought me success installing telegram-desktop on CentOS 8.1 appears to be snap.
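On CentOS 8.1 snapd comes from EPEL; the sequence below is a sketch assumed from the snapd documentation (root and network access required):

```shell
# Enable EPEL, install snapd, and activate its socket.
sudo dnf install -y epel-release
sudo dnf install -y snapd
sudo systemctl enable --now snapd.socket
# Create the /snap symlink for classic snap support.
sudo ln -s /var/lib/snapd/snap /snap
# Finally install the client itself.
sudo snap install telegram-desktop
```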
Meanwhile, the snapd daemon can be installed on Linux Mint with no issues, so a GNOME 3 desktop installation can be done via `sudo snap install` as a quite straightforward procedure.
If you’re curious about the Tor browser, then you already know how important your privacy and anonymity online can be. And yes, the Tor browser is a great tool that can help keep you safe. But there’s a lot of confusion about its pros and cons, and especially about how it relates to VPNs.
At the moment, the URLs below worked for me as installation repositories; hopefully these repositories will stay stable for the foreseeable future. A particular source might be verified when creating a CentOS 8 virtual machine on any KVM hypervisor, say on Ubuntu 18.04 Server.
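A repository URL of this kind can be verified by pointing virt-install at it; the mirror URL below is a hypothetical placeholder for whichever repo you are checking, and the VM name and sizes are illustrative:

```shell
# Hypothetical virt-install run that verifies an installation repo URL
# by starting a text-mode CentOS 8 install from it (placeholder mirror).
sudo virt-install --name centos8-test --memory 4096 --vcpus 2 \
  --disk size=20 --os-variant centos8 --graphics none \
  --location http://mirror.example.com/centos/8/BaseOS/x86_64/os/ \
  --extra-args "console=ttyS0"
```

If the URL is broken or not a valid install tree, virt-install fails immediately while fetching the kernel and initrd, which makes this a quick smoke test for a repo.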
Following below is a sample of tuning deploy-configHA.yaml which allows obtaining a fairly functional overcloud topology with a PCS HA controllers cluster, then creating networks and an HA router and launching an F27 VM. Installing the Links text browser on the VM and surfing the Net works fast and smoothly. The sizes of the PCS controllers were tuned away from the defaults, as were the undercloud VM's size and number of VCPUs.
This is an immediate follow-up to http://lxer.com/module/newswire/view/251608/index.html. The syntax of the deploy-config.yaml template was changed to handle a three-node PCS controllers cluster deployment. The containerized deployment finishes with 1.2 GB used in the swap area, so it's just a POC done on a 32 GB VIRTHOST. At the time of writing you need at least 12 GB and 6 VCPUs for the undercloud, i.e. 48 (64) GB RAM on the VIRTHOST. Each overcloud node should have 8 GB RAM and 2 VCPUs.
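The sizing numbers above map onto tripleo-quickstart variables roughly as sketched below; the variable names follow quickstart conventions but are assumptions here, and the content is written to a scratch file rather than the real deploy config:

```shell
# Sketch of the sizing knobs in a quickstart deploy config.
# Variable names are assumed from tripleo-quickstart conventions.
cat > ./deploy-config-sizing.yml <<'EOF'
undercloud_memory: 12288   # 12 GB for the undercloud VM
undercloud_vcpu: 6
control_memory: 8192       # 8 GB per overcloud node
control_vcpu: 2
compute_memory: 8192
compute_vcpu: 2
EOF
```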
In general, we follow the "New TripleO quickstart cheatsheet" by Carlos Camacho. However, the syntax of the deploy-config.yaml template was slightly changed to handle deployment of several compute nodes. The standard configuration connecting a Fedora Workstation 27 to the VIRTHOST (32 GB) via the stack account was used as usual. A containerized overcloud deployment with one controller and two compute nodes was done and tested via external connections (sshuttle-supported) from the Fedora 27 workstation.
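The one-controller, two-compute layout can be sketched in the quickstart nodes-file style; field names are assumed from the config/nodes examples shipped with tripleo-quickstart, and the file is written to a scratch path here:

```shell
# One controller plus two computes, in tripleo-quickstart nodes-file style.
cat > ./1ctlr_2comp.yml <<'EOF'
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute
EOF
```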
Perform an HA overcloud deployment via the TripleO QuickStart master (Pike) branch with `bash quickstart.sh -R master --config config/general_config/pacemaker.yml --nodes config/nodes/3ctlr_2comp.yml $VIRTHOST`, to avoid waiting for a commit's appearance in the Delorean RDO trunk for master.
This test was inspired by the recent message "[rdo-list] OVS 2.7 on rdo", but was done on the stable Ocata branch 15.0.6 (versus master in the link mentioned above). After the OVS upgrade it allows launching a completely functional VM in the overcloud. Obviously no ovn* packages got installed. The same step on master awaits complete functionality of the TripleO QS deployment of the master branch.
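The upgrade itself reduces to replacing the openvswitch package and restarting the service on each node; a hedged sketch of that sequence (package and service names from RDO, run on the overcloud nodes):

```shell
# Assumed per-node upgrade sequence for Open vSwitch on an RDO Ocata overcloud.
sudo yum upgrade -y openvswitch
sudo systemctl restart openvswitch
# Confirm the 2.7 build is the one actually running.
ovs-vsctl --version
```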
Finally I decided to verify a two-node ML2/OVS/VXLAN deployment via the "current-passed-ci" Ocata Delorean trunks: build external and private networks and a neutron router, then launch an F24 cloud VM (nested KVM on the compute node). I have to note that the answer file is not quite the same as in Newton times.
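The network and router setup can be sketched with the neutron CLI of that era; the names, CIDRs, and the extnet physical-network label below are placeholders, not values from the post:

```shell
# External (flat provider) network and subnet; "extnet" mapping is a placeholder.
neutron net-create external --router:external \
    --provider:network_type flat --provider:physical_network extnet
neutron subnet-create external 192.168.122.0/24 --name ext-subnet \
    --allocation-pool start=192.168.122.100,end=192.168.122.150 \
    --disable-dhcp --gateway 192.168.122.1
# Tenant private network, subnet, and router wired to the external net.
neutron net-create private
neutron subnet-create private 10.0.0.0/24 --name private-subnet
neutron router-create router1
neutron router-gateway-set router1 external
neutron router-interface-add router1 private-subnet
```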
The post following below is just a proof of concept that packstack can be used for an RDO Ocata deployment using the same current-passed-ci Delorean Ocata trunk that TripleO QuickStart uses on a regular basis. It looks like packstack mostly serves as part of the testing pipeline that promotes trunk repositories to "current-passed-ci".
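A hedged sketch of such a run; the repo paths follow the trunk.rdoproject.org layout as it was for Ocata and may have moved since, and the answer-file name is illustrative:

```shell
# Point yum at the current-passed-ci Ocata trunk plus its deps repo.
sudo curl -o /etc/yum.repos.d/delorean.repo \
    https://trunk.rdoproject.org/centos7-ocata/current-passed-ci/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps.repo \
    https://trunk.rdoproject.org/centos7-ocata/delorean-deps.repo
# Install packstack, generate an answer file, edit it, then deploy.
sudo yum install -y openstack-packstack
packstack --gen-answer-file=answer-ocata.txt
packstack --answer-file=answer-ocata.txt
```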
This post is an immediate follow-up to the previous one, "Adding Second VNIC to running Fedora 24 && CentOS 7.3 Cloud VMs on RDO Ocata". I was forced to use the Nova and Neutron CLIs because python-openstackclient doesn't yet seem ready to replace the required CLI commands, for instance `nova interface-attach` and `nova interface-list`.
The post below addresses one of the currently pending questions at ask.openstack.org regarding adding a second VNIC to a currently running cloud guest, where the private IP and the VNIC associated with the neutron port live on a private subnet different from the one selected when the guest was originally booted.
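The CLI sequence for attaching the second VNIC on a different subnet can be sketched as follows; the network name, port name, and the bracketed IDs are placeholders:

```shell
# Create a port on the second private network, then attach it to the running VM.
neutron port-create private2 --name second-vnic-port
nova interface-attach --port-id <PORT-ID> <SERVER-ID-OR-NAME>
# Confirm the guest now shows both VNICs with their ports and fixed IPs.
nova interface-list <SERVER-ID-OR-NAME>
```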
In general, an Ocata overcloud deployment is more memory-consuming than Newton; minimal memory requirements are highlighted below. A minor troubleshooting step was also undertaken several times right after overcloud deployment: the command `pcs resource cleanup` was issued after detecting resources that failed to start once the original deployment completed. The problem would be gone if the VIRTHOST (48 GB) allowed allocating 8192 MB for each PCS cluster controller. The sshuttle command line was also modified to provide access to the control plane and the external network from the workstation at the same time.
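The cleanup and the widened sshuttle tunnel look roughly like this; the controller address is a placeholder and the subnet CIDRs are the usual TripleO defaults, which may differ on a given deployment:

```shell
# On a controller: clear failed-resource state so pacemaker retries the starts.
ssh heat-admin@<controller-ip> 'sudo pcs resource cleanup'
# From the workstation: tunnel the control plane and the external network at once.
sshuttle -r stack@$VIRTHOST 192.168.24.0/24 10.0.0.0/24
```

Passing both subnets to a single sshuttle invocation is what lets the workstation reach the undercloud/overcloud control plane and the floating-IP range without two separate tunnels.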