Showing headlines posted by dba477

( 1 2 3 4 5 6 ... 20 ) Next »

NetworkManager CLI and deployment of KVM guests on Ubuntu 20.04

Right after setting up an Ubuntu Focal Fossa instance on bare metal, run the following commands to set up a bridge br1 linked to the physical interface enp2s0, which was used as the normal connection to the office LAN during installation . . .
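A sketch of the kind of nmcli sequence the post describes, using the bridge and NIC names it mentions (br1, enp2s0); the old wired profile name "Wired connection 1" is an assumption:

```shell
# Create bridge br1 and enslave the physical NIC enp2s0.
sudo nmcli connection add type bridge ifname br1 con-name br1
sudo nmcli connection add type bridge-slave ifname enp2s0 master br1
# Optionally disable STP if the office switch does not expect it.
sudo nmcli connection modify br1 bridge.stp no
# Bring the bridge up and take down the old wired profile.
sudo nmcli connection up br1
sudo nmcli connection down "Wired connection 1"
```

Guests defined with `<interface type='bridge'><source bridge='br1'/>` then appear as ordinary hosts on the office LAN.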

Just another KVM setup on Debian Buster 10.3

The sequence of steps and the bridge network configuration on a native Debian Buster 10.3 host seemed to me a bit different from the manuals currently available on the Net. Specifically, I undertook some additional steps to fix an error with Radeon kernel modesetting; also, configuring the bridge to the physical LAN is supposed to be done differently from how it works on LMDE 4.
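The post's exact Radeon fix is not quoted here; one common way to resolve radeon modesetting errors on Buster (an assumption, not necessarily the author's steps) is to enable the non-free repo section and install the missing firmware blobs:

```shell
# Enable contrib/non-free for the buster suite (adjust if sources.list differs).
sudo sed -i 's/buster main$/buster main contrib non-free/' /etc/apt/sources.list
sudo apt update
# Firmware for Radeon GPUs lives in the non-free firmware-amd-graphics package.
sudo apt install firmware-amd-graphics
# Rebuild the initramfs so the firmware is available at modesetting time.
sudo update-initramfs -u
```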

Setting up KVM on LMDE 4 in Debian Buster Style (Classic)

We start with the installs proposed for Ubuntu Focal Fossa and create the file /etc/network/interfaces exactly as recommended in the official Debian Buster manual, so that guests can be launched and reached from anywhere on the office LAN.
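For reference, a minimal /etc/network/interfaces bridge stanza in the Debian style (requires the bridge-utils package); the interface name enp2s0 and the DHCP choice are assumptions:

```
auto lo
iface lo inet loopback

# Bridge br0 over the physical NIC; guests attach to br0.
auto br0
iface br0 inet dhcp
    bridge_ports enp2s0
    bridge_stp off
    bridge_fd 0
```

The physical NIC itself gets no address; the bridge takes over the LAN connection.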

Dev branch Fedora 32 versus dev branch Ubuntu 20.04 in relation to the KVM Hypervisor

We intend to test the F32 KVM Hypervisor on a Penryn box with 8 GB RAM. After installing KVM on the F32 Workstation, we are able to launch Manjaro Gnome 19.02 as a KVM guest on the F32 Workstation Virtualization Host, as well as a CentOS 8.1 guest, with no issues at all.

Nested KVM performance evaluation on Ubuntu 20.04 (Focal Fossa)

The performance appraisal was done by setting up Ubuntu 20.04 as a KVM virtualization host and installing QEMU and Libvirt. Please be aware that a successful KVM install on Ubuntu 20.04 (bare metal) might currently require at least a Haswell-generation CPU.
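A sketch of the QEMU/libvirt install on Ubuntu 20.04 (package names per the Ubuntu 20.04 archive; `kvm-ok` comes from the cpu-checker package):

```shell
# Install the hypervisor stack and management tools.
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virt-manager bridge-utils cpu-checker
# Check that the CPU/BIOS actually exposes hardware virtualization.
kvm-ok
sudo virt-host-validate
# Allow the current user to manage VMs without sudo (re-login required).
sudo usermod -aG libvirt $USER
```

On a nested setup, `kvm-ok` is also a quick way to confirm that VMX/SVM was passed through to the guest.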

Nested KVM performance evaluation on Linux Manjaro 19.0 Guest on Virthost CentOS 8.1

The performance appraisal was done by setting up a Manjaro 19.0 guest as a KVM virtualization host, installing QEMU and Libvirt via the native `pacman -S` command.
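The `pacman -S` step the post refers to would look roughly like this (package names as shipped on Manjaro/Arch around 2020; exact selection is an assumption):

```shell
# Install QEMU, libvirt and the graphical manager, plus NAT helpers.
sudo pacman -S qemu libvirt virt-manager ebtables dnsmasq
# Start and enable the libvirt daemon.
sudo systemctl enable --now libvirtd
```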

The only option that brought me success installing telegram-desktop on CentOS 8.1 appears to be snap

  • https://dbaxps.blogspot.com; By Boris Derzhavets (Posted by dba477 on Feb 19, 2020 2:34 PM EDT)
  • Story Type: Security, Tutorial; Groups: Red Hat
At the moment, the only option that brought me success installing telegram-desktop on CentOS 8.1 appears to be snap.
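The snap route on CentOS 8.1 follows the documented snapd-on-CentOS steps (EPEL, the snapd socket, the /snap symlink):

```shell
# snapd ships in EPEL on CentOS 8.
sudo dnf install epel-release
sudo dnf install snapd
# snapd is socket-activated.
sudo systemctl enable --now snapd.socket
# Enable classic snap support via the /snap symlink.
sudo ln -s /var/lib/snapd/snap /snap
# Install the app from the Snap Store.
sudo snap install telegram-desktop
```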

Install Gnome 3.28 on top of Linux Mint 19.3 via snapd daemon

  • https://dbaxps.blogspot.com; By Boris Derzhavets (Posted by dba477 on Feb 6, 2020 7:12 AM EDT)
  • Story Type: Tutorial; Groups: GNOME, Ubuntu
At the moment, the snapd daemon can be installed on Linux Mint with no issues, so the Gnome 3 desktop installation can be done via `sudo snap install` as a quite straightforward procedure.
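On Mint 19.3 snapd comes from the stock Ubuntu-based repos; the exact snap name the post installs is not quoted, so the name below is an assumption (the Gnome 3.28 snap published at the time was the gnome-3-28-1804 platform snap):

```shell
# Install the snap daemon from the Mint/Ubuntu repos.
sudo apt install snapd
# Install the Gnome 3.28 snap (name is an assumption, not from the post).
sudo snap install gnome-3-28-1804
```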

The Ultimate Guide to Tor Browser (with Important Tips) 2020

If you’re curious about the Tor Browser, then you already know how important your privacy and anonymity online can be. And yes, the Tor Browser is a great tool that can help keep you safe. But there’s a lot of confusion about its pros and cons, and especially about how it relates to VPNs.

CentOS 8.0.1905 network installs on bare metal via external repos succeeded (non-US location)

  • https://dbaxps.blogspot.com; By Boris Derzhavets (Posted by dba477 on Sep 29, 2019 10:05 AM EDT)
  • Story Type: Tutorial; Groups: Red Hat
At the moment, the URLs below worked for me as installation repositories; hopefully these repositories will stay stable for the foreseeable future. A particular source might be verified when creating a CentOS 8 virtual machine on any KVM hypervisor, say on an Ubuntu 18.04 Server.

Attempt at evaluating TripleO QS (Master) overcloud containerized HA deployment on a 32 GB VIRTHOST

Following below is a sample of tuning deploy-configHA.yaml which allows obtaining a fairly functional overcloud topology with a PCS HA Controllers cluster, then creating networks and an HA router and launching an F27 VM. Installing the links text browser on the VM lets you surf the Net pretty fast and smoothly. The sizes of the PCS Controllers were tuned, as were the default size of the undercloud and the number of VCPUs running the undercloud VM.

TripleO QuickStart (Master) overcloud containerized HA deployment

This is an immediate follow-up to http://lxer.com/module/newswire/view/251608/index.html The syntax of the deploy-config.yaml template was changed to handle a three-node PCS Controllers cluster deployment. The containerized deployment finishes up with 1.2 GB in the swap area, so it's just a POC done on a 32 GB VIRTHOST. At the time of writing you need at least 12 GB and 6 VCPUs for the undercloud, i.e. 48 (64) GB RAM on the VIRTHOST. Each overcloud node should have 8 GB RAM and 2 VCPUs.

TripleO QuickStart (Master) overcloud containerized deployment with several compute nodes.

In general, we follow the "New TripleO quickstart cheatsheet" by Carlos Camacho. However, the syntax of the deploy-config.yaml template was slightly changed to handle deployment of several compute nodes. Also, the standard configuration connecting a Fedora Workstation 27 to the VIRTHOST (32 GB) via the stack account has been used as usual. A containerized deployment of an overcloud with one controller and two compute nodes has been done and tested via external connections (sshuttle-supported) from the Fedora 27 workstation.

Attempt to verify patch for 'metadata service PicklingError' on TripleO QS Master (Pike) via HA overcloud deployment

Perform an HA overcloud deployment via the TripleO QuickStart Master (Pike) branch with `bash quickstart.sh -R master --config config/general_config/pacemaker.yml --nodes config/nodes/3ctlr_2comp.yml $VIRTHOST`, to avoid waiting for the commit to appear in the Delorean RDO trunk for Master.

Attempt to upgrade OVS to 2.7 on HA overcloud topology RDO Ocata

This test is inspired by the recent message "[rdo-list] OVS 2.7 on rdo"; however, it has been done on the stable Ocata branch 15.0.6 (versus master in the link mentioned above). After the OVS upgrade it allows launching a completely functional VM in the overcloud. Obviously, no ovn* packages got installed. The same step on Master is awaiting complete functionality of the TripleO QS deployment of the Master branch.

RDO Ocata packstack Multi Node deployment via the most recent tested Ocata trunk

Finally, I decided to verify a two-node ML2&OVS&VXLAN deployment via the "current-passed-ci" Ocata Delorean trunks: build external and private networks and a neutron router, then launch an F24 cloud VM (nested KVM on the compute node). I have to note that the answer file is not quite the same as in Newton times.
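A sketch of the two-node packstack flow; the answer-file keys are standard packstack options, but the IP addresses and VXLAN choice below are placeholders, not values from the post:

```shell
# Generate a default answer file, then edit it for two nodes.
sudo packstack --gen-answer-file=answer-ocata.txt
# In answer-ocata.txt, split controller and compute roles, e.g.:
#   CONFIG_CONTROLLER_HOST=192.168.1.127
#   CONFIG_COMPUTE_HOSTS=192.168.1.137
#   CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
# Run the deployment against the edited answer file.
sudo packstack --answer-file=answer-ocata.txt
```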

RDO Ocata packstack AIO deployment via the most recent tested trunk

The post following below is just a proof of concept that packstack can be used for an RDO Ocata deployment using the same current-passed-ci Delorean Ocata trunk as TripleO QuickStart does on a regular basis. It looks like packstack mostly serves as part of the testing pipeline that promotes trunk repositories to "current-passed-ci".
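The AIO proof of concept boils down to enabling the trunk repos and running packstack in all-in-one mode; how the Delorean trunk repo files are fetched is an assumption here, as the post's exact URLs are not quoted:

```shell
# Point yum at the Ocata current-passed-ci Delorean trunk (repo files assumed
# to be dropped into /etc/yum.repos.d/ beforehand), then install packstack.
sudo yum install -y openstack-packstack
# Single-host deployment: controller, compute and networking on one box.
sudo packstack --allinone
```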

Switching to newly created project's private network running Fedora 24 && CentOS 7.3 Cloud VMs on RDO Ocata

This post is an immediate follow-up to the previous one, "Adding Second VNIC to running Fedora 24 && CentOS 7.3 Cloud VMs on RDO Ocata". I was forced to use the Nova and Neutron CLIs because python-openstackclient doesn't yet seem ready to replace the required CLI commands, for instance `nova interface-attach` and `nova interface-list`.
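The Nova CLI commands named above are used roughly like this; the network ID and instance name are placeholders, not values from the post:

```shell
# Find the ID of the target private network.
neutron net-list
# Attach a second VNIC on that network to the running instance.
nova interface-attach --net-id <private2-net-id> <instance-name>
# Verify the new port, its MAC and its fixed IP.
nova interface-list <instance-name>
```

Inside the guest, the new NIC still needs to be brought up (e.g. via dhclient) before the second fixed IP answers.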

Adding a Second VNIC to a Fedora 24 VM running in the overcloud on RDO Ocata

The posting below addresses one of the currently pending questions at ask.openstack.org regarding adding a second VNIC to a currently running cloud guest, where the private IP and the VNIC associated with the neutron port are located on a different private subnet from the one that was selected when this guest was originally booted.

TripleO QuickStart Ocata branch deployment with feature sets and nodes configuration (topology) separated

In general, Ocata overcloud deployment is more memory-consuming than Newton; the minimal memory requirements are highlighted below. Also, a minor troubleshooting step was undertaken several times right after overcloud deployment: the command `pcs resource cleanup` was issued after detecting resources that failed to start once the original deployment completed. The problem above would be gone if the VIRTHOST (48 GB) allowed allocating 8192 MB for each PCS cluster controller. The sshuttle command line was also modified to provide access to the control plane and the external network from the workstation at the same time.
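The troubleshooting step and the sshuttle tweak look roughly like this; the subnets shown are typical TripleO/overcloud defaults, not values quoted in the post:

```shell
# On a controller: see which resources failed to start, then clear the
# failure records so pacemaker retries them.
sudo pcs status
sudo pcs resource cleanup

# From the workstation: tunnel both the control plane and the external
# network through the VIRTHOST in a single sshuttle session.
sshuttle -r stack@$VIRTHOST 192.168.24.0/24 10.0.0.0/24
```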
