Showing headlines posted by dba477
Up to date procedure of creating cinder's ThinLVM based cloud instances on F20 Havana Compute Node
The schema below avoids issuing two separate commands: one to create the cinder volume and a second to boot an instance from that volume. Instead, it creates the volume on the F20 Controller node and boots the instance from that volume on the F20 Compute node in a single step.
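A hedged sketch of that single step (flavor, image ID, network ID, and instance name are placeholders, and the exact flags used in the post are not quoted here): nova's `--block-device` mapping asks cinder to build the bootable volume and boots the instance from it in one call.

    # hypothetical IDs and names: creates a 5 GB bootable cinder volume from the glance
    # image and boots the instance from that volume in a single nova call
    nova boot --flavor m1.small \
      --block-device source=image,id=<glance-image-id>,dest=volume,size=5,shutdown=preserve,bootindex=0 \
      --nic net-id=<tenant-net-id> \
      VF20RS01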
USB Redirection hack
I clearly understand that only an incomplete Havana RDO setup allows me to activate Spice USB redirection when communicating with cloud instances.
There is no dashboard (administrative web console) on the cluster. All information about nova instance status and neutron subnets, routers, and ports has to be obtained via the CLI, and managing instances, subnets, routers, ports, and rules is likewise done via the CLI, taking care to source the appropriate "keystonerc_user" file so that commands run in the environment of the particular user of the particular tenant.
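For illustration, a packstack-style credentials file and the CLI session that follows look roughly like this (user, tenant, password, and IP are placeholders, not values from the post):

    # hypothetical keystonerc_demo contents; packstack generates one file per user/tenant
    export OS_USERNAME=demo
    export OS_TENANT_NAME=ostenant
    export OS_PASSWORD=secret
    export OS_AUTH_URL=http://192.168.1.127:5000/v2.0/
    export PS1='[\u@\h \W(keystone_demo)]\$ '
    # source it, then manage that tenant's resources from the CLI
    source keystonerc_demo
    nova list
    neutron router-list
    neutron subnet-list
    neutron port-list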
Ongoing testing
The following was accepted as the correct answer on 02/18/2014 at https://ask.openstack.org/en/question/9365/trouble-installin... OpenStack Nova determined that my Compute node can run no more than 5 cloud instances at a time. So, to create a new one I just have to keep no more than four entries in `nova list`; then a new instance can be created reliably. This has been tested on two "Two Node Neutron GRE+OVS" systems (while watching `tail -f /var/log/nova/scheduler.log` in another window for errors connecting to the AMQP server). It looks like the AMQP server eventually drops its connection to the nova and neutron-server services, which may require restarting the qpidd and openstack-nova-scheduler services.
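The checks and restarts described above, as commands:

    nova list                                  # keep no more than four entries here
    tail -f /var/log/nova/scheduler.log        # watch in another window for AMQP errors
    service qpidd restart
    service openstack-nova-scheduler restart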
Setup Light Weight X Windows environment on Fedora 20 Cloud instance
The following builds a lightweight X Windows environment on a Fedora 20 cloud instance and demonstrates running the same instance in a Spice session via virt-manager (the Controller connects to the Compute node via virt-manager).
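A minimal sketch of such an environment, assuming an LXDE package group plus the QXL driver and Spice agent (the post's actual package list is not quoted here):

    # a hedged sketch, run inside the F20 instance
    yum -y groupinstall "LXDE Desktop"
    yum -y install xorg-x11-drv-qxl spice-vdagent
    systemctl enable spice-vdagentd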
Surfing the Internet & SSH connection to a cloud instance of Fedora 20 or Ubuntu 13.10 via Neutron GRE
When you first encounter GRE tunnelling you have to understand that GRE encapsulation adds 24 bytes of overhead, and a lot of problems arise from that; see http://www.cisco.com/en/US/tech/tk827/tk369/technologies_tec... In particular, the two-node (Controller+Compute) RDO Havana cluster on Fedora 20 hosts that I built per the guidelines at http://kashyapc.wordpress.com/2013/11/23/neutron-configs-for... is a Neutron GRE cluster. Hence, for any instance that has been set up (Fedora or Ubuntu), network communication problems show up immediately: apt-get update simply refuses to work on an Ubuntu Salamander Server instance (the default MTU for the Ethernet interface is 1500).
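A common workaround, shown here as an assumption rather than a quote from the post, is to have neutron's dnsmasq push a reduced MTU (DHCP option 26) to the instances so packets fit inside the GRE tunnel:

    # push MTU 1454 to instances via DHCP option 26
    echo 'dhcp-option-force=26,1454' > /etc/neutron/dnsmasq-neutron.conf
    openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT \
      dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf
    service neutron-dhcp-agent restart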
"Setting up Two Physical-Node OpenStack RDO Havana + Neutron GRE" on Fedora 20 boxes
Cloud instances running on the Compute node can perform commands like nslookup and traceroute. `yum install` and `yum -y update` work on a Fedora 19 instance; however, at the moment the network on the VF19 instance is stable but relatively slow. CentOS 6.5 with "RDO Havana+Glusterfs+Neutron VLAN" works much faster on the same box (dual-booting with F20). That is a first impression.
Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN per Andrew Lau
Why CentOS 6.5? It has the libgfapi library (http://www.gluster.org/2012/11/integration-with-kvmqemu/) backported, which allows QEMU to work natively and directly with glusterfs 3.4.1 volumes.
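For illustration, with libgfapi in place QEMU can address a Gluster volume directly over a gluster:// URI, with no FUSE mount in the data path (host and volume names below are assumptions, and the qemu build must have the gluster block driver enabled):

    # hypothetical host/volume names
    qemu-img create -f qcow2 gluster://gluster1.localdomain/cinder-volumes/test.qcow2 10G
    qemu-img info gluster://gluster1.localdomain/cinder-volumes/test.qcow2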
oVirt 3.3.2 hackery on Fedora 19
My final target was to create a two-node oVirt 3.3.2 cluster and virtual machines using replicated glusterfs 3.4.1 volumes based on XFS-formatted partitions. Choosing the IPv4 firewall with iptables for tuning the cluster environment and synchronization is my personal preference. Now I also know that PostgreSQL requires a sufficient shared-memory allocation, just like Informix or Oracle.
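A hedged illustration of that shared-memory point, with placeholder values rather than anything taken from the post: raising the SysV limits from which the engine's PostgreSQL instance allocates its shared buffers.

    # placeholder values; adjust to the host's RAM
    echo 'kernel.shmmax = 68719476736' >> /etc/sysctl.conf
    echo 'kernel.shmall = 4294967296'  >> /etc/sysctl.conf
    sysctl -p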
Setting up Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN
OpenStack is probably the largest and fastest-growing open-source project out there. Unfortunately, that means fast changes, new features, and different install methods. Red Hat's RDO simplifies this.
oVirt 3.3 hackery on Fedora 19
My final target was to create a two-node oVirt 3.3 cluster and virtual machines using replicated glusterfs 3.4.1 volumes. Choosing firewalld as the configured firewall seems unacceptable for this purpose at the moment; selecting the iptables firewall allows the task to be completed.
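A minimal sketch of that switch on Fedora 19 (the specific oVirt/gluster ports to open are not shown here):

    # switch from firewalld to the classic iptables service
    yum -y install iptables-services
    systemctl stop firewalld && systemctl disable firewalld
    systemctl enable iptables && systemctl start iptables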
Glusterfs replicated volume based Havana 2013.2 instances on Server With GlusterFS 3.4.1 Fedora 19 in two node cluster
A two-node gluster 3.4.1 cluster setup follows below. Please view first these nice articles: http://www.mirantis.com/blog/openstack-havana-glusterfs-and-... and http://wiki.openstack.org/wiki/CinderSupportMatrix.
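For illustration, the replicated volume itself can be created along these lines (host names and brick paths are assumptions; run from the first node once both peers are up):

    gluster peer probe node2.localdomain
    gluster volume create cinder-volumes replica 2 \
      node1.localdomain:/bricks/brick1/cinder node2.localdomain:/bricks/brick1/cinder
    gluster volume start cinder-volumes
    gluster volume info cinder-volumes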
Glusterfs's volume based Havana RC1 instances on NFS-Like Standalone Storage Server With GlusterFS 3.4.1 Fedora 19
Following http://www.gluster.org/category/openstack/, this is a snapshot to show the difference between the Havana and Grizzly releases with GlusterFS.
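A hedged sketch of the Havana-side wiring, pointing cinder at a GlusterFS share (share name and host are placeholders, not values from the post):

    echo 'node1.localdomain:/cinder-volumes' > /etc/cinder/shares.conf
    openstack-config --set /etc/cinder/cinder.conf DEFAULT \
      volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
    openstack-config --set /etc/cinder/cinder.conf DEFAULT \
      glusterfs_shares_config /etc/cinder/shares.conf
    for svc in api scheduler volume; do service openstack-cinder-$svc restart; done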
Neutron basic RDO setup (havana) to have original LAN as external on Fedora 19
Neutron basic RDO setup (Havana) with the original LAN as the external network on Fedora 19, and an attempt to test OpenStack RDO Havana on a Fedora 19 box.
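A minimal sketch of mapping the existing LAN as the external network (subnet, allocation pool, and gateway values are hypothetical):

    source keystonerc_admin
    neutron net-create ext-net --router:external=True
    neutron subnet-create ext-net 192.168.1.0/24 --name ext-subnet --disable-dhcp \
      --allocation-pool start=192.168.1.100,end=192.168.1.150 --gateway 192.168.1.1
    neutron router-create router1
    neutron router-gateway-set router1 ext-net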
Quantum basic RDO setup (grizzly) to have original LAN as external on Fedora 19
A tutorial to produce a Quantum basic RDO setup (grizzly) with original LAN as external on Fedora 19.
Quantum basic RDO setup (grizzly) to have original LAN as external on CentOS 6.4
A tutorial on how to set up an open Packstack installation with external connectivity.
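A hedged sketch of such a run, assuming the RDO Grizzly repository is already enabled on the CentOS 6.4 box (the answer-file name is illustrative):

    yum -y install openstack-packstack
    packstack --allinone
    # re-apply tweaks (e.g. external bridge settings) from the generated answer file
    packstack --answer-file=packstack-answers.txt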
Set up qemu-kvm-1.0+noroms as spice enabled qemu server vs qemu-kvm-spice on Ubuntu Precise
This post follows up on Bug #998435, "qemu-kvm-spice doesn't support spice/qxl installs".
The build below is based on the upstream (vs. Linaro) version of qemu-kvm 1.0 on Ubuntu Precise. See the bug description above for details of the qemu-kvm-spice misbehavior.
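A hedged sketch of configuring an upstream qemu-kvm 1.0 tree for Spice/QXL on Precise (package names and flags are assumptions, and deb-src entries must be enabled for build-dep):

    sudo apt-get build-dep qemu-kvm
    sudo apt-get install libspice-server-dev libspice-protocol-dev
    ./configure --prefix=/usr --enable-spice
    make -j"$(nproc)" && sudo make install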
Set up Spice-Gtk 0.11 with USB redirection on Ubuntu Precise
The build requires spice & spice-protocol 0.10.1 and the most recent usbredir, 0.4.3, as of 04/02/2012. See also the recent commit at http://cgit.freedesktop.org/spice/spice-gtk converted to 0001-usbredir-Check-for-existing-usb-channels-after-libus.patch for spice-gtk-0.11.
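A hedged sketch of the spice-gtk 0.11 build with USB redirection (tarball location and configure flags are assumptions, not quoted from the post):

    wget http://spice-space.org/download/gtk/spice-gtk-0.11.tar.bz2
    tar xjf spice-gtk-0.11.tar.bz2 && cd spice-gtk-0.11
    patch -p1 < ../0001-usbredir-Check-for-existing-usb-channels-after-libus.patch
    ./configure --prefix=/usr --with-gtk=2.0 --enable-usbredir
    make && sudo make install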
Set up Spice-Gtk 0.9 with USB redirection on Ubuntu Precise
New upstream release:
- Add USB redirection support; see Hans' comments in the log and that post for details: http://hansdegoede.livejournal.com/11084.html
- Introduce SpiceGtkSession to deal with session-wide Gtk events, such as clipboard, instead of doing it per display
- Many cursor and keyboard handling improvements
- Handle the new “semi-seamless” migration
- Support new Spice mini-headers
- Better coroutines: fibers on Windows and jmp on Linux
- Add Vala vapi bindings generation
- Add command line options for setting the cache size and the glz window size
- Add a USB device selection widget to libspice-client-gtk
- Many bug fixes and code improvements
Qemu-kvm 1.0 & Spice-protocol 0.10.1 & Spice-Gtk 0.8 USB Redirection on Ubuntu Precise
In other words, this posting could be named “Set up Spice-Gtk 0.8 on Ubuntu Precise”. A short list of the changes per [1]: add USB redirection support; see Hans' comments in the log and that post for details: http://hansdegoede.livejournal.com/11084.html
QEMU-KVM 1.0 patching to support USB Redirection for Ubuntu Precise as of 12/29/2011
Two options for building a patched QEMU-KVM 1.0 are considered below.
The first: qemu-kvm 1.0 is built from the qemu-kvm-1.0-usbredir branch as of 12/29/2011, which contains all the required USB redirection patches on top of the QEMU-KVM 1.0 release.
The second: patching QEMU-KVM 1.0 (the core git tree) with an extracted patch set to support USB redirection on Ubuntu Precise, generating the patch set via commands . . .
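One hypothetical way to generate such a patch set is git format-patch, assuming a local clone that already has the qemu-kvm-1.0 tag and the qemu-kvm-1.0-usbredir branch fetched (the post's actual commands are elided above):

    cd qemu-kvm
    git format-patch qemu-kvm-1.0..qemu-kvm-1.0-usbredir -o ../usbredir-patches
    # apply the extracted series onto a pristine qemu-kvm 1.0 tree
    cd ../qemu-kvm-1.0 && git am ../usbredir-patches/*.patch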