Thursday, March 13, 2014

VNC Console in Dashboard on Two Node Controller&Compute Neutron GRE+OVS+Gluster Fedora 20 Cluster

*********************************************************************************
UPDATE on 03/16/2014. The MTU issue is resolved via "dhcp-option=26,1454". View [2].
*********************************************************************************
It was just a typo in the last line of dhcp_agent.ini :-

# cat dhcp_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
external_network_bridge = br-ex
ovs_use_veth = True
use_namespaces = True
dnsmasq_config_file = /etc/neutron/dnsmasq.conf

# cat  dnsmasq.conf
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
dhcp-option=26,1454
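
After correcting the reference, restart the DHCP agent so that dnsmasq is respawned with the config above. A minimal check (the ps filter is just an illustration; the dnsmasq command line should carry --conf-file=/etc/neutron/dnsmasq.conf):

# systemctl restart neutron-dhcp-agent
# ps -ef | grep dnsmasq | grep conf-file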

# tail -f /var/log/neutron/dnsmasq.log
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  4 option: 51 lease-time  2m
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  4 option: 58 T1  53s
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  4 option: 59 T2  1m38s
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  4 option:  1 netmask  255.255.255.0
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  4 option: 28 broadcast  40.0.0.255
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size: 14 option: 15 domain-name  openstacklocal
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size: 13 option: 12 hostname  host-40-0-0-7
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  4 option:  3 router  40.0.0.1
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  4 option:  6 dns-server  83.221.202.254
Mar 16 13:57:52 dnsmasq-dhcp[26111]: 1830014980 sent size:  2 option: 26 mtu  1454
********************************************************************************

This post follows up http://lxer.com/module/newswire/view/197613/index.html . In particular, it can be performed after Basic Setup to make system management more comfortable than CLI only. For instance, assigning a floating IP becomes just a mouse click instead of a shell script :-

# Create a new floating IP in the external network and capture its id and address
neutron floatingip-create ext > newip.log
NewIpId=`grep ^"| id" newip.log | awk '{print $4}'`
NewIp=`grep "192.168.1" newip.log | awk '{print $4}'`
# Look up the instance (name passed as $1) and its Neutron port
NewVmId=`nova list | grep $1 | awk '{print $2}'`
NewDevId=`neutron port-list --device-id $NewVmId | grep "subnet_id" | awk '{print $2}'`
# Associate the floating IP with the instance's port
neutron floatingip-associate $NewIpId $NewDevId
echo "New floating IP assigned : "$NewIp


It's also easy to create an instance via the Dashboard, placing the following customization script (the analog of --user-data) into the Post-Creation panel :

    #cloud-config
    password: mysecret
    chpasswd: { expire: False }
    ssh_pwauth: True


   This allows logging in as "fedora" and setting MTU=1454 inside the VM (needed for GRE tunneling).
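
   For reference, a hedged sketch of how the same cloud-config could also pin the MTU explicitly (the runcmd entry is my illustration, not part of the original script):

    #cloud-config
    password: mysecret
    chpasswd: { expire: False }
    ssh_pwauth: True
    runcmd:
     - ip link set dev eth0 mtu 1454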
   

   Key-pair submitted upon creation works like this :

[root@dfw02 Downloads(keystone_boris)]$ ssh -l fedora -i key2.pem  192.168.1.109
Last login: Sat Mar 15 07:47:45 2014
 

[fedora@vf20rs015 ~]$ uname -a
Linux vf20rs015.novalocal 3.13.6-200.fc20.x86_64 #1 SMP Fri Mar 7 17:02:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
 

[fedora@vf20rs015 ~]$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1454
        inet 40.0.0.7  netmask 255.255.255.0  broadcast 40.0.0.255
        inet6 fe80::f816:3eff:fe1e:1de6  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:1e:1d:e6  txqueuelen 1000  (Ethernet)
        RX packets 225  bytes 25426 (24.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 221  bytes 23674 (23.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0




The setup described at the link mentioned above was originally suggested by Kashyap Chamarthy for VMs running on a non-default Libvirt subnet. From my side came an attempt to reproduce this setup on physical F20 boxes and an arbitrary network not connected to Libvirt; preventive updates of the mysql.user table, which allowed remote connections for nova-compute and neutron-openvswitch-agent from Compute to Controller; and changes to /etc/sysconfig/iptables to enable the Gluster 3.4.2 setup on F20 systems (view http://bderzhavets.blogspot.com/2014/03/setup-gluster-342-on-two-node-neutron.html). I have also fixed the typo in dhcp_agent.ini (the reference to "dnsmasq.conf") and added the line "dhcp-option=26,1454" to dnsmasq.conf. The updated configuration files are critical for launching an instance without a "Customization script" and allow working with the usual ssh keypair. Actually, once these updates are done, an instance gets created with MTU 1454. View [2].

This setup is pretty much focused on the ability to transfer Neutron metadata from the Controller to Compute F20 nodes and is done manually, with no answer-files. It stops exactly at the point where `nova boot ..` loads an instance on Compute, which obtains an internal IP via the DHCP running on the Controller and may be assigned a floating IP to be able to communicate with the Internet. No attempt to set up the Dashboard had been made, since the core target was Neutron GRE+OVS functionality (just a proof of concept).
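For completeness, a hedged sketch of the /etc/sysconfig/iptables lines opened for Gluster 3.4.2 (glusterd management ports plus a brick-port range; the exact range depends on the number of bricks, so these values are an illustration):

-A INPUT -p tcp -m multiport --dports 24007:24008 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49155 -j ACCEPT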


Basic updated setup procedure

Final draft for Nova & Neutron Configuration files

I also have to note that http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt has in the meantime (03/26/2014) dhcp_agent.ini and l3_agent.ini identical, i.e. the latter is missing the rows

metadata_ip = 192.168.1.127
metadata_port = 8700

# and contains this reference instead :-

dnsmasq_config_file = /etc/neutron/dnsmasq.conf


Setup

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling), Dashboard
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

dfw02.localdomain   -  Controller (192.168.1.127)
dfw01.localdomain   -  Compute    (192.168.1.137)

1. First step follows  http://docs.openstack.org/havana/install-guide/install/yum/content/install_dashboard.html   and  http://docs.openstack.org/havana/install-guide/install/yum/content/dashboard-session-database.html

Sequence of actions per manuals above :-

# yum install memcached python-memcached mod_wsgi openstack-dashboard

Modify the value of CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings to match the one set in /etc/sysconfig/memcached.
Open /etc/openstack-dashboard/local_settings and look for these lines:

CACHES = {
    'default': {
        'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION' : '127.0.0.1:11211'
    }
}
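
For reference, the stock /etc/sysconfig/memcached on Fedora 20, whose PORT the LOCATION above must match:

PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS=""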


Update the ALLOWED_HOSTS in local_settings.py to include the addresses you wish to access the dashboard from.

Edit /etc/openstack-dashboard/local_settings:

   
ALLOWED_HOSTS = ['Controller-IP', 'my-desktop']

This guide assumes that you are running the Dashboard on the controller node. You can easily run the dashboard on a separate server, by changing the appropriate settings in local_settings.py.

Edit /etc/openstack-dashboard/local_settings and change OPENSTACK_HOST to the hostname of your Identity Service:
   
OPENSTACK_HOST = "Controller-IP"

Start the Apache web server and memcached:

# service httpd restart
# systemctl start memcached
# systemctl enable memcached

To configure the MySQL database, create the dash database:

mysql> CREATE DATABASE dash;

Create a MySQL user for the newly-created dash database that has full control of the database. Replace DASH_DBPASS with a password for the new user (here 'fedora' is used):

mysql> GRANT ALL ON dash.* TO 'dash'@'%' IDENTIFIED BY 'fedora';
mysql> GRANT ALL ON dash.* TO 'dash'@'localhost' IDENTIFIED BY 'fedora';



In the local_settings file /etc/openstack-dashboard/local_settings, set :-

SESSION_ENGINE = 'django.contrib.sessions.backends.db'
DATABASES = {
    'default': {
        # Database configuration here
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'dash',
        'USER': 'dash',
        'PASSWORD': 'fedora',
        'HOST': 'Controller-IP',
        'default-character-set': 'utf8'
    }
}


After configuring the local_settings as shown, you can run the manage.py syncdb command to populate this newly-created database.

# /usr/share/openstack-dashboard/manage.py syncdb


Attempting to run syncdb you might get an error like "Access denied for user 'dash'@'yourhost' (using password: YES)". Then (for instance, in my case) :-


# mysql -u root -p

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;


MariaDB [(none)]> insert into mysql.user(User,Host,Password) values ('dash','dallas1.localdomain',' ');
Query OK, 1 row affected, 4 warnings (0.00 sec)


MariaDB [(none)]> UPDATE mysql.user SET Password = PASSWORD('fedora')
    -> WHERE User = 'dash' ;
Query OK, 1 row affected (0.00 sec)
Rows matched: 3  Changed: 1  Warnings: 0
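
As in the nova case below, a FLUSH PRIVILEGES is needed for the password change to take effect:

MariaDB [(none)]> FLUSH PRIVILEGES;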


MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;




.  .  .  .  .


| dash     | %                   | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
| dash     | localhost           | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
| dash     | dallas1.localdomain | *C9E492EC67084E4255B200FD34BDF396E3CE1A36 |
+----------+---------------------+-------------------------------------------+
20 rows in set (0.00 sec)
 

That is exactly the same issue which comes up when starting the openstack-nova-scheduler & openstack-nova-conductor services during basic installation of the Controller on Fedora 20. View Basic setup, in particular :-

Bring the table mysql.user into the proper state

shell> mysql -u root -p
mysql> insert into mysql.user (User,Host,Password) values ('nova','dfw02.localdomain',' ');
mysql> UPDATE mysql.user SET Password = PASSWORD('nova')
    ->     WHERE User = 'nova';
mysql> FLUSH PRIVILEGES;
  
Start, enable nova-{api,scheduler,conductor} services

  $ for i in start enable status; \
    do systemctl $i openstack-nova-api; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-scheduler; done

  $ for i in start enable status; \
    do systemctl $i openstack-nova-conductor; done
 

# service httpd restart


Finally on Controller (dfw02  - 192.168.1.127)  file /etc/openstack-dashboard/local_settings  looks like http://bderzhavets.wordpress.com/2014/03/14/sample-of-etcopenstack-dashboardlocal_settings/


 
At this point the dashboard is functional, but instance console sessions are unavailable via the dashboard. I didn't get any error code, just :-

Instance Detail: VF20RS03
Overview
Log
Console
Loading…

2. Second step (view [1]). Make sure you ran the following on the Controller (I apologize for not mentioning this step for about 24 hr) :-

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@dfw01

[root@dfw02 ~(keystone_boris)]$ ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5903:127.0.0.1:5903 -N -f -l root 192.168.1.137
[root@dfw02 ~(keystone_boris)]$ ssh -L 5904:127.0.0.1:5904 -N -f -l root 192.168.1.137
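
The five forwards above can also be set up in a loop; a minimal sketch:

for p in 5900 5901 5902 5903 5904
do
    ssh -L $p:127.0.0.1:$p -N -f -l root 192.168.1.137
done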


Compute's  IP is 192.168.1.137

**************************************
Controller  dfw02 - 192.168.1.127
**************************************
Update /etc/nova/nova.conf:

novncproxy_host=0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html


[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-consoleauth.service
ln -s '/usr/lib/systemd/system/openstack-nova-consoleauth.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-consoleauth.service'
[root@dfw02 ~(keystone_admin)]$ systemctl enable openstack-nova-novncproxy.service
ln -s '/usr/lib/systemd/system/openstack-nova-novncproxy.service' '/etc/systemd/system/multi-user.target.wants/openstack-nova-novncproxy.service'


[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-consoleauth.service
[root@dfw02 ~(keystone_admin)]$ systemctl start openstack-nova-novncproxy.service


[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-consoleauth.service
openstack-nova-consoleauth.service - OpenStack Nova VNC console auth Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-consoleauth.service; enabled)
   Active: active (running) since Thu 2014-03-13 19:14:45 MSK; 20min ago
 Main PID: 14679 (nova-consoleaut)
   CGroup: /system.slice/openstack-nova-consoleauth.service
           └─14679 /usr/bin/python /usr/bin/nova-consoleauth --logfile /var/log/nova/consoleauth.log

Mar 13 19:14:45 dfw02.localdomain systemd[1]: Started OpenStack Nova VNC console auth Server.
[root@dfw02 ~(keystone_admin)]$ systemctl status openstack-nova-novncproxy.service
openstack-nova-novncproxy.service - OpenStack Nova NoVNC Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-novncproxy.service; enabled)
   Active: active (running) since Thu 2014-03-13 19:14:58 MSK; 20min ago
 Main PID: 14762 (nova-novncproxy)
   CGroup: /system.slice/openstack-nova-novncproxy.service
           ├─14762 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/
           └─17166 /usr/bin/python /usr/bin/nova-novncproxy --web /usr/share/novnc/

Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: 127.0.0.1: Path: '/websockify'
Mar 13 19:23:54 dfw02.localdomain nova-novncproxy[14762]: 20: connecting to: 127.0.0.1:5900
Mar 13 19:23:55 dfw02.localdomain nova-novncproxy[14762]: 19: 127.0.0.1: ignoring empty handshake
Mar 13 19:24:31 dfw02.localdomain nova-novncproxy[14762]: 22: 127.0.0.1: ignoring socket not ready
Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Plain non-SSL (ws://) WebSocket connection
Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Version hybi-13, base64: 'True'
Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: 127.0.0.1: Path: '/websockify'
Mar 13 19:24:32 dfw02.localdomain nova-novncproxy[14762]: 23: connecting to: 127.0.0.1:5901
Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 26: 127.0.0.1: ignoring empty handshake
Mar 13 19:24:37 dfw02.localdomain nova-novncproxy[14762]: 25: 127.0.0.1: ignoring empty handshake
Hint: Some lines were ellipsized, use -l to show in full.

[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 6080
tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN      14762/python
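
With consoleauth and novncproxy running, the console URL for an instance can also be requested via CLI; it should point at novncproxy_base_url (a sketch, using instance VF20RS03 mentioned above):

[root@dfw02 ~(keystone_admin)]$ nova get-vnc-console VF20RS03 novnc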


*********************************
Compute  dfw01 - 192.168.1.137
*********************************

Update  /etc/nova/nova.conf:

vnc_enabled=True
novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.137

# systemctl restart openstack-nova-compute
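
To confirm that instances' VNC servers are listening on the Compute node itself (qemu-kvm binds one port per instance starting at 5900, matching vncserver_listen; the grep pattern is just an illustration):

# netstat -lntp | grep ':59'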

Finally :-

[root@dfw02 ~(keystone_admin)]$ systemctl list-units | grep nova
openstack-nova-api.service           loaded active running   OpenStack Nova API Server
openstack-nova-conductor.service     loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service   loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-novncproxy.service    loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service     loaded active running   OpenStack Nova Scheduler Server


[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-scheduler   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54
nova-conductor   dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:54
nova-compute     dfw01.localdomain                    nova             enabled    :-)   2014-03-13 16:56:45
nova-consoleauth dfw02.localdomain                    internal         enabled    :-)   2014-03-13 16:56:47

[root@dfw02 ~(keystone_admin)]$ neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+
| id                                   | agent_type         | host              | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------+-------+----------------+
| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent           | dfw02.localdomain | :-)   | True           |
| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-)   | True           |
| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent         | dfw02.localdomain | :-)   | True           |
| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-------------------+-------+----------------+

To manage remotely, run from the remote box :-

# ssh -L 6080:127.0.0.1:6080 -N -f -l root 192.168.1.127


Users console views :-

Admin Console views :-

Starting new instance on Compute Node

[root@dallas2 ~]# service openstack-nova-compute status -l
Redirecting to /bin/systemctl status  -l openstack-nova-compute.service
openstack-nova-compute.service - OpenStack Nova Compute Server
   Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
   Active: active (running) since Fri 2014-03-21 10:01:30 MSK; 31min ago
 Main PID: 1702 (nova-compute)
   CGroup: /system.slice/openstack-nova-compute.service
           ├─1702 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
           └─3531 /usr/sbin/glusterfs --volfile-id=cinder-volumes012 --volfile-server=192.168.1.130 /var/lib/nova/mnt/1c9688348ab38662e3ac8fb121077d34

Mar 21 10:28:15 dallas2.localdomain sudo[8180]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link add qvbf6f65fcd-23 type veth peer name qvof6f65fcd-23
Mar 21 10:28:15 dallas2.localdomain sudo[8186]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvbf6f65fcd-23 up
Mar 21 10:28:15 dallas2.localdomain sudo[8189]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvbf6f65fcd-23 promisc on
Mar 21 10:28:15 dallas2.localdomain sudo[8192]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvof6f65fcd-23 up
Mar 21 10:28:15 dallas2.localdomain sudo[8195]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvof6f65fcd-23 promisc on
Mar 21 10:28:15 dallas2.localdomain sudo[8198]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qbrf6f65fcd-23 up
Mar 21 10:28:15 dallas2.localdomain sudo[8201]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf brctl addif qbrf6f65fcd-23 qvbf6f65fcd-23
Mar 21 10:28:15 dallas2.localdomain sudo[8204]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvof6f65fcd-23 -- set Interface qvof6f65fcd-23 external-ids:iface-id=f6f65fcd-2316-4d34-b478-436d6c51d3aa external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:8a:cd:db external-ids:vm-uuid=773e454d-675a-4f93-98db-12a1a2812e58
Mar 21 10:28:15 dallas2.localdomain ovs-vsctl[8206]: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl -- --may-exist add-port br-int qvof6f65fcd-23 -- set Interface qvof6f65fcd-23 external-ids:iface-id=f6f65fcd-2316-4d34-b478-436d6c51d3aa external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:8a:cd:db external-ids:vm-uuid=773e454d-675a-4f93-98db-12a1a2812e58
Mar 21 10:28:15 dallas2.localdomain sudo[8220]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf tee /sys/class/net/tapf6f65fcd-23/brport/hairpin_mode

[root@dallas2 ~]# ovs-vsctl show
3e7422a7-8828-4e7c-b595-8a5b6504bc08
    Bridge br-int
        Port "qvo372fd13e-d2"
            tag: 1
            Interface "qvo372fd13e-d2"
        Port br-int
            Interface br-int
                type: internal
        Port "qvof6f65fcd-23"
            tag: 1
            Interface "qvof6f65fcd-23"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvod0e086e7-32"
            tag: 1
            Interface "qvod0e086e7-32"
        Port "qvob49ecf5e-8e"
            tag: 1
            Interface "qvob49ecf5e-8e"
        Port "qvo756757a8-40"
            tag: 1
            Interface "qvo756757a8-40"
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.140", out_key=flow, remote_ip="192.168.1.130"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.0.0"
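
A quick way to confirm the GRE tunnel port on either node (a sketch; the port name follows the output above):

# ovs-ofctl show br-tun | grep gre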

On Controller :

[root@dallas1 ~(keystone_boris)]$ nova list
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                    |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| 690d29ae-4c3c-4b2e-b2df-e4d654668336 | UbuntuSRS007 | SUSPENDED | None       | Shutdown    | int=10.0.0.6, 192.168.1.103 |
| 9c791573-1238-44c4-a103-6873fddc17d1 | UbuntuTS019  | SUSPENDED | None       | Shutdown    | int=10.0.0.5, 192.168.1.107 |
| 3c888e6a-dd4f-489a-82bb-1f1f9ce6a696 | VF20RS017    | SUSPENDED | None       | Shutdown    | int=10.0.0.4, 192.168.1.102 |
| 9679d849-7e4b-4cb5-b644-43279d53f01b | VF20RS024    | ACTIVE    | None       | Running     | int=10.0.0.2, 192.168.1.105 |
| 773e454d-675a-4f93-98db-12a1a2812e58 | VF20RS027    | ACTIVE    | None       | Running     | int=10.0.0.7, 192.168.1.106 |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+

References

1. https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/
2. http://bderzhavets.blogspot.com/2014/03/tuning-neutron-dhcp-agent-dnsmasq-on.html