Weapons of MaaS Deployment

My Day with Canonical

I've been researching OpenStack deployment methods lately, so when I got an email from Canonical inviting me to check out how they deploy OpenStack using their Metal as a Service (MaaS) software on their fantastic Orange Box demo platform, I jumped at the opportunity. While I was already somewhat familiar with MaaS and Juju from the research for my book, The Official Ubuntu Server Book, I'd never seen them in action at this scale. Plus, a chance to see the Orange Box--a ten-server computing cluster and network stack that fits in a box about the size of an old desktop computer--was not something I could pass up.

We made all the necessary arrangements, and bright and early one morning, Dustin Kirkland showed up at my office with a laptop and the second-largest Pelican case I'd ever seen. My team sat down with him as he unpacked and explained a little bit about the Orange Box. Throughout the day, we walked through the MaaS and Juju interfaces and used them to bootstrap a few servers, which were then configured with Juju, Canonical's service orchestration project. By the end of the day, we had not only deployed OpenStack, but along the way we also had set up a Hadoop cluster and even a multi-node transcoding cluster that split transcoding tasks among the different nodes and converted a high-definition movie down to a more consumable size in no time. In this article, I'm going to introduce the basic concepts behind MaaS, highlight some of its more interesting new features, and point out a few tips I picked up along the way that you might find useful even if you don't use MaaS or Juju.

It's Orange. It's a Box. It's an Orange Box.

Figure 1: The Orange Box

It's hard to start a discussion about a Canonical MaaS demo without discussing the Orange Box, just because it's so cool. I'm not going to spend too much time on it though: first, because it's already gotten a good deal of coverage in other news outlets (Ars Technica in particular had a great write-up about it); and second, because while cool, the hardware is still just a demo platform whose purpose is to showcase how MaaS works on real hardware you can see and touch. There are ten individual servers inside the Orange Box with the following specs:

  • Intel NUC vPro 4"x4" board
  • 4-core i5 CPU
  • 16GB RAM
  • 120GB SSD drive (master node has a 2TB spinning drive for bulk storage)
  • Gigabit NIC
  • 32GB flash disk to act as a secondary drive

TranquilPC built the machine for Canonical, so if you want one of your own, you can order something similar through them. The box also contains a gigabit switch for communication between nodes, and network and USB ports are exposed to the outside. The first node acts as a master for the rest and hosts a few service VMs that support them. In particular, the master node runs the MaaS server and also acts as a router and a Squid-based package proxy. That way, the team can run through a demo ahead of time on a fast Internet connection and pull down any package updates released since the last run. All of the remaining nodes pull packages through that proxy, which greatly reduces the external bandwidth needed when it comes time for any real demos. Kirkland told me that in the past he's even tethered the whole Orange Box off of his phone without any problems, thanks to the local package cache.
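You don't need an Orange Box to borrow that trick. On any Debian or Ubuntu client, pointing apt at a Squid-based package cache takes a one-line configuration file. Here's a minimal sketch, assuming a hypothetical proxy address (squid-deb-proxy, one common choice for this job, listens on port 8000 by default):

    # /etc/apt/apt.conf.d/30proxy on each client node
    # Route all apt HTTP traffic through the master node's package cache.
    # 10.14.4.1 is a hypothetical address for the proxy host.
    Acquire::http::Proxy "http://10.14.4.1:8000";

With that in place, the first node to request a package warms the cache, and every node after it pulls the same package over the local network.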

Center of MaaS

I've done quite a bit of work in the past on PXE servers, and I've even written a number of articles on the subject for Linux Journal. Even back when servers still had CD-ROM drives, it made a lot more sense to boot them from the network to install the OS than to carry a CD from server to server. Over the years, I've developed what I thought were pretty sophisticated PXE systems, with menus and automated preseed or kickstart environments that could install the base OS strictly over the network. Metal as a Service (MaaS) takes the concept of PXE booting at scale and runs with it. A classic PXE system might still require you to get a console on each individual server, boot it, and choose the OS--something that can only scale so far when you have tens or hundreds of servers to configure at the same time. MaaS takes that classic PXE preseed/kickstart server software and adds the ability to keep track of every system that attempts to boot over the network, remotely control server power through a number of common remote power management systems, and choose which OS to put on which servers, all from a single web UI.
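For contrast, here's roughly what that classic approach looks like: a single pxelinux configuration file that hands every booting machine an automated Ubuntu installer. The kernel/initrd paths and preseed URL below are hypothetical examples:

    # /var/lib/tftpboot/pxelinux.cfg/default -- a minimal classic PXE setup
    # (the file paths and preseed URL are hypothetical examples)
    DEFAULT ubuntu-auto
    LABEL ubuntu-auto
        KERNEL ubuntu-installer/amd64/linux
        APPEND initrd=ubuntu-installer/amd64/initrd.gz auto=true priority=critical url=http://10.0.0.5/preseed/base.cfg

Everything MaaS adds--inventory, power control, per-server OS selection--sits on top of this same underlying mechanism.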

Figure 2: MaaS node management page

To bootstrap a MaaS environment, you install Ubuntu on a server and then install the maas package, which pulls in all of the PXE, DHCP, DNS and other services MaaS needs to work. Once MaaS is running, you can log in to the web interface (MaaS uses OAuth for authentication) and start the configuration. After MaaS is in place, when a machine on the network attempts to PXE boot, MaaS picks up the request and has the server boot a special ephemeral OS. That ephemeral OS collects all sorts of information about the server and its hardware, reports it back to MaaS, and then shuts down the server. At that point, MaaS has the server in its inventory, and you can select a particular server and choose which OS you want installed. MaaS supports all current versions of Ubuntu as you might expect, but perhaps surprisingly, it also supports CentOS and Windows, with SuSE support on the way. Once you select the OS you'd like installed on a server, MaaS can use its support for generic IPMI, HP Moonshot, AMT, and even virsh to power on the server, PXE boot it, and provide it with the proper install image.
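Getting that far takes surprisingly few commands. A minimal sketch, assuming the MaaS 1.x packages that shipped around the 14.04/14.10 era:

    # Install MaaS along with the PXE, DHCP, DNS and related services
    sudo apt-get install maas

    # Create the first admin account for the web UI
    sudo maas-region-admin createadmin

    # Then browse to http://<your-maas-server>/MAAS/ and log in

From there, the remaining setup--importing boot images, enabling DHCP management on a network interface--happens in the web UI.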

Figure 3: MaaS install image selection page

At this point, MaaS takes an interesting approach I'd like to highlight, as it's something sysadmins should consider even without MaaS in place. In the past, MaaS installed the OS much like anyone else with a PXE server would--it provided a pre-programmed preseed/kickstart config file, and the server ran through the traditional OS install in an automated fashion. If you are like me, all you generally do during that install is lay down a base OS along with a configuration management system like Puppet or Chef; once the machine reboots, the configuration management system takes over. Of course, if you know you want just a base OS on a system every time, running through the full install wastes time. After all, when you boot a fresh image on a cloud like Amazon, it spins up within seconds. What MaaS does now is essentially store a collection of tarballs that reflect the base OS install for each distribution it supports. When you tell MaaS to install the OS, the machine boots from a custom boot image that partitions the disk, formats the file system, extracts the root file system tarball onto the new file system, configures GRUB, then reboots. As you might imagine, this cuts the install time down to seconds instead of minutes.
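Conceptually, that fast-path install boils down to something like the following shell sketch. To be clear, this is my simplified illustration of the image-based approach, not MaaS's actual installer, and the device name and tarball path are hypothetical:

    # Sketch of an image-based install: partition, format, unpack a
    # prebuilt root file system, install a bootloader, and reboot.
    DISK=/dev/sda                                 # hypothetical target disk
    parted -s "$DISK" mklabel msdos mkpart primary ext4 1MiB 100%
    mkfs.ext4 "${DISK}1"
    mount "${DISK}1" /mnt
    tar -xzf /images/trusty-root.tar.gz -C /mnt   # prebuilt base OS tarball
    for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
    chroot /mnt grub-install "$DISK"              # make the disk bootable
    chroot /mnt update-grub
    reboot

Extracting a tarball runs about as fast as the disk can write, which is why an image-based install finishes in seconds where a package-by-package install takes minutes.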

New MaaS Features in 14.10

While MaaS has been around for a couple of years now, it's under active development, and a lot has changed both under the hood and in the user interface. The latest 14.10 release adds a number of new features, including:

  • A new icon that reflects the power state of each node in MaaS's inventory
  • More information collected about each server, including the number of cores, amount of RAM, and disk details (this inventory is also scriptable, as sketched after this list)
  • The ability to sort your inventory by any of these fields
  • A new "acquired" state that flags a server as being owned by a particular MaaS user
  • CentOS and Windows OS support
  • Event logs stored in MaaS for node power on and power off
  • Basic network configuration for nodes, including assigning nodes to particular VLANs
  • More overall status on the default dashboard
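
That richer inventory is reachable from the maas command-line client too, which drives the same API as the web UI. A sketch, assuming the 1.x CLI syntax of that era (your API key is listed under your user preferences in the web UI; the server URL is a placeholder):

    # Log in to the MaaS API with the key from the web UI
    maas login admin http://maas.example.com/MAAS/api/1.0/ $API_KEY

    # Dump the node inventory--cores, RAM, power state and all--as JSON
    maas admin nodes list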

Figure 4: MaaS server inventory in 14.10

Now that the 14.10 release is out, work will begin on backporting its features to 14.04, so if you are on that release, you won't be left out. In future releases, MaaS is going to focus on some of the more challenging network configuration problems, including IPv6 and interface bonding.

MaaS Deployment of OpenStack with Landscape

One of the more interesting new features I saw during the demo was the integration between MaaS, Juju, and Landscape. After MaaS is done bootstrapping a server, Juju can take over to make it run particular types of software. Juju is Canonical's "service orchestration" software: with it, you configure servers using pre-packaged Juju "charms" that install and configure software like Hadoop, MySQL, Postgres, Apache, Nginx, or any number of other popular services. Landscape has traditionally been Canonical's server management software--a kind of counterpart to Red Hat's Satellite--that lets you register, monitor, and manage large groups of servers and keep track of their packages from a web interface.
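To give you a flavor of the Juju side, here's the classic Juju 1.x workflow against a MaaS environment. The MaaS server address and OAuth key in environments.yaml are placeholders for your own:

    # ~/.juju/environments.yaml (address and key are placeholders)
    # environments:
    #   maas:
    #     type: maas
    #     maas-server: 'http://10.14.4.1/MAAS/'
    #     maas-oauth: '<your-maas-api-key>'

    juju bootstrap                      # MaaS powers on and installs a node for Juju
    juju deploy mysql                   # each charm claims a fresh node from MaaS
    juju deploy wordpress
    juju add-relation wordpress mysql   # wire the two services together
    juju expose wordpress
    juju status                         # watch the nodes come up

Each "juju deploy" asks MaaS for an available machine, so charm deployments draw down the same inventory you see in the MaaS web UI.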

In the demo, I saw a new beta tab added to Landscape specifically for OpenStack deployments. In this interface, as long as you had at least five available machines in MaaS (to meet OpenStack's requirements), each with more than one disk (to meet Ceph's requirements), you could configure and deploy a full OpenStack environment directly from the web UI in a few minutes. Within the interface, you could configure Open vSwitch and choose, based on your inventory in MaaS, which nodes would handle compute, networking, and object storage. Then you tell it to deploy, and you soon see a full OpenStack environment spring to life.

While I also saw MaaS and Juju combine to spawn Hadoop and movie-transcoding clusters, Elasticsearch, and a number of other smaller services that day, seeing something as complicated as a complete OpenStack install, including Ceph block storage on the back end (neither of which is easy or fast to set up from scratch), spawn so quickly and easily was impressive. It makes the MaaS/Juju/Landscape environment something you should evaluate if you are looking to deploy software like this at scale.

Conclusion

I'm someone who likes to have a large amount of control over my systems. I spend a fair amount of time working with my configuration management system to control what software and files get installed on a machine post-install. I've written my own PXE bootstrap systems from scratch, and in general, I like to do things myself when possible. That said, after seeing all of the potential in a system like MaaS, and in particular seeing it work in the flesh on a cluster of ten servers sitting on a tabletop, the next time I need to bootstrap servers at scale, I'm going to seriously consider replacing one of my homegrown solutions with MaaS. In the meantime, I'm definitely inspired to replace my preseed bootstrap process with some sort of tarball deployment system after seeing how dramatically it cuts down bootstrap times.

Resources

TranquilPC Orange Box: http://www.tranquilpcshop.co.uk/ubuntu-orange-box/

Ubuntu MaaS: http://maas.ubuntu.com/

Ubuntu Juju: https://juju.ubuntu.com/

Canonical Landscape: https://landscape.canonical.com/

Kyle Rankin is a Tech Editor and columnist at Linux Journal and the Chief Security Officer at Purism. He is the author of Linux Hardening in Hostile Networks, DevOps Troubleshooting, The Official Ubuntu Server Book, Knoppix Hacks, Knoppix Pocket Reference, Linux Multimedia Hacks and Ubuntu Hacks, and also a contributor to a number of other O'Reilly books. Rankin speaks frequently on security and open-source software, including at BsidesLV, O'Reilly Security Conference, OSCON, SCALE, CactusCon, Linux World Expo and Penguicon. You can follow him at @kylerankin.
