NASA's first CTO on our cloud connected future


Chris C. Kemp is the Chief Strategy Officer of Nebula, Inc., a leading cloud computing and IaaS provider that helps enterprises deploy and manage OpenStack-based private clouds. Before founding Nebula, Chris served as NASA's first CTO, where he cofounded the OpenStack project.

His experience in both the public and private sectors gives Chris unique insight into the present and future of OpenStack and cloud computing in general. Today he shares his thoughts on a variety of topics: public/private collaboration in open source; how CTOs and CIOs can best leverage public, private, and hybrid clouds; and some of the ways cloud computing could eventually change the way we govern ourselves worldwide.

How does "NASA-scale" compare with large private enterprises?

NASA has a lot in common with large private enterprises.

First, like many large enterprises, NASA spends a considerable portion of its budget on IT. During my time as CIO at NASA Ames Research Center, and later as NASA CTO, we spent approximately 10% of our budget, nearly $2 billion annually, on IT. Like a large enterprise, some of this investment was considered strategic, or "mission enabling," and some of it was considered an operating expense.

Examples of operational (or "institutional," in NASA parlance) IT were email infrastructure, financial systems, and other enterprise applications. Examples of "mission enabling" infrastructure were High Performance Computing (HPC), the Planetary Data System (PDS), Tracking and Data Relay Satellites (TDRS), and the ground systems infrastructure for missions like Curiosity, Kepler, and the International Space Station. As in a large private enterprise, there was a lot of pressure to reduce operational IT costs to make room for strategic IT investments.



I believe that one of the biggest similarities between large enterprises and NASA is that both are far more heterogeneous than you can possibly imagine. NASA is the product of 10 field centers (like small air force bases or cities) collaborating and competing with each other over 50 years. The average bank, biotech company, or automotive company is the product of dozens (if not hundreds) of acquisitions, operating a highly federated and heterogeneous infrastructure. Large enterprises run hundreds if not thousands of applications.

Before IaaS, CIOs needed to manage physical or virtualized infrastructure.

Are there lessons that apply to both small enterprise clouds as well as big clouds like NASA's? What are some of the differences?

Three lessons. The first lesson is don't try to "boil the ocean." It's very tempting for a global CIO or "Chief Cloud Architect" to design and implement a comprehensive cloud strategy that addresses the entire portfolio of enterprise and mission applications... a multi-year strategy that modernizes and evolves their "legacy" infrastructure to cloud in three years. It's no accident that three years is the average tenure of an enterprise CIO. I've worked with hundreds of large enterprises over the past few years, so I have seen this pattern time and time again. The organizations that are successful stand up a private cloud quickly, learn, adapt, and evolve.

The second lesson is to focus on the mission applications. Mission applications are the new and strategic applications that define your enterprise. In a growing enterprise, these are the applications that are growing faster than all the rest, consuming infrastructure (storage and compute) at an ever-growing rate. Investing in (re)architecting these applications to run on low-cost commodity private cloud hardware... bursting into expensive public clouds only when necessary and worth the risk... will have the highest returns.
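To make the "burst when necessary" idea concrete, here is a minimal sketch of the kind of placement logic an operations team might write. It is purely illustrative: the PrivateCloud, PublicCloud, and place_workload names are hypothetical stand-ins, not part of OpenStack or any real SDK.

```python
# A toy sketch of "burst when necessary" placement logic.
# All names here (PrivateCloud, PublicCloud, place_workload) are
# hypothetical stand-ins, not part of OpenStack or any real SDK.

class PrivateCloud:
    """Stand-in for a low-cost, commodity private cloud."""
    def __init__(self, free_vcpus: int):
        self.free_vcpus = free_vcpus

    def launch(self, vcpus: int) -> str:
        self.free_vcpus -= vcpus
        return f"private: {vcpus} vCPUs"


class PublicCloud:
    """Stand-in for a more expensive public cloud used only for overflow."""
    def launch(self, vcpus: int) -> str:
        return f"public: {vcpus} vCPUs"


def place_workload(private, public, vcpus: int, sensitive: bool) -> str:
    # Prefer the low-cost private cloud whenever it has capacity.
    if vcpus <= private.free_vcpus:
        return private.launch(vcpus)
    # Burst to the public cloud only when it is worth the risk.
    if sensitive:
        raise RuntimeError("no private capacity left for a sensitive workload")
    return public.launch(vcpus)


if __name__ == "__main__":
    private, public = PrivateCloud(free_vcpus=64), PublicCloud()
    print(place_workload(private, public, vcpus=16, sensitive=True))    # stays private
    print(place_workload(private, public, vcpus=200, sensitive=False))  # bursts
```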

The third lesson is reward the teams that succeed. Large enterprises often have more than one "mission application." As a CIO or CTO, if you have the opportunity to partner with the leaders of one of these organizations, you should not only help them succeed, but you should reward them when they do. Make them a hero. This example, this story of success, is by far the most powerful tool you have to establish trust between your IT organization and the most important people at your company.

What are some of the challenges around intellectual property ownership that might naturally arise from collaboration between a public organization like NASA and a private enterprise like Google or Nebula?




They are significant... but open source development is a great conduit for collaboration. At NASA, I put together some fairly high profile public-private partnerships, including NASA’s partnership with Google. 

What made OpenStack possible was open source software. My justification for NASA’s participation was simple: It was in the taxpayer’s interest for a large community of contributors to coalesce around my project… that’s code that we wouldn’t need to write. 
I knew NASA's primary mission was to explore the solar system, and that as such it wouldn't be possible to fund my NASA Nebula project (and the team that developed Nova) indefinitely. Rackspace knew they were fundamentally focused on "Fanatical Support" and that they couldn't fund the development of a platform that could compete with Amazon Web Services alone. It made sense to collaborate, and contributing code to an open source project was the mode of that collaboration.

Do you see private OpenStack-based clouds as a replacement for, or a complement to, third-party offerings like Amazon Web Services (AWS) and Rackspace Cloud?

I see private OpenStack clouds as a complement to public cloud offerings like AWS, Azure, Google Cloud, and other OpenStack offerings such as HP Cloud or Rackspace. Initially, private clouds are a "stepping stone" for enterprises moving their applications into a cloud environment. When we first started working on the code that would later become OpenStack, our vision was to create a complete stand-alone public cloud, operated independently of any "hooks" into enterprise identity, storage, and networking environments. That vision has evolved: the Nebula One, for example, seamlessly integrates with enterprise Active Directory, enterprise storage, and enterprise networks.
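One practical consequence of treating private clouds as a complement rather than a replacement is that the same client code can target either, just by pointing at a different endpoint. Below is a minimal sketch using the openstacksdk Python library; the auth_url, credentials, and project names are placeholders, not details from the interview or from Nebula's product.

```python
# Minimal sketch, assuming openstacksdk is installed (pip install openstacksdk).
# The endpoint and credentials below are placeholders; pointing them at a
# private OpenStack cloud or a public OpenStack-based cloud is the only change.
import openstack

conn = openstack.connect(
    auth_url="https://cloud.example.com:5000/v3",  # placeholder Keystone endpoint
    project_name="demo",
    username="demo",
    password="secret",
    user_domain_name="Default",
    project_domain_name="Default",
)

# List the compute instances visible to this project.
for server in conn.compute.servers():
    print(server.name, server.status)
```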

The use of big data for sending more optimized coupons is pretty well covered, but that's not the whole story. Can you talk about some of the ways that the democratization of web-scale computing might benefit the planet as a whole?


I believe that every large established company, regardless of market, has a rapidly closing window of opportunity to reinvent itself before it is disrupted by a start-up built on the foundation of web-scale computing technologies. 

Imagine a small country that can use information from high resolution satellite imagery refreshed every single day to make better decisions about water allocation to prevent famine. Or a hospital where your doctor can use information about your DNA to help inform treatment options. Or a city council that uses telemetry from your car to help direct your tax dollars to roads and infrastructure projects that maximize your safety and productivity. Pick any company in any industry, and I can tell you how web-scale technologies could help them redefine their industry.

Are public clouds like AWS or Rackspace an option for organizations with sensitive data? How does an organization balance security with distributed storage and computing?


There are two factors at play here: the security of your application and the security of the cloud operator's infrastructure. In a public cloud you have control of the first, but not the second. In a private cloud, you control both. In a public cloud, you are relying on the cloud operator for a vast number of security controls, and you have no visibility into what's actually happening on the physical compute, storage, or network infrastructure. Public clouds are, by definition, multi-tenant infrastructure operated by a third party; in a private cloud, you retain that visibility and control.


With OpenStack, we focus a lot on the software side. Can you talk about open hardware a bit and how it has (or hasn't!) been a factor in Nebula's evolution?



Well, the funny thing is that hardware matters a great deal. Clouds are systems, not just software, after all. OpenStack's Nova (Compute) project has about 800 configuration options. Most of these options are designed to accommodate all of the different configurations of servers, storage, network topologies, CPUs, hypervisors, etc., and the infinite number of permutations of these systems. One of the reasons OpenStack is making consulting companies a fortune is that every cloud is a custom build… one that changes with each OpenStack release, every six months.

OpenStack is not going to get any less complex any time soon, nor will the number of vendors selling hardware decrease. 

On the extreme opposite end of the spectrum, you can buy a "hyperconverged" appliance, a single box from a single vendor, but then you are locked into that hardware.

Nebula strikes a balance. We believe in appliances, but our appliance plugs into "industry standard" servers from HP, Cisco, Dell, IBM, and SuperMicro. Additionally, we support an increasing number of enterprise storage appliances from companies like NetApp and SolidFire. Nebula is able to provide a consistent, reliable, secure private cloud (installed and running in a few hours) without locking you into servers and storage from a single vendor.

See the full series of OpenStack Kilo Summit speaker interviews.

Cofounder and CTO of the web development agency Illuminati Karate. Developer for Gold Plugins, a WordPress plugins directory.

1 Comment

More handwaving from Chris Kemp, but still no in-house tech docs defining the Nebula architecture and the various modules (including Nova, called out in this article). What were the other modules, Chris? Why no NASA tech docs to share with the world? Let's see some dates on those docs, with references to OpenStack please.

Here are the real origins of OpenStack. What are the chances that there were two distinct cloud architectures in the same time period at NASA that each independently defined a module named Open Stack? Not to mention that I was the NASA ARC Web Manager when the Nebula project launched, and never saw any tech docs defining the Nebula architecture in spite of being in the same IT org as the Nebula team. I'd love to be proven wrong, but based on my experiences at NASA, this seems to be a clear case of the Freakonomics of Gov Employees Gone Wild.

http://www.slideshare.net/meskey/opennasav20slideshare-large-file

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.