One Cloud, Many Providers: The OpenStack Interop Challenge

Rob Hirschfeld, Founder and CEO, RackN

At a fundamental level, OpenStack has yet to decide if it's an infrastructure product or an open software movement. Is there a path to be both? There are many vendors eager to sell you their distinct flavor of OpenStack; however, a lack of consistency between those flavors has frustrated users and operators. OpenStack faces internal and external competition if we do not address this fragmentation. Over the next few paragraphs, we'll explore the path the Foundation has planned to offer users a consistent core product while fostering its innovative community.

“OpenStack will only be as interoperable as the market demands”

SIDEBAR: How did we get here? It’s worth noting that OpenStack was structured as a heterogeneous vendor playground. At the inaugural “Austin” summit, when the project was just forming around NASA’s Nova and Rackspace’s Swift projects, monolithic cloud stacks were a very real threat. VMware and Amazon were the de facto standards, but closed and proprietary. The open alternatives, CloudStack (Cloud.com), Eucalyptus and OpenNebula, were too tied to single vendors or lacking in scale. Having a multi-vendor, multi-contributor project without a dictatorial owner was a critical imperative for the community, and it continues to be one of the most distinctive OpenStack traits.

Before we can discuss interoperability (interop), we need to define success for OpenStack, because interop is the means, not the end. My mark for success is when OpenStack has created a sustainable market for products that rely on the platform. In business plan speak, we'd call that a total addressable market (TAM). In practical terms, OpenStack is successful when businesses make the platform their first integration target, ahead of the cloud behemoths Amazon and VMware.

The apparent dominance of OpenStack in terms of corporate contribution and brand position does not translate into automatic long-term success. While apparently united under a single brand, intentional technical diversity in OpenStack has led to incompatibilities between different public and private implementations. While some of these issues are accidents of miscommunication, others stem from structural choices inherent in the project's formation. No matter the cause, they frustrate users.

Technical diversity was both a business imperative and a design objective for OpenStack during formation. In order to quickly achieve critical mass, the project needed to welcome a diverse and competitive set of corporate sponsors. The commitment to support multiple operating systems, hypervisors, storage platforms and networking providers has been essential to the project's growth and popularity. Unfortunately, it also creates combinatorial complexity and political headaches.

With all those variables, it’s best to think of interop as a spectrum. At one end of that spectrum is basic API compatibility; at the other is fully integrated operation, where an application could run site-unaware in multiple clouds simultaneously. Experience shows that basic API compatibility is not sufficient: there are significant behavioral impacts due to implementation details just below the API layer that must also be part of any interop requirement. Variations like how IPs are assigned and machines are initialized matter to both users and tools. Any effort to ensure consistency must go beyond simple API checks to validate that these behaviors are consistent.
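To make that concrete, here is a minimal sketch of a behavioral probe using the openstacksdk Python library. It boots the same tiny server on two clouds and checks that each assigns the server an IP address, a detail just below the API surface. The cloud, image, flavor and network names are assumptions for illustration; this is not part of any official test suite.

```python
import openstack

def check_server_gets_ip(cloud_name):
    """Boot a small server and verify it is assigned an IP address,
    a behavior just below the API layer that can vary across clouds."""
    conn = openstack.connect(cloud=cloud_name)  # reads clouds.yaml

    image = conn.compute.find_image("cirros")       # assumed image name
    flavor = conn.compute.find_flavor("m1.tiny")    # assumed flavor name
    network = conn.network.find_network("private")  # assumed network name

    server = conn.compute.create_server(
        name="interop-probe", image_id=image.id,
        flavor_id=flavor.id, networks=[{"uuid": network.id}])
    server = conn.compute.wait_for_server(server)

    try:
        # Identical create calls can still behave differently here: one
        # cloud may assign a fixed IP automatically, another may not.
        addresses = server.addresses or {}
        return any(addresses.values())
    finally:
        conn.compute.delete_server(server)

# Run the same probe against two clouds defined in clouds.yaml and
# compare the results: consistent behavior, not just matching APIs.
for cloud in ("vendor-a", "vendor-b"):
    print(cloud, check_server_gets_ip(cloud))
```

The point is not this particular check; it's that interop validation has to exercise behavior, not just endpoints.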

OpenStack enforces interop using a process known as DefCore, which vendors are required to follow in order to use the trademark “OpenStack” in their product name. The process is test-driven: vendors are required to pass a suite of capability tests defined in DefCore Guidelines to get Foundation approval. Guidelines are published on a six-month cadence and cover only a “core” part of OpenStack that DefCore has defined as the required minimum set. Vendors are encouraged to add and extend beyond that base, which can then lead DefCore to expand the core as it sees widespread adoption.
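To see what a Guideline actually requires, the sketch below downloads one and lists the required capabilities and must-pass tests per component. Guidelines are published as JSON in the openstack/interop repository (formerly openstack/defcore); the URL below is a placeholder and the field names (“platform”, “components”, “capabilities”) follow that schema as I understand it, so treat both as assumptions.

```python
import json
import urllib.request

# Placeholder location; adjust to wherever the current Guideline lives.
GUIDELINE_URL = (
    "https://opendev.org/openstack/interop/raw/branch/master/2016.08.json")

with urllib.request.urlopen(GUIDELINE_URL) as resp:
    guideline = json.load(resp)

# Components the platform as a whole must include (e.g. compute, object).
for component in guideline["platform"]["required"]:
    required_caps = guideline["components"][component]["required"]
    print(f"{component}: {len(required_caps)} required capabilities")
    for cap in required_caps:
        # Each capability maps to the Tempest tests a vendor must pass.
        tests = guideline["capabilities"][cap]["tests"]
        print(f"  {cap}: {len(tests)} must-pass tests")
```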

SIDEBAR: What is DefCore?  The name DefCore is a portmanteau of the committee's job to “define core” functions of OpenStack. The official explanation says “DefCore sets base requirements by defining 1) capabilities, 2) code and 3) must-pass tests for all OpenStack products. This definition uses community resources and involvement to drive interoperability by creating the minimum standards for products labeled OpenStack.” Fundamentally, it’s an OpenStack Board committee with membership open to the community. In very practical terms, DefCore picks which features and implementation details of OpenStack are required by the vendors; consequently, we’ve designed a governance process to ensure transparency and, hopefully, prevent individual vendors from exerting too much influence.  

By design, DefCore started with a very small set of OpenStack functionality. So small, in fact, that critical pieces like networking APIs were missing from the initial guideline. The goal for DefCore is to work through the community process to expand the core based on identified best practices and needed capabilities. Since OpenStack encourages variation, there will be times when we have to either accept or limit variation. Like any shared requirements guideline, DefCore becomes a multi-vendor contract between the project and its users.

Can this work? The reality is that Foundation enforcement of the brand using DefCore is a very weak lever. The real power of DefCore comes when customers use it to select vendors.

Your responsibility in the process is to demand compliance from your vendors. OpenStack interoperability efforts will fail if we rely on the Foundation to enforce compliance, because it's simply too late at that point. Think of the healthy multi-vendor WiFi environment: vendors often introduce products on preliminary specifications to get ahead of the market. For success, OpenStack vendors also need to be racing to comply with the upcoming guidelines. Only customers can create that type of pressure.

From that perspective, OpenStack will only be as interoperable as the market demands. That creates a difficult conundrum: without common standards, there's no market and OpenStack will simply become vertical vendor islands with limited reach. Success requires putting shared interests ahead of product.

That brings us full circle: does OpenStack need to be both a product and a community? Yes, it clearly does. The significant distinction for interop is that we are talking about the user community, not the developer community.
