The Future of Enterprise Software Licensing

By Ken Phelan
Posted in Infrastructure, Virtualization
On September 15, 2017

Commercial software developers work hard to create a product that offers significant value to their customers. They know that the value they deliver in terms of hard savings, increased efficiencies, or lower risk will ultimately drive the price of their software.

Optimally, a licensing scheme should effectively capture some portion of the customer value as an equitable payment for that value delivered. I think that is only fair. If we follow this to its natural conclusion, we might conclude that the best licensing scheme would be one where we calculate the business impact of the software and pay the software company some percentage of that value. Unfortunately, this rarely works thanks to two other important criteria for licensing schemes. Licensing schemes must be both simple and predictable.

Licensing schemes are simple when they are based on a readily available metric. If a client needs to spend significant time or energy counting some uncountable variable, that friction simply represents a diminished value for the product. A direct calculation of the value delivered by a piece of software in a specific organization is not simple.

License payments must also be predictable. Commercial endeavors are managed with budgets. No client wants to buy a product whose cost carries significant randomness across multiple budget cycles.

And with that you have my three criteria for judging the effectiveness of a software licensing scheme. Is it fair, is it simple, and is it predictable?

Now that we have our criteria, let’s look at some traditional licensing and how it stacks up in today’s IT world.

Traditionally, the most common form of licensing is tied to devices. Two computers, two copies of Windows. Three computers, three copies. At the time, this seemed fair, simple, and predictable. It got a little more complicated when licensing servers. With today's multi-core architectures, a server isn't just a server anymore, so many licensing schemes have moved to a per-core cost.

Unfortunately, modern computing practices are breaking this model. End users now use multiple devices to reach multiple forms of corporate computing. I personally have a laptop, a Chromebook, an iPad, and an Android smartphone. I also have a thin client in my home office, a thin client in my NYC office, and two thin clients in my NJ office. I use these devices to run two different virtual desktops. Now, would anyone like to guess how many Windows licenses I'm consuming? Based on what I've told you, it would have to be a guess. Is it a Windows laptop? Some thin clients have Windows on them; some don't. Whatever the number is, is it fair, based on the value I'm getting from Windows? I'll tell you this: it certainly doesn't seem simple to me.

Virtualization presents another problem. Virtual machines are provisioned and de-provisioned on a regular basis. Resources like CPU cores are added to and removed from these machines dynamically based on need. What's required from a licensing standpoint? Again, not simple or predictable.

And if virtualization is difficult, these conversations become truly mind-bending as we move to the public cloud. Dynamic burst capacity is a significant part of the value behind the move to public cloud. Why should we be forced to license against some sort of arbitrary high-water mark? That hardly seems fair.
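To make the high-water-mark complaint concrete, here is a minimal sketch (the core counts and the per-core-hour price are entirely hypothetical, not drawn from any real vendor's price list) comparing what a bursty workload pays when licensed against its peak versus its actual usage:

```python
# Hypothetical hourly core consumption for one day: a steady 8 cores,
# with a 4-hour burst to 64 cores. All figures are illustrative.
hourly_cores = [8] * 20 + [64] * 4

PRICE_PER_CORE_HOUR = 0.05  # assumed rate, for illustration only

# High-water-mark licensing: pay for the burst maximum around the clock.
peak_cost = max(hourly_cores) * PRICE_PER_CORE_HOUR * len(hourly_cores)

# Usage-based licensing: pay only for core-hours actually consumed.
usage_cost = sum(hourly_cores) * PRICE_PER_CORE_HOUR

print(f"peak-based:  ${peak_cost:.2f}")
print(f"usage-based: ${usage_cost:.2f}")
```

Even in this toy case, the peak-based bill is several times the usage-based one; the shorter and taller the burst, the wider that gap grows, which is exactly the fairness problem described above.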

Alright, so we see a problem. What are some reasonable alternatives?

Let’s start with fair. What’s fair? First, an uncomfortable truth: software companies are, by and large, not charities. They continue to provide value in the face of cloud and device proliferation; in some cases, they’re providing even more value. Wringing our hands over how greedy commercial software companies can be is not the answer. Customers who need unlimited end-user devices and generous burst capacity shouldn’t have to pay for some theoretical high-water mark, but some recompense for that increased flexibility is in order.

What’s simple? Not counting anything at all is pretty simple. Publicly available numbers like employee counts are also simple. But a no-counting, wall-to-wall license is more than a pricing convenience: it represents a strategic alliance between customer and software company, and a fair price has to reflect that value on both sides.

Predictability is straightforward. Multi-year commitments give everyone a nice view down the road about how things are going to be.

As you can see, the kind of flexibility customers are looking for generally only comes with large, wall-to-wall, multi-year commitments. I think this leads to two stages of software licensing in many organizations: an initial tactical purchase just to make sure the software fits the customer’s needs, followed by a strategic purchase. It is going to be very hard to manage smaller tactical licensing schemes over the long haul in today’s dynamic environments.

Bottom line, times are changing and software licensing needs to change with them. However, these changes are going to force customers to be much more thoughtful about their software strategies as well.

Ken Phelan

Ken is one of Gotham’s founders and its Chief Technology Officer, responsible for all internal and external technology and consulting operations for the firm. A recognized authority on technology and operations, Ken has been widely quoted in the technical press, and is a frequent presenter at various technology conferences. Ken is the Chairman of the Wall Street Thin Client Advisory Council.