Why the Hype of Hyperconverged?

August 8, 2016 Data Center

Unless you have been living under a rock for the past 18-24 months, you have probably noticed that hyperconverged infrastructure (HCI) has become more prevalent in conversations about solutions for application environments.

The panacea of the software-defined data center has taken the form of appliance-based nodes that deliver on the promise of operational and cost efficiencies. These new form factors tightly integrate network, compute, storage, and virtualization resources through software rather than traditional hardware-based integrations. This is an evolution from the traditional (dare I say legacy) three-tier server-network-storage architecture, as illustrated below:

Storage Architecture Comparison

At a quick glance, you may have noticed that a couple of tiers of complexity have been removed from the architecture and that storage is now distributed among the individual compute nodes. Pretty slick. But it’s not only what has been removed that is compelling; it’s how it was removed: through software.

I’ve long been a proponent of virtualization (who isn’t?) and have been using it in production environments for over ten years. The ability to abstract and schedule physical resources to sustain multiple application workloads on a highly available platform changed the game. It was disruptive. Need to deploy a new server for a last-minute request? Gone are the days of rummaging through the server closet to patch together enough physical parts to lay down an OS and bring it up on the network. Today you are just a few clicks away from that server. But these virtualization technologies were layered on top of existing infrastructure topologies that were never purpose-built for virtualization, merely adequate to sustain it. They had to coexist on the same infrastructure with bare-metal workloads. They were beholden to the features and functions of three independent and loosely integrated systems, which limited their true potential and increased operational complexity.

Enter Hyperconverged.

Some of the early HCI vendors started their journey as early as 2009, during the boom of server virtualization adoption. These visionaries saw the roadblocks of traditional architectures as well as their complexities, and they set out to solve them with software running on commodity servers. Why does this distinction matter? When your infrastructure is unified on a single software platform, you can effectively control every part of it, and every point in between, to deliver consistent performance to your applications and users. There are no longer on-ramps and off-ramps between server, network, and storage. An end-to-end software design also enables operational simplicity, as administrators only have to manage a single platform for version control and daily operations. Moreover, these vendors now control the data path, which allows for greater data optimization through natively integrated deduplication, compression, and data protection technologies. The infrastructure and the hypervisor are now one and the same.
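To make that data-path point concrete, here is a minimal, illustrative sketch of inline deduplication and compression: data is split into fixed-size blocks, each block is fingerprinted with a cryptographic hash, and only previously unseen blocks are compressed and stored. This is not any vendor’s actual implementation; the BlockStore class, the 4 KB block size, and the method names are all hypothetical.

```python
import hashlib
import zlib


class BlockStore:
    """Toy content-addressed block store illustrating inline dedup plus compression."""

    def __init__(self, block_size: int = 4096):
        self.block_size = block_size
        self.blocks = {}      # fingerprint -> compressed block payload
        self.refcounts = {}   # fingerprint -> number of references to that block

    def write(self, data: bytes) -> list[str]:
        """Split data into fixed-size blocks; store each unique block only once."""
        fingerprints = []
        for offset in range(0, len(data), self.block_size):
            block = data[offset:offset + self.block_size]
            fp = hashlib.sha256(block).hexdigest()   # content fingerprint
            if fp not in self.blocks:                # new content: compress and keep it
                self.blocks[fp] = zlib.compress(block)
            self.refcounts[fp] = self.refcounts.get(fp, 0) + 1
            fingerprints.append(fp)
        return fingerprints                          # the "recipe" for reassembling the data

    def read(self, fingerprints: list[str]) -> bytes:
        """Reassemble the original data from its block recipe."""
        return b"".join(zlib.decompress(self.blocks[fp]) for fp in fingerprints)


if __name__ == "__main__":
    store = BlockStore()
    image = b"identical virtual machine image contents" * 1024
    recipe_a = store.write(image)   # first copy stores the blocks
    recipe_b = store.write(image)   # second copy only adds references
    assert store.read(recipe_a) == image
    print(f"Unique blocks stored: {len(store.blocks)} for two logical copies")
```

Writing the same virtual machine image twice consumes roughly the space of one copy plus the block recipe, which is the kind of efficiency a platform can claim when it owns the entire data path.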

New features are now baked natively into the platform, such as the ability to clone multiple machines in a matter of seconds, integrated backup and disaster recovery to protect workloads both locally and remotely, self-service portals for deployment, and REST APIs for continuous integration and continuous delivery. And this is just the beginning.
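As an illustration of that automation angle, the sketch below shows what driving such a platform from a CI/CD pipeline might look like: a short script that authenticates to a hypothetical REST endpoint and clones a template VM for a test run. The base URL, paths, token, and payload fields are invented for illustration and will differ for any real HCI product’s API.

```python
import requests

# Hypothetical HCI management endpoint and credentials -- placeholders only.
BASE_URL = "https://hci-mgmt.example.com/api/v1"
API_TOKEN = "replace-with-a-real-token"

session = requests.Session()
session.headers.update({"Authorization": f"Bearer {API_TOKEN}"})


def clone_vm(template_name: str, clone_name: str) -> str:
    """Request a clone of a template VM and return the new VM's ID."""
    resp = session.post(
        f"{BASE_URL}/vms/{template_name}/clone",   # illustrative path, not a real API
        json={"name": clone_name},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["vm_id"]


if __name__ == "__main__":
    # For example: spin up a disposable environment for an integration-test stage.
    vm_id = clone_vm("app-server-template", "ci-build-1234")
    print(f"Cloned VM ready: {vm_id}")
```

A pipeline stage could call a script like this to stand up a fresh environment per build and tear it down afterward, which is the operational pattern the self-service and API features are meant to enable.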

That’s not to say that the traditional three-tier architecture is no longer valid. There are plenty of use cases for it. Take large-scale computing platforms and bare-metal platforms, for instance. Archival storage and object-based storage don’t have a home in the hyperconverged world, and workloads with large storage requirements but minimal compute requirements are not as cost-effective on HCI, since each node adds compute and storage together. But through the lens of hypervisor-based virtualization and line-of-business application support, hyperconverged works very well to deliver cloud economics and operational simplicity for the following use cases:

  • Virtual Server Infrastructure
  • Virtual Desktop Infrastructure
  • Cloud Computing
  • Enterprise / Line of Business Applications
  • Remote and Branch Office
  • Data Center Consolidation
  • Test and Development
  • Business Continuity and Disaster Recovery

By building infrastructure as a cohesive platform unified by a single software substrate, businesses can realize true operational efficiencies and cost reductions. You no longer need a myriad of vendors, products, training, and solutions to deliver a single business outcome for your applications and users. By embracing this architecture, you free your operational staff to focus on enabling the business at new and more constructive levels within the company.

Infrastructure becomes simple.

Infrastructure becomes invisible.


As a true Data Center guru, Rob Cox stays on top of the latest Data Center technologies and trends. As the Data Center Practice Manager, Rob works to ensure that ABS continuously provides the most cutting-edge core technology solutions.