The story of enterprise storage has a couple of significant plotlines. One of them proceeds in a fairly straight line: cost per megabyte has declined steadily for the last three decades. The other is, in contrast, a long and twisting road. The seat of storage management has moved from server to storage subsystem to network controller to virtual storage controller. And it’s not at all clear that the wandering is at an end.

Stopping at the controller

The latest stop for storage intelligence is in the controller of a hyper-converged infrastructure. In this scenario, your storage tiers find themselves under the programmatic command of a controller that is closely coupled to server and network controllers. Everything works together, and everything is virtualized for greatest efficiency and flexibility.

The question for managers is how best to select and deploy storage hardware that they will not directly control. The answer requires looking at storage at a deeper level.

Common tiering

It’s common to have a dedicated application delivery architecture that features tiered storage. The tiers are typically for:

  • Immediate transactional data
  • Online analysis
  • Short-term archive
  • Long-term archive

Data moves from tier to tier according to application needs and a data movement schedule. It’s a scheme that balances performance and cost, and it works quite well in most cases.
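To make the scheduling idea concrete, here is a minimal sketch of an age-based tiering policy in Python. The tier names and age thresholds are illustrative assumptions, not any vendor’s defaults; in practice, this logic lives inside the storage controller.

```python
# A minimal sketch of schedule-driven tier migration. Tier names and age
# thresholds are illustrative assumptions, not a product's defaults.

# (tier name, maximum data age in days before moving down; None = final tier)
TIER_POLICY = [
    ("immediate-transactional", 1),
    ("online-analysis", 30),
    ("short-term-archive", 365),
    ("long-term-archive", None),
]

def target_tier(age_days: float) -> str:
    """Return the tier a data set belongs in, given its age."""
    for tier, max_age in TIER_POLICY:
        if max_age is None or age_days <= max_age:
            return tier
    return TIER_POLICY[-1][0]

# A 90-day-old data set lands in the short-term archive tier.
assert target_tier(90) == "short-term-archive"
```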

Supporting multiple applications

When multiple applications have to be considered, one of the first complications is cost—how much of the more expensive first-tier storage has to be purchased to properly serve all the applications hosted on the architecture?
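A back-of-the-envelope calculation shows why this adds up quickly. Here is a sketch with entirely hypothetical applications and capacity figures; only the arithmetic is the point.

```python
# Hypothetical workloads: total data in TB, and the fraction of that data
# that must live on expensive tier-one storage. All figures are made up.
apps = {
    "orders-oltp": (4.0, 0.50),   # half of a hot transactional store
    "reporting":   (20.0, 0.10),  # a slice of working-set data
    "log-archive": (60.0, 0.02),  # almost entirely cold
}

# Tier-one demand is the sum of each application's hot fraction.
tier_one_tb = sum(total * hot_fraction for total, hot_fraction in apps.values())
print(f"Tier-one capacity required: {tier_one_tb:.1f} TB")
# -> Tier-one capacity required: 5.2 TB
```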

All solid-state?

Today, tier-one storage is often built from solid-state drives (SSDs), and that’s good news, because it eliminates one of the performance parameters that once gave database architects nightmares. SSDs don’t make you wait for the right block to rotate under the read/write head before an operation can be completed.

SSDs simplify and improve performance, so why not just make everything an SSD tier? It all gets back to that issue of cost.
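A rough comparison makes the cost issue plain. The per-terabyte prices below are assumptions for illustration only; substitute your own quotes.

```python
# Assumed prices in $/TB -- hypothetical, for illustration only.
SSD_PER_TB = 100.0  # tier-one (SSD)
HDD_PER_TB = 20.0   # capacity tiers (HDD)

# Hypothetical capacities per tier, in TB.
capacity_tb = {"tier1": 5.2, "tier2": 20.0, "tier3": 30.0, "tier4": 60.0}

# Tiered design: SSD only where it is needed.
tiered = capacity_tb["tier1"] * SSD_PER_TB + sum(
    tb for name, tb in capacity_tb.items() if name != "tier1"
) * HDD_PER_TB

# All-SSD design: every terabyte at tier-one prices.
all_ssd = sum(capacity_tb.values()) * SSD_PER_TB

print(f"tiered: ${tiered:,.0f}  all-SSD: ${all_ssd:,.0f}")
# -> tiered: $2,720  all-SSD: $11,520
```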

Advice for running multiple applications on a single hyper-converged infrastructure: don’t

Hyper-converged architectures use a dedicated storage controller that operates in concert with the server hypervisor and network controller to automate building, deploying, and tearing down instances of virtual servers. Now, here’s something to consider if you want to get the best efficiency from your hyper-converged architecture.

Don’t try to run multiple application types on a single hyper-converged infrastructure. When you keep the application type consistent across the infrastructure, the ratio of storage tiers stays the same for every workload, and you can keep buying exactly the storage that type of application needs.
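As a sketch of the payoff, using an illustrative tier ratio for a hypothetical transactional profile: when one application type owns the infrastructure, scaling storage is simple multiplication, with no rebalancing across mixed workloads.

```python
# Illustrative tier ratio for a hypothetical OLTP workload -- an assumption,
# not a measured profile.
OLTP_TIER_RATIO = {"tier1": 0.50, "tier2": 0.30, "tier3": 0.15, "tier4": 0.05}

def storage_to_buy(total_tb: float, ratio: dict[str, float]) -> dict[str, float]:
    """Split a capacity purchase across tiers using one workload's ratio."""
    return {tier: total_tb * share for tier, share in ratio.items()}

# Growing the footprint from 20 TB to 40 TB doubles every tier in proportion.
print(storage_to_buy(40.0, OLTP_TIER_RATIO))
# -> {'tier1': 20.0, 'tier2': 12.0, 'tier3': 6.0, 'tier4': 2.0}
```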

It’s tempting to try a mix-and-match approach, but a bit of planning as you deploy your applications across the pieces of your infrastructure will ensure that you get the benefits you want on the storage tiers you can afford.