Vultr says dependencies could ‘ruin’ the cloud – here’s why

  • Vultr has 32 GPU-focused data centers on six continents

  • CMO Kevin Cochrane argued dependencies associated with certain cloud environments could kill innovation

  • He pitched "composable" cloud infrastructure as an alternative

Even if you’re not that big into wireless, you’ve probably heard about open radio access networks (open RAN). The idea is to disaggregate the components of the network and make them interoperable so that service providers are free to mix and match pieces from different vendors.

Independent cloud computing company Vultr wants to bring the same concept to the cloud, with a specific eye toward squashing dependencies.

“That’s what can stifle innovation,” Vultr CMO Kevin Cochrane told Silverlinings.

Put in the most basic terms, dependencies are the components of an application or infrastructure that are required to just…make everything work. Think Jenga: the tower can’t stand if key blocks on the bottom are missing. And plenty has been written elsewhere about the need for dependency mapping to help avoid service interruptions.
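
For the sake of illustration, here’s a minimal sketch of what dependency mapping can look like in code: model each service as a node in a directed graph and compute the “blast radius” when one component fails. The service names and the graph itself are hypothetical, invented for this example rather than anything Vultr described.

```python
# Minimal dependency-mapping sketch: model services as a directed graph
# (edges point from a service to the things it depends on) and compute
# which services go down if one dependency fails. All names are hypothetical.

from collections import defaultdict

# service -> list of services it depends on (illustrative only)
DEPENDS_ON = {
    "web-frontend": ["api-gateway"],
    "api-gateway": ["auth-service", "billing-service"],
    "auth-service": ["managed-database"],
    "billing-service": ["managed-database", "vendor-metering-api"],
    "managed-database": [],
    "vendor-metering-api": [],  # a provider-specific dependency
}

def blast_radius(failed: str) -> set[str]:
    """Return every service that transitively depends on `failed`."""
    # Invert the graph: dependency -> services that rely on it.
    dependents = defaultdict(set)
    for svc, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents[dep].add(svc)

    impacted, stack = set(), [failed]
    while stack:
        node = stack.pop()
        for svc in dependents[node]:
            if svc not in impacted:
                impacted.add(svc)
                stack.append(svc)
    return impacted

if __name__ == "__main__":
    # Losing a block at the bottom of the tower topples everything above it.
    print(blast_radius("managed-database"))
    # -> {'auth-service', 'billing-service', 'api-gateway', 'web-frontend'}
```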

Dependencies become a real problem when enterprises build on functionality specific to a particular cloud environment and later want to move elsewhere.

“The moment you start introducing these dependencies, people get locked into your stack and then they get stuck,” Cochrane explained. While this isn’t a problem for hyperscalers and other providers, it is an issue for enterprises and others looking to achieve cost benefits by moving to the cloud. That’s in part what’s driving the repatriation movement.

“The reason why they didn’t get the ROI, the reason why they’re even considering moving back to the [on-premises] data center is they wound up getting locked into a whole set of ancillary services that they didn’t need, they were getting overcharged and overbilled,” he continued.

“That’s going to ruin the cloud in general if that continues,” said Cochrane.
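
One common hedge against that kind of lock-in is coding to open, widely implemented interfaces rather than provider-exclusive ones. The sketch below shows the familiar S3-compatible object storage pattern using boto3, with the endpoint supplied as configuration so the backing provider can be swapped out; the environment variable names and bucket are placeholders for illustration, not details from the article.

```python
# Sketch of coding to an open interface rather than a provider-specific one.
# Many independent clouds expose S3-compatible object storage, so the same
# client code can point at different providers by changing configuration.
# Endpoint, credential variables and bucket below are hypothetical placeholders.

import os
import boto3

def make_storage_client():
    # The provider is a deployment-time choice, not a code-level dependency.
    return boto3.client(
        "s3",
        endpoint_url=os.environ["OBJECT_STORE_ENDPOINT"],  # any S3-compatible host
        aws_access_key_id=os.environ["OBJECT_STORE_KEY"],
        aws_secret_access_key=os.environ["OBJECT_STORE_SECRET"],
    )

if __name__ == "__main__":
    s3 = make_storage_client()
    s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"portable bytes")
```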

What’s the alternative?

To be clear, Vultr does have a horse in this race. It sits in an interesting middle ground, smaller than a hyperscaler but with the global presence that smaller, regional GPU specialists lack. Founded in 2014, it currently has 32 data centers spread across six continents and offers access to a range of Nvidia GPUs, with services layered on top designed to let engineers spin up workloads in minutes.

That said, Cochrane argued that Vultr isn’t directly competing with either the hyperscalers or the small GPU providers. Instead, it’s simply “countering a narrative” perpetuated by the big guns that customers need an “all-encompassing set of cloud services.”

“We believe in a future where cloud infrastructure is truly composable and that there’s an entire ecosystem of open source, third-party cloud service providers that should be plug-and-play, mix-and-match and there should be no dependencies whatsoever,” he said, adding that the company also believes cloud infrastructure should have GPUs and artificial intelligence (AI) at its core.

Where does this land Vultr?

Today, Vultr's business yields around $150 million in annual recurring revenue.

Cochrane said Vultr thinks the cloud industry is entering a 10-year investment cycle during which cloud providers will shift primarily to GPUs. The company is aiming to capture “net new initiatives” and the associated spending from AI early movers. That should power Vultr’s growth for at least the next two years, he said.

Beyond that, he said the company believes AI will go mainstream and companies will start shifting their cloud spending to new providers who are reinventing the infrastructure stack. And in that world, Vultr thinks its technology platform will allow it to be an apex predator rather than a scavenger.

