On-premise servers or cloud: which is better for you?

You thought it was all about the cloud now? It’s getting a lot more complicated than that, explains Steve Ranger.

It’s perhaps getting hard to remember that, not so long ago, all servers were on-premise.

There wasn’t really any option other than to have a corner of the office hosting a server room full of hot, whirring boxes, each crammed with the essential applications and data needed to keep the business going.

But, over the past decade or so, these have been gradually disappearing from many offices and many businesses. The servers have been decommissioned and the apps and data shunted into the vast data centres owned by the cloud hyperscalers instead.

While the total number of data centres continues to climb, the number of on-premise data centres is falling. According to Synergy Research Group, on-premise data centres accounted for just 40% of the total in 2023 – that’s down from 60% just five years ago.

Looking ahead five years, hyperscale operators will account for over half of all capacity, while on-premise will drop to under 30%, the analysts predict.

According to tech analysts Omdia, server shipments will stand at 11.4 million units for the full year of 2023 – that’s down 19% from 2022. Part of the problem is lower demand from the enterprise, while servers are also lasting longer. The useful life of a server is at an all-time high, the analysts said, with the average enterprise now running seven-year-old servers and delaying refreshes.

However, this doesn’t mean that on-premise computing is on the way out. Some companies are deciding the cloud is no longer for them, and are taking servers back in-house instead. It turns out that deciding whether to choose an on-premise server or to go to the cloud is a more nuanced question than many may expect.

Here are some of the factors to take into account when making a decision between in-house servers and the cloud.

Think about the applications

Much more important than where the server is located is what you are going to do with it: it’s all about the applications and the data.

Many of the applications shifted off-premise and into the cloud are ones that offer little competitive advantage. It’s hard to see how running your own email system is going to help you outperform your rivals, for example. On the other hand, running your home-brewed email server might cause problems if you don’t maintain it well enough. It’s the same for many other business apps.

Another thing to consider is the state of the applications themselves. If they are relatively modern, it should be straightforward to port them to a ‘cloud native’ model, making migration easier; if they are old and need to be rebuilt to work in an on-demand scenario, or rely on other locally hosted apps or data to function, then keeping them on-premise may be the simpler option.

Something else to consider is the demand for the application. One of the big benefits of the cloud is autoscaling – the ability to rapidly increase the amount of computing power used by a service in the face of increased demand.

In the on-premise world, it might take weeks to increase the number of servers available for a particular app by the time you’ve ordered them and had them installed. In the cloud, if a service needs more horsepower it can be added immediately. For an e-commerce system that has some isolated peaks of demand, that on-demand option might make sense. But for an application that has predictable capacity requirements, on-premise servers could be a better, cheaper choice. 
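To make that contrast concrete, here is a minimal sketch in Python of the proportional scaling rule that cloud autoscalers, such as the Kubernetes Horizontal Pod Autoscaler, apply. It is illustrative only: the utilisation figures are invented, and in a real deployment this logic lives in platform configuration rather than in application code.

```python
import math

def desired_replicas(current_replicas: int, current_utilisation: float,
                     target_utilisation: float, max_replicas: int) -> int:
    """Proportional scaling rule of the kind the Kubernetes Horizontal Pod
    Autoscaler uses: run enough copies to bring utilisation back to target."""
    if current_utilisation <= 0 or target_utilisation <= 0:
        return current_replicas
    desired = math.ceil(current_replicas * current_utilisation / target_utilisation)
    return max(1, min(desired, max_replicas))

# An illustrative traffic spike: 4 instances running at 180% CPU against a 70% target.
print(desired_replicas(4, 1.80, 0.70, max_replicas=20))  # -> 11, provisioned in minutes, not weeks
```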

Don’t forget about the data

Different types of data will require different hosting. A large, live data set that analysts or developers are constantly interrogating or updating may be better stored on local servers, because that should mean lower latency and better response times. After all, the data doesn’t have to make a long round trip to the cloud.

“Some workloads and use cases require highly predictable latencies from the servers and it would call for an on-premise installation of servers,” said Manoj Sukumaran, principal analyst, data centre IT at Omdia. “Some of the edge computing workloads/use cases will fall under this category. Ensuring a predictable latency from a cloud server is a challenge as there are several networks involved.”
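One rough, illustrative way to see that difference is simply to time connections from wherever the workload runs. The Python sketch below measures TCP connect time, which puts a floor under every query’s round trip; the hostnames and port are placeholders, not real endpoints.

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, attempts: int = 5) -> float:
    """Return the median TCP connect time to host:port, in milliseconds."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    return sorted(samples)[len(samples) // 2]

# Placeholder hostnames: substitute a server on the local network and a
# cloud-hosted endpoint to compare the round trips your queries would pay.
for label, host in [("on-premise", "db.internal.example"), ("cloud", "db.cloud.example")]:
    try:
        print(f"{label}: {tcp_connect_latency_ms(host):.1f} ms")
    except OSError as err:
        print(f"{label}: unreachable ({err})")
```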

The type of data matters too: for reasons of security, you might be reluctant to hand over core data to a third party.

“The data security and sovereignty issues are a reason companies opt for on-premise deployment. Sometimes even regulatory frameworks make it mandatory for certain enterprise customers to have the sensitive data they are dealing with in a secure and fully controlled environment and it creates a need for on-premise compute and storage capacity,” said Sukumaran.

On-premise servers usually mean upfront costs. You’ll have to buy all that hardware, find somewhere to keep it, and someone to manage it. Plus, when it wears out or breaks down you’ll need to buy some more. One attraction of the cloud is that the upfront costs go away, but just as with any other kind of rental, the costs can mount up over time.  

Moving from the cloud to on-premise: the 37 Signals example

Tech company 37 Signals has been chronicling its shift away from the cloud and back to using on-premise servers over the past year and gives a useful insight into some of the factors in play.

In the middle of last year, it spent around $600,000 on new Dell servers to power its shift away from the cloud. That might seem like a lot – but it is a lot less than the $3.2 million it spent on cloud services the year before.

It calculates that it will save around $7 million in server expenses over the next five years as a result.
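Working backwards from those published figures gives a sense of how much of the bill sits outside the hardware itself. The back-of-envelope sketch below, in Python, assumes cloud spend would have stayed flat at the reported level; everything that is not one of the three reported figures is derived, not reported.

```python
years = 5
cloud_per_year = 3_200_000     # reported annual cloud spend
hardware_upfront = 600_000     # reported Dell server purchase
reported_saving = 7_000_000    # reported five-year saving estimate

cloud_total = cloud_per_year * years                            # $16.0M if spend stayed flat
implied_on_prem_total = cloud_total - reported_saving           # ~$9.0M
implied_other_costs = implied_on_prem_total - hardware_upfront  # ~$8.4M
print(f"implied non-hardware on-premise spend: ${implied_other_costs:,} "
      f"over {years} years (~${implied_other_costs // years:,} a year)")
```

On those assumptions, roughly $1.7 million a year of the implied on-premise total is not servers at all but hosting, bandwidth, power and people – a reminder that the purchase price is only one part of the comparison.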

“The shocking thing about buying your own hardware is realising both how cheap and how powerful it’s become,” said 37 Signals CTO David Heinemeier Hansson in an update on the project in December. “The progress in the last 4-5 years alone has been immense. This is one of the reasons that much of the cloud is getting to be a worse deal by the year.”

While some companies might question whether they have the in-house skills to undertake such a project, Hansson argues the skills are out there.

“Cloud as the default choice didn’t happen until maybe 2015. So for well over 20 years, companies have been operating hardware to run their applications. This isn’t some archaic knowledge that’s been lost to the ages. We might not know exactly how the pyramids were built, but we do still know how to connect a Linux machine to the internet.”

Sukumaran said that as a result, the market continues to evolve: “I would say the on-premise model is undergoing a change with the rapid adoption of IaaS services like HPE Greenlake, Dell APEX, [and] Lenovo Truscale. More and more enterprise customers are opting for such services because it provides them with a cloud-like flexibility in investments and at the same time provides the benefits of an on-premise capacity. There is still some amount of technical expertise needed to manage such IaaS infrastructure, but most of it is within the reach of the DevOps teams with vendor support.”

Steve Ranger

Steve Ranger is an award-winning journalist who writes about the intersection of tech, business and culture. In the past he was the Editorial Director at ZDNET and, before that, the Editor of silicon.com.
