The elephant in the cloud: Why telcos should avoid building for repatriation

I think it’s apt that the last blog of the year has a bit of a “year-end wrap up” feel to it. Of course, this year was all about telco passing the tipping point on the move to the public cloud; I mean, heck, Steve Saunders is even launching a whole new media site dedicated to the topic of cloud tech and telco (called “Silverlinings.” Cute.)

If I had to call out another topic that’s been on a low simmer in the background, it’s the question of just how cloud-native one should go. Telco journalists frequently point out the obvious risk of vendor lock-in with the hyperscalers, but the topic of “how good is the public cloud, really?” bubbled over into VC-land last year with a16z’s almost sacrilegious cloud blog. At first, I was a bit thankful the blog had hardly registered with the telco crowd. But then, this October blog post by a CTO, about how his email service company is moving from the public cloud BACK to on-prem, got into the feeds of telco folk and, well, I could feel everyone starting to light torches and call for the end of the public cloud.

So, yeah, let’s talk about the elephant in the cloud: should you build workloads in the public cloud with an eventual plan to move them back on-prem?

Here we go again

The CTO behind the famous blog is David Heinemeier Hansson, aka @DHH on Twitter. He’s known for his exuberant, intentionally controversial takes on tech topics. He’s plenty legit, with credentials that include creating the Ruby on Rails web-app framework and founding 37Signals (maker of Basecamp) and Hey, his new, web-based email client company trying to take on Gmail.

He shared the blog post with his 440,000-plus followers, and it quickly went viral and riled everyone up. I tossed in my two cents previously, and I’d like to circle back with some deeper ponderings. Mine wasn’t the only response; like I said, the post pretty much lit up the cloud community. (Two notable non-telco rebuttals are this How About Tomorrow podcast episode and this tweet thread from Simon Wardley, which is nothing short of epic.)

To summarize, DHH and the team at Hey concluded that the public cloud is more expensive than on-prem and no simpler to manage, so they’re taking their toys and going back to their own data center. A main point in the blog is that the public cloud is best suited for small companies, because at scale the public cloud is way too expensive if you’re just “renting computers.”

I could argue that maybe DHH doesn’t get the public cloud; or maybe he’s not using it right; or he’s providing an email service and you should discount his recommendations because it’s totally different from telco (it is). I’m not going to do any of that. Instead, I’m going to focus on his conclusion: that he should retreat and go back on-prem, which I think is wrong.

Building on the public cloud for “portability,” or the ability to go back on-prem later, forces you down a whole line of sub-optimal, platform-agnostic tech decisions. Designing for repatriation automatically limits you to lowest-common-denominator tools—common processors, common databases, third-party software packages. Architecting for repatriation cuts off your business from innovative software—basically the whole reason to use the public cloud in the first place. I advise telco execs all the time: if your plan for the public cloud is to eventually go back on-prem, then don’t bother moving your workloads at all. Stay on-prem. Conversations like this really tell me how much telco execs still don’t get the public cloud.

At the risk of repeating myself for the thousandth time: to take advantage of all the public cloud has to offer, you have to build applications that are native to THAT cloud. Does that lock you into that cloud? It does. Does it make it hard to move back on-prem or to another cloud? Yes. But to use an example from everyday life: you wouldn’t move houses by packing boxes, moving them, and then never unpacking them for fear of having to move again. No one does that. You unpack the boxes and live in the house. Does unpacking and setting up your home make it harder to move again? It does. But the whole reason you move to a new house, or in this case the public cloud, is TO USE IT. So yeah, kick off your shoes and make yourself at home. Maximize all the benefits of the public cloud.

Let’s take Graviton, AWS’s custom-built Arm-based chip, which is available only on servers in AWS data centers. Now in its third generation, Graviton chips just keep getting better—cheaper, faster, more efficient. If you take Mr. DHH’s advice, conclude that AWS provides just “rented servers,” and plan to move back on-prem eventually, then you would never use Graviton-powered machines. With that approach, here’s what you’d miss out on:

  1. Using Graviton1 (2018), you would have gotten about the same performance as x86 chips at AWS, at a lower cost. A nice start, but not earth-shattering.
  2. If you used Graviton1, then it would have been trivial to move to Graviton2 (2020), which delivered 40% better price/performance over comparable x86 chips. That’s 40% less compute needed, which means fewer servers and lower cost. It was a huge jump in performance, and it really drove the movement of workloads to Graviton.
  3. Then Graviton3 came out in 2022 and improved on Graviton2’s performance again, using up to 60% less energy for the same performance as comparable EC2 instances. You would have been able to reduce your compute needs and get another bump in cost savings, with hardly any work.

If you’re on-prem with your purchased servers, that’s three hardware generations in just four years you’d have to keep pace with. Graviton users could simply move workloads to the updated chip and reap all the benefits.
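To put a rough number on it, here’s a back-of-the-envelope sketch of that Graviton2 price/performance claim. The baseline spend is completely made up, and I’m reading “40% better price/performance” as “the same work for 1/1.4 of the cost”—treat it as an illustration, not a cost model:

```python
# Back-of-the-envelope sketch, NOT a real cost model.
# Assumptions (hypothetical): a $100k/month x86 compute bill, and the
# "40% better price/performance" claim for Graviton2 read as
# "the same work for 1/1.4 of the cost."
x86_monthly_cost = 100_000.0

graviton2_monthly_cost = x86_monthly_cost / 1.4  # same work, better price/perf
monthly_savings = x86_monthly_cost - graviton2_monthly_cost

print(f"Graviton2 monthly cost: ${graviton2_monthly_cost:,.0f}")  # ~$71,429
print(f"Monthly savings vs. x86: ${monthly_savings:,.0f}")        # ~$28,571
print(f"Savings over 4 years: ${monthly_savings * 48:,.0f}")      # ~$1.37M
```

Even with invented numbers, the point stands: a platform-agnostic architecture that can’t touch Graviton leaves that entire line of the spreadsheet at zero—and on-prem hardware can’t claw it back without a forklift refresh.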

To bring it back to telco, just look at Japan’s NTT Docomo, which has realized the power of using Graviton for its 5G Standalone Core. It proved that Graviton2 chips delivered a 72% (!) reduction in power consumption compared to its own servers.

I don’t know if Hey planned to move back on-prem all along, but if so, it means it wasn’t designing for the public cloud from the get-go. Hey would have designed the product with lowest-common-denominator tools and databases. If true, it was doomed from the start, and its public cloud costs would ALWAYS have been suboptimal and more expensive all around.

Using the public cloud is not easy. Yes, you have to constantly optimize your workloads. You have to keep up with all the innovations and changes, which is a very different environment from the largely static on-prem world. But it catapults you into a world of possibilities—like increased ARPU and massive cost savings. Using the public cloud frees you from managing servers and data centers and everything that goes with them, so you can focus on your network and your subscribers.

Take my advice: don’t start your journey to the public cloud by building for repatriation; instead, build so you can fully use the public cloud.
