The double bubble is NOT double trouble: Why refactoring after migrating to the cloud is the best approach

A while ago I wrote about the “double bubble,” and discussed issues around the additional cost of transitioning workloads to the public cloud. As a reminder, the “double bubble” is my cute little name for the period of time when you’re paying for the operations of your on-premise solution (for example, paying data center fees and operational costs) while also paying for the public cloud version that isn’t live yet. It’s temporary, unavoidable, and super sucky.

In talking to telco executives, I see too many organizations adjusting their approach to the move to the public cloud because of this double bubble. Maybe they think they can’t afford the additional costs. Maybe they think they can optimize their spend by timing the move juuuuuust right. But what they really can’t afford is the missed opportunity for massive savings and increased business agility that comes with the public cloud.

Two paths to the public cloud

I see two schools of thought among telcos when it comes to moving workloads to the public cloud: 

  • Approach 1: Refactor workloads “where they are today,” and THEN move to the public cloud; or
  • Approach 2: Move the workload to the public cloud first, and then refactor it.

Which way is best? It’s a hotly debated topic. For example, some companies gravitate to Approach 1 because they feel their systems are so intertwined that they can’t move just one application without pulling the whole spaghetti mess with it. Others are eager to try Approach 2 and look for low-risk workloads to move first, and then apply learnings to move more.

A few weeks ago, I hosted BT Chief Architect Neil McRae on the Telco in 20 podcast, and we talked about how BT is planning to take the first route (at 7:00). The biggest benefit of this approach is that you will minimize your double bubble costs, as well as avoid moving any “trash” into the public cloud. It allows you to be sure, application by application, that moving each workload is part of your long-term strategy, and leads you to a point when you’ll be ready to turn off the production instance in the old data center and go live with the public cloud version. It is perhaps the most cost-optimized way to initially move workloads to the public cloud … or is it?

I question this approach because of two key downsides. One issue is that it prevents your team from using public cloud software in the design of the workload. It forces the team to make technical selections that are platform-agnostic: all tools and subsystems used for the workload will need to be available on premise *and* in the cloud. But one of the key benefits of the public cloud is using not only the infrastructure, but also the software – and you’ll completely miss this boat. You may even end up having to refactor the workload AGAIN once it’s in the public cloud.

There’s another drawback: the HR impact. This way of moving to the cloud means you will move more slowly, and more importantly, you will miss out on experiential benefits, like exposing your employees to the ways of the public cloud, or working with the finance team to help them learn how to manage variable cloud costs. By immersing the entire organization in the public cloud, you force them to live and breathe the change. The leadership team has to make decisions about the governance model, data policy, security issues, cost management. The technical teams have to learn all the tools, software, and pricing of the public cloud. There’s no room for foot dragging. You’re changing the work and making it stick.

When I work with telco execs, I encourage them to use Approach 2 as much as possible: pick up and move, aka the “lift and shift” method. The upside: you will move all your people to Cloud City, and plunge them into learning right away. You’ll also be able to refactor applications with purely public cloud technology. As you use more and more of the public cloud, the flywheel of change gets going and ideas flow through the organization around all the ways the business can be improved with this new, enabling technology.

The downside is, of course, the double bubble, plus the risk that it may take longer than expected to refactor your systems for the cloud. With this approach, there’s a clock ticking over your head to get the cost-optimization plan implemented and the technical teams refactoring. This is not a small task, and if you don’t have a good plan for how you’re going to refactor, you’ll be in deep shit, fast.

For this approach, I advise having a total cost of ownership (TCO) reduction plan sketched out, optimized for easy wins that are both low risk and high savings, to make sure you make your business case. There are a bunch of consultants (and hyperscaler teams) who would love to throw your workloads into containers, move them to the public cloud, charge you a pretty penny, and call it a day. Instead, make everyone stick around, roll up their sleeves, and begin the hard work. If done right, this approach will maximize your savings, dramatically reduce your source code footprint, and set you up for operational agility.

I obviously prefer this approach over the other.


Refactor, then move

Pros:

  • Avoid the “double bubble” of paying for redundant systems
  • Won’t move “trash” components

Cons:

  • Can’t use public cloud components in your design
  • Will potentially need to refactor workloads *again*
  • Move is slow, and at risk of not happening at all
  • People don’t “feel the change” and the project can lose momentum
  • In-house learning and skill building may lag faster-moving competitors


Move, then refactor, aka “lift and shift”

Pros:

  • Can use public cloud software in your design
  • Forces teams to live and breathe public cloud
  • Forces change and makes the change “stick”
  • Builds institutional know-how quickly

Cons:

  • Double bubble costs
  • Delays in refactoring slow cost savings
  • Potential to never refactor

Double bubble success stories

I like the “lift and shift” approach because it’s part of the change management a company must go through to get to the cloud. It drops people into a new environment and makes it clear that the transition is inevitable and will require new skills. In my experience, the double bubble is a small price to pay for reaping huge rewards quickly. Examples:

  • Moved all IT to public cloud. A company I was working with was spending $15 million per year on ten data centers covering 56,000 sq. ft. of space. With the help of Pythian, it migrated 3,500 machines to the public cloud in under four months, helping the company become more agile and reducing costs. The IT spend shrank to $1.5 million and the project cost only $650,000, paying for itself in just eight months. Boom.
  • Closed data centers and moved all IT to the public cloud. Another company I worked with was spending $5 million per year on five data centers to support two SaaS products. By moving to Amazon Web Services (AWS) and optimizing heavily, we reduced spend by 90% to $500,000. The double bubble cost was roughly $300,000 and the project paid for itself in six months. Phenomenal! 
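The stories above boil down to simple payback arithmetic: how many months of savings it takes to cover the one-time project cost plus the double bubble. Here’s a back-of-the-envelope sketch of that calculation. The function and the input figures are hypothetical illustrations, not the exact accounting behind the examples above (real projects phase workloads over time, which stretches the payback).

```python
def payback_months(old_annual_cost, new_annual_cost, project_cost, bubble_months):
    """Estimate months until cumulative cloud savings cover the migration.

    Simplifying assumption: during the double bubble you pay for BOTH
    environments, so the bubble adds the new environment's monthly cost
    on top of what you were already spending.
    """
    monthly_savings = (old_annual_cost - new_annual_cost) / 12
    bubble_cost = (new_annual_cost / 12) * bubble_months
    return (project_cost + bubble_cost) / monthly_savings

# Hypothetical example: $5M/yr on-prem spend drops to $500K/yr in the
# cloud, with a $650K migration project and a four-month double bubble.
months = payback_months(
    old_annual_cost=5_000_000,
    new_annual_cost=500_000,
    project_cost=650_000,
    bubble_months=4,
)
print(f"Payback in roughly {months:.1f} months")
```

The point of running numbers like these up front is that when monthly savings are large, even a painful-looking double bubble gets absorbed in a quarter or two – which is why I call it a small price to pay.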

Don’t let the double bubble cloud your judgment (haha, pun intended!) and stop you from forcing your team to understand the public cloud, move some workloads, and refactor them quickly. If you’re confident about the way forward, the project can pay for itself in a matter of months.

Regardless of the path you take, always start with the easiest workloads. Leave your most critical, crown-jewel systems for later, after you have a skilled and confident team. Start small, pick something easy, set a goal, and get going! The time to move to the public cloud is NOW. Don’t know where to start? I can help!
