It all started on LinkedIn. When I shared my last newsletter, Jeroen van Bemmel commented: “Could you talk about the economics of Telco data storage in the cloud?” Yes, I can, Jeroen. Thanks for asking!
Jeroen referenced a blog post from Dean Bubley, Telcos should focus on “connected data” not just “edge computing”, which talks about a lot of things, including mentioning Totogi as a disruptor for telcos’ internal operational and billing systems. (Disclaimer: I serve as Totogi’s acting CEO.)
So let’s talk about storage and pricing. As I highlighted in my recap of AWS re:Invent last fall, all three hyperscalers—Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP)—offer different levels of data storage at different prices. The economics are simple: you’ll save money by storing data in the public cloud.
The public cloud’s variable pricing
Variable storage costs are one more advantage of the public cloud over an on-premise solution. With on-prem, you pay for the whole stack: database licenses, your hardware, your racks, your facilities. There's only one kind of storage in this setup, and it all costs the same. Estimates for on-premise storage vary widely, but reading through a few blogs pegs the price at roughly $3,000 per terabyte per month.
In the public cloud, you have more options. You pay more for data that you need faster and more often, and less for data you don’t access much or need quickly—like archives you have to hold for a certain number of years per local regulations.
All three hyperscalers offer multiple tiers for object, file, and block storage. (Need a primer on these types of storage? Here’s a good one from Storage Wars. Here’s another from NetApp.) Highly available, or “hot,” storage is the most expensive, and best suited for data that’s accessed frequently and needs to be available quickly, such as data related to online transactions. Then there are “warm,” “cold,” and “archive” tiers that are cheaper but slower or costlier to access. What you pay (and how much you save) depends on how “at the ready” you need your data to be.
To take advantage of the variable pricing, you’ll have to catalog all the data you store and group it by how often you need it. As you might imagine, the real savings comes from taking archive-level data out of hot storage.
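Once you’ve grouped your data by access frequency, the hyperscalers can do the tier-shuffling for you automatically. As a sketch, here’s what an S3 lifecycle rule might look like for aging data out of hot storage; the bucket name, prefix, rule ID, and transition schedule are all hypothetical choices, not a recommendation.

```python
# Sketch: an S3 lifecycle rule that moves objects out of hot storage as they age.
# The rule ID, prefix, and day thresholds are hypothetical examples.
import json

lifecycle_rules = {
    "Rules": [
        {
            "ID": "archive-old-records",        # hypothetical rule name
            "Filter": {"Prefix": "records/"},   # hypothetical data prefix
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},    # warm
                {"Days": 90, "StorageClass": "GLACIER"},        # cold
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # archive
            ],
        }
    ]
}

# Applying it requires AWS credentials, so the call is shown as a comment:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-telco-data", LifecycleConfiguration=lifecycle_rules
# )

print(json.dumps(lifecycle_rules, indent=2))
```

Azure Blob Storage and GCP Cloud Storage have equivalent lifecycle-management features, so the same idea carries across providers.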
To get your feet wet, I recommend reading more about the hyperscalers’ various offerings. Here’s a thorough run-down from A Cloud Guru. I also like the comparison chart on this post on Business 2 Community, but it’s a few years old. TechTarget also has a nice round-up.
We put together a very simple table of pricing for one terabyte of storage in one region, for one month, at the different frequencies of access for each of the cloud providers. This is for a US-east region (the pricing calculators are linked in the table column titles). Obviously, your pricing may vary depending on your usage, access patterns, commitment levels, region, and redundancy requirements. One thing to note: Azure’s cool pricing looks wonky—so much so that I checked it three times!—so if I got it wrong, my apologies.
Finally, pricing changes frequently; the cloud providers introduce new tiers of storage and even ways to optimize storage often, so you’ll want your team to stay up to date on the changes and be ready to adjust your storage strategy to save money.
[Table: 1 TB/month pricing by access tier for AWS S3 (including S3 Glacier Instant Retrieval), Azure Blob Storage (including the Cool tier), and Google Cloud Storage]
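To make the hot-versus-archive gap concrete, here’s a back-of-the-envelope comparison for one terabyte over one month. The per-gigabyte rates below are illustrative assumptions, not quoted prices; check the providers’ pricing calculators for current numbers.

```python
# Back-of-the-envelope: 1 TB for one month, on-prem vs. two cloud tiers.
# Per-GB rates are illustrative assumptions, not quoted prices.
TB_IN_GB = 1024

on_prem_per_tb_month = 3000.00    # rough on-prem estimate cited above
hot_per_gb_month = 0.023          # assumed "hot" object-storage rate
archive_per_gb_month = 0.00099    # assumed "archive" rate

hot_cost = hot_per_gb_month * TB_IN_GB
archive_cost = archive_per_gb_month * TB_IN_GB

print(f"Hot storage:     ${hot_cost:,.2f}/TB/month")
print(f"Archive storage: ${archive_cost:,.2f}/TB/month")
print(f"On-prem:         ${on_prem_per_tb_month:,.2f}/TB/month")
print(f"Hot vs. on-prem: {on_prem_per_tb_month / hot_cost:,.0f}x cheaper")
```

Even hot cloud storage comes out two orders of magnitude cheaper than the rough on-prem figure, and archive tiers are cheaper still, which is why moving archive-level data out of hot storage is where the real savings live.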
Making the switch
Of course, cataloging your data takes time and effort, and porting it to AWS, Microsoft Azure, or GCP will take time, effort, and money. Ingress bandwidth is typically free at the big three, but the migration itself isn’t: you’ll pay for upload requests, migration tooling, and engineering time, costs you don’t have when the data stays put on-prem. But you’ll end up paying so much less in storage fees that you’ll be way ahead in the long run. It’s totally worth it. Don’t wait to make this move.
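The “ahead in the long run” claim is easy to sanity-check with a break-even calculation. Every number below is a hypothetical assumption (your migration cost, data volume, and blended cloud rate will differ), but the arithmetic is the point.

```python
# Rough break-even sketch: months until storage savings repay a one-time
# migration cost. All figures are hypothetical assumptions.
migration_cost = 50_000.00    # assumed one-time cost (tooling, labor, transfer)
on_prem_monthly = 3_000.00    # per-TB on-prem estimate cited above
cloud_monthly = 100.00        # assumed blended per-TB cloud rate across tiers
tb_stored = 20                # assumed total data under management

monthly_savings = (on_prem_monthly - cloud_monthly) * tb_stored
break_even_months = migration_cost / monthly_savings
print(f"Monthly savings:  ${monthly_savings:,.0f}")
print(f"Break-even after: {break_even_months:.1f} months")
```

With assumptions in this ballpark, the migration pays for itself within months; even if your numbers are several times worse, the payback period stays short relative to the life of the data.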
Don’t just take my word for it. Here’s a great case study about Dropbox using AWS. The collaboration platform was outgrowing its on-prem data storage system and had two employees and about two years to find and implement a new solution. With the help of managed services from AWS, it did it in a year and reduced the cost of storage more than 5x. Part of the challenge was designating data as “hot” or “cold” depending on how frequently it got used.
Maybe the most important thing to know about all these choices: it quickly becomes complicated, especially if you’re new to the public cloud. I recommend working with a consultant—like TelcoDR, The Duckbill Group (AWS-specific), or our podcast friend Virtasant—to help navigate all the options and find the most efficient path for your company’s specific needs.