Database migrations are typically one of the biggest challenges when moving applications to the cloud. The reason is obvious: databases, or rather their contents, are the lifeblood of many businesses. This makes people rather cautious when dealing with database migrations, and I like that; caution is good. Applications and their servers, on the other hand, can be cloned or rebuilt from scratch, and it doesn't really matter how you move them, as long as everything runs in the end. With the database, however, the challenge is that you need to ensure every single transaction and every bit of data is moved, exactly as they were in the source system.
Database migrations, or upgrades, are complex in the on-premises world too, and going to the cloud definitely adds a new layer of complexity. Having said that, I also feel that too often these complexities are viewed from the risk perspective rather than the opportunity one.
Now, allow me to explain what I mean by examining five typical challenges that can be turned into opportunities.
1. Unsupported versions (Operating Systems and DB engines).
This is, by far, the most common issue we run into. Somewhere, in the furthest corner of the datacenter, there is a server that no one has updated in the past decade. More regularly than we’d like, it also turns out that this forgotten system is running some mission-critical process, and can’t be decommissioned or replaced with another application (see R-strategies).
The first thing we always do is see whether it can be upgraded. Every so often it's possible, but rather frequently we find that the software vendor doesn't even exist anymore, or that there are no newer versions. So, now we have some bad news, but also some good news.
Bad news: You have a risk with this specific application, which is running on out-of-support versions of an operating system or a database engine.
Good news: Moving to the cloud isn't going to make it any less supported. I see plenty of people getting stuck on the idea that you can't run unsupported software in the cloud. This is not really true. If you can install it on a VM, you can almost always run it in the cloud too.
Don’t believe me? See what Microsoft has to say about running an ancient version of Windows on Azure.
Benefits of running out-of-support software in the cloud
While it's not a great idea to run out-of-support software in the cloud (or anywhere), it's doable. I can even think of a couple of fantastic reasons for it.
- Visibility. The cloud is great at surfacing things and making them visible, sometimes painfully so. Once you put your out-of-support software in the cloud, it will stay visible (even after you do something about it).
- Enhanced security. While your software might be vulnerable, being out of date and all, the cloud isn't. You can still get some security benefits by enabling the platform's security features.
- “Hardware refresh”. This is what you get for any VM running in the cloud. No more historic equipment in your data center, even if the software is way past its time.
2. Database PaaS.
If you're having trouble keeping your systems up to date, going with PaaS will solve several problems for you. Automated patching gives you an evergreen database platform. Moreover, you get automated backups and out-of-the-box high availability, to mention a few additional benefits.
While PaaS can be the solution to your out-of-support software issues, unfortunately, software vendors don't always understand PaaS services or how things like backward compatibility work. But you can, of course, educate them on this topic. I've written about PaaS previously, so I won't go into too much detail here. I will point to a couple of my previous posts, though, that will be useful.
When migrating databases to the cloud, you should always first consider if you can make use of the PaaS database services.
3. No more need for over-provisioning hardware.
In the on-premises world, we all do it, all the time. The lifecycle of server hardware can be up to five years, so we're given the (near) impossible task of accurately figuring out just how much compute, memory and storage we'll need for the coming years. High-end database servers and enterprise-grade storage are costly components, making on-premises database environments expensive. Over-provisioning isn't needed in the cloud, though, as VMs there are incredibly easy and quick to scale up.
You're also in pay-as-you-go territory now, so whatever you provision, you'll be paying for while it's running. Sometimes you pay per minute, other times per hour. If you want to keep your cloud costs in check, make sure your servers are properly right-sized. Turning off things that aren't used also helps. With the cloud, and CPU core-based DB licensing, it also makes sense to look into optimizing your queries and your databases for some extra cost reductions.
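To make the pay-as-you-go point concrete, here is a minimal sketch of the arithmetic. The hourly rate and the schedules are made-up assumptions for illustration, not actual cloud list prices; the pattern is simply that a VM that is stopped (deallocated) stops accruing compute charges.

```python
# Hypothetical pay-as-you-go comparison. The hourly rate below is an
# assumption for illustration, not a real cloud list price.
HOURLY_RATE_USD = 1.25  # assumed rate for a mid-size database VM


def monthly_cost(hours_per_day: float, days_per_month: int = 30) -> float:
    """Compute cost for a VM that is not billed while deallocated."""
    return hours_per_day * days_per_month * HOURLY_RATE_USD


always_on = monthly_cost(24)           # runs 24/7, 30 days
business_hours = monthly_cost(12, 22)  # 12 h/day, ~22 weekdays

print(f"Always on:      {always_on:,.2f} USD/month")
print(f"Business hours: {business_hours:,.2f} USD/month")
print(f"Savings:        {always_on - business_hours:,.2f} USD/month")
```

Even with a modest assumed rate, shutting a non-production database server down outside business hours cuts its compute bill by well over half.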
However, there are some cases where you might need to over-provision your servers even when you don't have a CPU bottleneck. In the cloud, resources like memory and storage have a strong dependency on compute: the more CPU cores you have, the faster the storage becomes and the more memory you get. I've seen cases where the storage requirements push you to increase the number of CPU cores, which can lead to nasty surprises with licensing costs due to forced over-provisioning.
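The coupling described above can be sketched with a toy size table. The size names and numbers here are invented for illustration; the point is the pattern, common across cloud providers, where memory and disk throughput are tied to the vCPU count, so a storage requirement can silently dictate your licensed core count.

```python
# Illustrative VM size table -- names and figures are made up to show
# how memory and disk throughput scale with the core count.
SIZES = [
    # (name, vCPUs, memory_GiB, max_disk_MBps)
    ("S1", 4, 32, 200),
    ("S2", 8, 64, 400),
    ("S3", 16, 128, 800),
]


def smallest_size(need_vcpus: int, need_mem_gib: int, need_disk_mbps: int):
    """Pick the smallest size that meets ALL requirements at once."""
    for name, vcpus, mem, mbps in SIZES:
        if vcpus >= need_vcpus and mem >= need_mem_gib and mbps >= need_disk_mbps:
            return name, vcpus
    raise ValueError("no size fits the requirements")


# CPU-wise, 4 cores would do -- but needing 600 MB/s of disk throughput
# forces the 16-core size, and per-core DB licensing now covers 16 cores.
name, licensed_cores = smallest_size(4, 32, 600)
print(name, licensed_cores)  # S3 16
```

The nasty surprise is the last line: a workload that is idle on CPU still ends up licensed for four times the cores it needs, purely because of the storage requirement.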
4. Move to open-source databases.
Cloud platforms have great offerings for fully managed, open-source and purpose-built databases. Once you've migrated your workloads to the cloud, you can take the next step and move on to modernizing your applications and databases. It's not always possible, and there are definitely cases where you'll want an enterprise-grade database over the open-source alternatives. While cloud platforms make up for some of the features missing from free software, there are times when you just want to pay a premium to get the best of the best. But if you can move away from a commercial database, there's potential for some massive savings on licensing.
How massive, you say?
Take a look at Oracle's license prices: you'll be paying 47,500 USD for a single processor license. And if you consider SQL Server license costs, the list price is 13,748 USD for a 2-core pack. This is some serious money we're talking about here, which makes performance tuning a very attractive option, even if you have to pay someone to do it.
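A quick back-of-the-envelope calculation, using the list prices quoted above, shows why tuning pays for itself. This sketch deliberately ignores support fees, Software Assurance and negotiated discounts, so treat the totals as rough illustrations only.

```python
# List prices quoted in the text above; support/maintenance fees and
# discounts are deliberately ignored in this rough sketch.
ORACLE_EE_PER_PROCESSOR_USD = 47_500
SQLSERVER_EE_PER_2_CORE_PACK_USD = 13_748


def sqlserver_license(cores: int) -> int:
    """SQL Server is sold in 2-core packs; round up to whole packs."""
    packs = -(-cores // 2)  # ceiling division
    return packs * SQLSERVER_EE_PER_2_CORE_PACK_USD


# Suppose query tuning lets you halve a 16-core server to 8 cores:
before = sqlserver_license(16)
after = sqlserver_license(8)
print(f"16 cores: {before:,} USD")
print(f" 8 cores: {after:,} USD")
print(f"  saved:  {before - after:,} USD in licenses alone")
```

Halving the core count here frees up roughly the cost of a serious tuning engagement, and that's before counting the smaller VM's compute bill.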
5. Occasionally, you also get to simplify your DB architecture.
When it comes to designing and implementing database systems for mission-critical applications, you're doing something very complex. Multiple datacenters, dedicated fibers, SANs with block-level replication, networking, etc. You also need a group of specialists who can set all that up in a way that works reliably. And that's just the infrastructure part of it; next you're implementing high availability using features of the operating system and the database engine. Again, somewhat complex, and it needs to work reliably.
Going with the cloud, at a minimum you'll get someone else doing the infrastructure for you.
One good example of being able to remove complexity (and a boatload of cost) is Oracle RAC. It's a great piece of technology, but generally I see it used only for high availability rather than scalability, and even then, often all the nodes are sitting in a single data center. The other problem with Oracle RAC and the public cloud is that, beyond OCI, Oracle doesn't support it. You can still do it, using bare metal or solutions like FlashGrid Cluster, if you really want to (don't do it). But it won't be supported by Oracle, and it's still as complicated as it was on-premises. To me, that sounds like a bad combination.
You don’t need to believe me on this one, you can also read what Kellyn Gorman says about architecting RAC to the cloud.
And that’s it.
There are, of course, other things you need to consider as well. But from what I've seen, these tend to be the most typical discussion topics when dealing with database cloud migrations.