In this post, I’ll show you how to automate running the database maintenance container we created previously. Once you’ve created a container that has your database maintenance routines in it, you’ll need a way to execute it against your database(s) automatically.
Luckily, Azure (and probably other public clouds too) make this step relatively easy by offering various services for automating container executions.
For the automation task, I decided to use Azure Container Registry (ACR) and Container Apps Jobs. They are quick to set up and allow scheduled execution of containers nicely. Additionally, you should set up a Log Analytics workspace or a storage account to log the automated runs. In this post, I’ll use Log Analytics, as it’s simpler.
If you wish to try following the examples in this post, I’d suggest first going over to the Quickstart documentation for the ACR.
Working with Azure Container Registry
I will not go through the process of setting up ACR in this post, as it’s a relatively simple procedure that’s well covered by the official documentation. With the ACR deployed, we can use it to store our database maintenance containers. First, we’ll authenticate against the ACR with the following command.
az acr login --name <acrnamehere>
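If you don’t have a registry yet, creating one is a single command. A sketch, with placeholder names (pick your own resource group and registry name):

```shell
# Create a Basic-tier registry (names are placeholders)
az acr create --resource-group <resourcegroup> --name <acrnamehere> --sku Basic
```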
Previously, we created an image called sqlmaintenance/indexmaint. Before we can push that to our new shiny registry, we need to tag it using the docker tag command.
docker tag sqlmaintenance/indexmaint:latest <acrnamehere>.azurecr.io/sqlmaintenance/indexmaint:latest
To break down what the previous command does: we’re tagging the image to be uploaded to our own ACR instead of Docker Hub, and marking it as the latest version of the image. Note that this command only works with the fully qualified name of the ACR, so don’t skip adding azurecr.io to the name.
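The fully qualified name is simply the registry’s login server prepended to the repository path. A quick sketch of the convention (the registry name here is a placeholder):

```shell
# Build the fully qualified image name for an ACR push
ACR_NAME="myacr"                   # placeholder registry name
IMAGE="sqlmaintenance/indexmaint"  # repository path from the previous post
TAG="latest"
echo "${ACR_NAME}.azurecr.io/${IMAGE}:${TAG}"
# → myacr.azurecr.io/sqlmaintenance/indexmaint:latest
```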
Execute the following command to get a list of your images.
docker images
And you’ll see similar output.
Now, all that remains is to push our container image to the repository. Do this with the docker push command:
docker push <acrnamehere>.azurecr.io/sqlmaintenance/indexmaint:latest
This command will take a moment to run (depending on your network bandwidth), as it uploads the whole image. In the end, you should see something similar to this output.
At this point, you can also go to the Azure Portal, find your ACR, and see the image under Repositories.
And with that, we’re ready to move on to the next part: scheduling the execution of our database maintenance container.
Creating the Container App Job
As we want to use the container image from our ACR, we first must enable admin access. Do this from the Azure Portal by going to the Access keys section of the ACR, or by running the following command.
az acr update -n <acrnamehere> --admin-enabled true
Now, let’s go forward and create the Container App Job through the portal. When you navigate to Container App Jobs, you’ll see something similar to this.
Click Create here. I’ve already created a Resource group for this purpose. If you haven’t done it yet, you can click Create new under both the Resource group and the Container Apps Environment.
Resource group creation is just about the same as anywhere, but I’ll go through the dialogs related to Container App Environment below. For the demo, I created an environment called “SQLMaintenance-Env”. After creating the environment, I select a Manual trigger. For running this regularly, I’d pick the Scheduled option.
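For reference, if you go with the Scheduled option, the schedule is given as a cron expression. A sketch with the CLI, assuming the job name used later in this post (the resource group is a placeholder):

```shell
# Run the maintenance job every night at 02:00 UTC
az containerapp job update \
  --name sqlserver-idx-maintenance \
  --resource-group <resourcegroup> \
  --cron-expression "0 2 * * *"
```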
On the next screen, called Container, we finally get to use the image we created. I would rather not put my admin login information into the Command override, so I’m placing it into Environment variables instead.
/usr/bin/pwsh, -Command, & ./sql-maintenance/optimize-indexes.ps1 -sqlServer myazuresql.database.windows.net -sqlDatabase MydemoDB -adminUser $env:adminUser -adminPwd $env:adminPwd
Note that this is not the best way to use environment variables. To use the environment variables directly within the PowerShell script, you’ll need to update the parameters block along these lines.
param (
    [string]$sqlServer,
    [string]$sqlDatabase
)

# Read the credentials from environment variables instead of parameters,
# so they never need to appear in the Command override.
$adminUser = $env:adminUser
$adminPwd  = $env:adminPwd
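For the CLI-minded, the same environment variables can be attached when creating the job. A sketch, assuming the names used in this post (the resource group and credential values are placeholders), with the password stored as a Container Apps secret rather than a plain value:

```shell
az containerapp job create \
  --name sqlserver-idx-maintenance \
  --resource-group <resourcegroup> \
  --environment SQLMaintenance-Env \
  --trigger-type Manual \
  --replica-timeout 1800 \
  --image <acrnamehere>.azurecr.io/sqlmaintenance/indexmaint:latest \
  --secrets "admin-pwd=<password>" \
  --env-vars "adminUser=<adminuser>" "adminPwd=secretref:admin-pwd"
```

The secretref: syntax maps the environment variable to the secret, so the password doesn’t show up in plain text in the job configuration.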
To learn more about how to use Command override, refer to this Microsoft documentation.
Now we are ready to add some Tags to our Container App Job. While this is an optional step, I highly recommend having a good tagging policy in place for all your cloud resources. If you’re not familiar with tags, please read this article.
At this point, we’re ready to Review what we’re about to deploy. If everything looks satisfactory, click Create. Once the deployment finishes, we can see our new Container App Job in the list.
Creating the environment for the database maintenance container
Container App Jobs require an environment; I’ll keep this one simple. Give your environment a name, select a type (I go with Consumption only, as that fits the purpose of this post), and choose whether to enable zone redundancy. I don’t need redundancy for this one, so I’ll disable it.
The next configuration item is monitoring. The nicest way to view the logs is Azure Log Analytics, but you could also use a storage account. I’ve created a workspace called “containerlogs-ws”, as it’s the easiest way to view the logs for demo purposes.
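If you prefer the CLI, the workspace can be created ahead of time (the resource group is a placeholder):

```shell
az monitor log-analytics workspace create \
  --resource-group <resourcegroup> \
  --workspace-name containerlogs-ws
```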
The final building block is network connectivity. It is possible to deploy into your own VNet, which is really useful, as most environments I’ve worked with tend to have firewalls and the like to deal with. Luckily, I don’t need that complication for this blog post, so I’ll go with not creating one.
Running the database maintenance container
Since the Container App Job is set to a manual trigger, we’ll need to trigger it ourselves. Go to the Container App Job, where you’ll see a view similar to this.
Click Run now.
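The same can be done from the CLI, assuming the job name used in this post (the resource group is a placeholder):

```shell
# Trigger the job manually
az containerapp job start --name sqlserver-idx-maintenance --resource-group <resourcegroup>

# Check the execution history afterwards
az containerapp job execution list --name sqlserver-idx-maintenance --resource-group <resourcegroup> -o table
```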
If you go to Execution history, found on the left side pane, you’ll see the progress of the container execution, and when it’s finally done.
As we have Log Analytics configured, we can click the Console / System link at the end of the execution row to view what happened inside our container. Select System to see the container-related logs; they’ll look similar to this.
If we look at the Console logs instead, we’ll see similar output, but with the information coming directly from the console (stdout, stderr). To make the output a little more useful, run the following query (and modify it to your needs).
ContainerAppConsoleLogs_CL
| where ContainerJobName_s == 'sqlserver-idx-maintenance'
| project time_t, ContainerImage_s, ContainerJobName_s, Log_s
| order by time_t asc
This will provide output similar to this.
One thing to note is that Log Analytics has a delay before the data shows up. Occasionally, this can be rather long (I’ve seen it take as long as 15 minutes).
Wrapping it up
In this and the previous post, we built a Docker container that had a database maintenance script and DBATools in it. Then we uploaded our container to the Azure Container Registry, after which we created a Container App Job to execute it.
What I’d really like database professionals to take away from this post is that containers can be leveraged in many ways beyond how we traditionally think about them. While there’s no denying that container technologies have transformed how applications are deployed and managed, those are far from the only use cases.
Hopefully, you enjoyed the post, and see you next time!