Implementing CI/CD for Azure Services
In this post, we will integrate everything we have learned so far in end-to-end solutions, deploying a suite of applications and promoting them from a test environment to a production environment. This will demonstrate the versatility of Azure Pipelines in managing the provisioning, configuration, and deployment of applications in Azure, regardless of the services involved or the programming languages used.
The post will cover the following:
- Introduction to the solution architecture
- Building and packaging applications and infrastructure as code (IaC)
- Creating environments
- Approving environment deployments
- Troubleshooting deployment issues
Importing the Sample Repository
For this post, you need to import the application and IaC sources from GitHub to complete the end-to-end pipelines, or you can create a new repository of your own.
You can do this from the Azure Repos | Files section in the navigation menu. Click on the repository dropdown at the top of the screen and select the "Import repository" option, as shown in the following screenshot:
Introducing the Solution Architecture
In our sample architecture, we'll utilize a fictitious Packt store composed of four distinct applications. This setup represents a complex distributed architecture where teams working with different programming languages can leverage various Azure platform services to deliver their functionalities:
- An Angular frontend application: This serves as the user interface of the store.
- A Python product catalog service: Implemented as a REST API.
- A Node.js shopping cart service: Implemented as a REST API.
- An ASP.NET checkout service: Implemented as a REST API.
The following solution diagram depicts how an environment for the web store looks, where each of the applications independently runs in a different Azure service:
Then, we will implement the Azure Pipeline with the following steps for each application:
1. Build and package the application and its corresponding IaC.
2. Deploy them to a test environment.
3. Deploy them to a production environment, including a manual approval check.
The following diagram depicts the CI/CD process:
Stages, Environments, and Templates
Stages: Stages allow us to encapsulate all the jobs that need to be executed together logically and manage dependencies. They also enable parallel job execution, which can significantly reduce the total time required for deployment when needed.
Environments: Environments are linked to jobs and provide additional controls, such as manual approval. This ensures that deployment to production only proceeds when a human intervenes and grants approval.
Templates: Templates are now put into practice to demonstrate their use in building modular and reusable pipelines.
Let’s look at the following pipeline definition:
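The following is a condensed sketch of what such a definition could look like. The stage names, template file names, and the environment parameter are illustrative assumptions based on the files described later in this post, not the exact repository contents:

```yaml
# azure-pipelines.yml -- illustrative sketch only
trigger:
  branches:
    include:
      - main

stages:
  - stage: Build
    displayName: Build and package
    jobs:
      - template: build-apps.yml   # builds and pushes the application containers
      - template: build-iac.yml    # validates and publishes the ARM templates

  - stage: Test
    displayName: Deploy to test
    dependsOn: Build
    jobs:
      - template: deploy.yml
        parameters:
          environment: test

  - stage: Production
    displayName: Deploy to production
    dependsOn: Test
    jobs:
      - template: deploy.yml
        parameters:
          environment: production
```

Note how the same deploy.yml template is reused for both environments, with only the parameter changing.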
This pipeline definition showcases the flexibility of using templates and the ability to break the work into smaller, manageable portions. This approach allows teams to focus on different stages of the pipeline based on their responsibilities.
Once the file is in the repository, add it as a new pipeline and rename it to E2E-Azure. You will also need to configure some security settings for everything to function correctly.
If you haven't renamed a pipeline before, click on the sub-menu on the far-right side of the "Recently run pipelines" screen, as shown in the following screenshot, and rename it.
Building and Packaging Applications and IaC
The applications in this solution are all containerized. Containers are a standard packaging mechanism that bundles an application with all of its operating system dependencies, allowing it to run in various hosting environments and making it lightweight and portable.
For simplicity, the repository includes a docker-compose.yml file, which facilitates working with applications composed of multiple services that must run simultaneously. This file defines the services, the location of their corresponding Dockerfile (which specifies how the container should be built), and other details such as ports and environment variables needed for the container to run.
In this section, the SUB_ID placeholder represents the ID of your Azure subscription; ensure to replace it accordingly. Before proceeding, you need an Azure Container Registry (ACR) to store the container images. You can create one easily with the following Azure CLI command:
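A sketch of the commands follows; the resource group name, registry name, and location below are placeholders you should adjust to your own setup:

```shell
# Replace SUB_ID with your Azure subscription ID, and adjust the
# placeholder names (rg-packt-store, packtstoreacr) and location as needed.
az account set --subscription SUB_ID
az group create --name rg-packt-store --location eastus
az acr create \
  --resource-group rg-packt-store \
  --name packtstoreacr \
  --sku Basic
```

Registry names must be globally unique and alphanumeric, so you will likely need to pick a different name than the placeholder above.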
With the registry in place, you can now create the build-apps.yml file, which will be used to build the application containers and push them to the Azure Container Registry.
To build and push the containers, you can use the Docker Compose task. However, this needs to be done in two steps: first, build the images, and then push them. To make the task easier to understand, let's first look at the build portion in the following YAML code:
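A sketch of the build portion is shown below. The service connection name and registry address are assumptions for illustration; the task and input names belong to the DockerCompose@0 task:

```yaml
# build-apps.yml (build portion) -- service connection and registry
# names are placeholders
steps:
  - task: DockerCompose@0
    displayName: Build Containers
    inputs:
      containerregistrytype: Azure Container Registry
      azureSubscription: azure-service-connection   # your ARM service connection
      azureContainerRegistry: packtstoreacr.azurecr.io
      dockerComposeFile: docker-compose.yml
      action: Build services
      additionalImageTags: $(Build.BuildNumber)     # tag images with the run number
```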
With the images built, the next step is to push them to the registry. This is done using the same task, with a change to the action property, as shown in the following code snippet:
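The push portion could look like the following sketch, continuing the steps list above; only the action and display name change:

```yaml
# build-apps.yml (push portion) -- continues the same steps list
  - task: DockerCompose@0
    displayName: Push Containers
    inputs:
      containerregistrytype: Azure Container Registry
      azureSubscription: azure-service-connection
      azureContainerRegistry: packtstoreacr.azurecr.io
      dockerComposeFile: docker-compose.yml
      action: Push services
      additionalImageTags: $(Build.BuildNumber)
```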
The Docker Compose task with the display name 'Push Containers' uses the docker-compose.yml file to push the previously built container images to the Azure Container Registry, as indicated by the action property.
Remember, these two portions of YAML are part of the build-apps.yml file. Now that we have covered how to build and push container images, let's discuss how container image tags work and why they are important.
Understanding Container Image Tags
Building a container is analogous to compiling an application and packaging all its files into a ZIP archive for deployment, including all the necessary OS dependencies. However, the result is a container image: a more complex artifact composed of multiple layers that is stored in a registry and not directly manageable via the filesystem.
Just as you would name a ZIP file based on a versioning convention to track its creation, tagging containers is crucial for management. Tags help you identify and manage different versions of container images.
The latest tag, for example, is a convention in the container world that allows you to retrieve the most recent image without specifying a specific tag. This is particularly useful during development cycles for experimentation and testing.
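As a sketch, the Docker Compose task used earlier can apply both a unique, traceable tag and the latest tag in a single push; the input names belong to the DockerCompose@0 task, while the file name is the one from this repository:

```yaml
# Fragment of the push task inputs (illustrative)
- task: DockerCompose@0
  inputs:
    dockerComposeFile: docker-compose.yml
    action: Push services
    additionalImageTags: $(Build.BuildNumber)  # unique, traceable version tag
    includeLatestTag: true                     # also apply the 'latest' tag
```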
Understanding Your Pipeline Build Number
The $(Build.BuildNumber) predefined variable provides a unique label for each pipeline run, which is useful for versioning your artifacts. Its default format is a timestamp plus a revision number, represented as YYYYMMDD.R, where YYYY is the current year, MM is the month, DD is the day, and R is a sequentially incremented number.
If you don't explicitly set your build name, the default format for YAML pipelines will be:
name: $(Date:yyyyMMdd).$(Rev:r)
This format uses the current date and an automatically incremented revision number, which resets to 1 when the date changes. Many organizations prefer semantic versioning for artifacts or APIs, following the MAJOR.MINOR.PATCH format, such as 1.0.1:
MAJOR changes indicate incompatible API changes.
MINOR changes add functionality while maintaining backward compatibility.
PATCH changes are for bug fixes without affecting functionality.
To use semantic versioning in your pipelines, you can set it at the top of your YAML file like this:
name: 1.0.$(Rev:r)
In this case, you'll need to manually update the MAJOR and MINOR portions of the version number based on your code changes.
Understanding Helm
Helm is a package manager for Kubernetes, designed to streamline the deployment of third-party or open-source applications into your Kubernetes clusters. Packages created with Helm are known as Helm charts.
Helm is also highly valuable for packaging your applications, especially when they require multiple manifests to configure all necessary components in Kubernetes. Helm simplifies the process by allowing you to easily override parameters.
For example, a basic Helm chart will include the following files:
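As a sketch, a chart named mychart (a placeholder name) typically has a layout like this:

```text
mychart/
  Chart.yaml          # chart name, version, and metadata
  values.yaml         # default configuration values (image, ports, replicas)
  templates/
    deployment.yaml   # Kubernetes Deployment manifest template
    service.yaml      # Kubernetes Service manifest template
    _helpers.tpl      # shared template helpers
```

The values.yaml file holds the parameters that can be overridden at install or upgrade time, which is what makes Helm convenient for promoting the same chart across environments.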
Verifying and Packaging IaC
In the previous post, we learned how to work with Azure Resource Manager (ARM) templates. Now, you need to validate these templates and publish them as artifacts to the pipeline.
To accomplish this, you will create a build-iac.yml file in the repository and include the following segments:
2. Jobs Segment: This section groups subsequent segments that include only tasks:
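A minimal sketch of the jobs segment follows; the job name and agent pool are assumptions:

```yaml
# build-iac.yml (jobs segment) -- names are illustrative
jobs:
  - job: BuildIaC
    displayName: Validate and publish IaC
    pool:
      vmImage: ubuntu-latest
    steps:
      - checkout: self   # the validation and publish tasks are added next
```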
3. IaC Catalog Tasks Segment: This part includes tasks for validating and publishing IaC templates:
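A sketch of these tasks is shown below. The service connection name, resource group, and template paths are assumptions; the task names and the Validation deployment mode are standard Azure Pipelines features:

```yaml
# build-iac.yml (tasks segment) -- paths and names are illustrative
      - task: AzureResourceGroupDeployment@2
        displayName: Validate catalog template
        inputs:
          azureSubscription: azure-service-connection
          resourceGroupName: rg-packt-store-test
          location: eastus
          csmFile: iac/catalog/azuredeploy.json
          deploymentMode: Validation   # validate only, deploy nothing

      - task: PublishPipelineArtifact@1
        displayName: Publish IaC templates
        inputs:
          targetPath: iac
          artifact: iac-templates
```

Validation mode checks the template syntax and whether it would deploy against the target resource group, without creating any resources.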
Managing Environments
In this section, you will learn how to create environments and deploy to them.
Configuring Environments
In this section, you will define the environments in Azure Pipelines, which will be logical representations of the deployment targets. This will allow us to add approval and checks to control how the pipeline advances from one stage to the next:
1. You start by clicking on the Environments option under Pipelines in the main menu on the left, as follows:
2. If you have no environments, you will see a screen like the following; click on Create environment:
Otherwise, you will see a New environment option in the top-right part of the screen above your existing environments.
3. Once the pop-up screen shows up to create the new environment, enter test for Name and Test Environment for Description, leave the Resource option as None, and click on the Create button, as shown here:
4. Repeat steps 2 and 3 to create another environment using production for Name and Production Environment for Description.
5. With the two environments created, click on the production one in the list, as shown here:
6. On the next screen, you are going to add an approval check to ensure you can only deploy to production once a human grants approval. For this, start by clicking on the ellipsis button in the top-right part of the screen and then on the Approvals and checks item, as shown here:
7. Since no checks have been added yet, you should see a screen like the following; selecting Approvals will get us to the next step to complete the check configuration:
In this form, specify the required approvers. You may also add instructions for the approver, such as manual verification steps, and adjust the Timeout setting if needed. If you are the approver, ensure that the "Allow approvers to approve their own runs" option is checked under the Advanced section. Finally, click the Create button to proceed to the next steps.
8. Lastly, you must also add permissions for each environment to allow them to be used in the E2E-Azure pipeline. Like before, click on the ellipsis option in the top-right part of the screen and click the Security option from the menu.
9. Then click on the + button and search for the E2E-Azure pipeline to add the permissions; just click on the name to add it.
A properly configured environment will look like the following:
Deploying to Environments
To deploy to the environments, you'll create a deploy.yml file and begin by adding steps for deploying the Python catalog service to AKS.
The deploy.yml file will start with the following content, which you will build upon in the subsequent sections:
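The following is a starting sketch of that content. The parameter name matches the environments created earlier, while the service connection, AKS cluster, resource group, and chart path are assumptions for illustration:

```yaml
# deploy.yml -- starting sketch; cluster, resource group, and chart
# path are placeholders
parameters:
  - name: environment
    type: string
    default: test

jobs:
  - deployment: DeployCatalog
    displayName: Deploy Python catalog service
    environment: ${{ parameters.environment }}   # links the job to the environment
    pool:
      vmImage: ubuntu-latest
    strategy:
      runOnce:
        deploy:
          steps:
            - task: HelmDeploy@0
              displayName: Deploy catalog chart to AKS
              inputs:
                connectionType: Azure Resource Manager
                azureSubscription: azure-service-connection
                azureResourceGroup: rg-packt-store-${{ parameters.environment }}
                kubernetesCluster: aks-packt-store-${{ parameters.environment }}
                command: upgrade
                chartType: FilePath
                chartPath: charts/catalog
                releaseName: catalog
```

Because this is a deployment job tied to an environment, the approval check configured on the production environment will pause the run before these steps execute there.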
You will add additional steps in each section as you continue configuring your deployments.
Approving Environment Deployments
With the deployment to the test environment complete, you should be able to see the pipeline in the Waiting state, as follows:
If you click the Approve button, the deployment will proceed. If you click the Reject button, the deployment will be canceled; likewise, if you do nothing and the timeout expires, the pipeline will be canceled.
Troubleshooting Deployment Issues
When deploying to a cloud platform like Azure, you may encounter various issues. Let's explore some common problems you might face:
Issues Deploying IaC
In this section, we used ARM templates to deploy the infrastructure needed to host the services running your applications in Azure. This means the success of your application deployments depends on the successful deployment of this infrastructure.
Several issues can cause the AzureResourceGroupDeployment task to fail:
Internal Server Errors: These can occur if the Azure region where you're deploying is experiencing capacity issues or undergoing maintenance. Similarly, issues with the Azure Pipelines service itself can also lead to failures.
How to fix it: Typically, there is no immediate recovery from this issue other than retrying the failed pipeline. If the Azure region becomes unavailable, you'll need to wait until it is back online or switch to a different region as part of your disaster recovery strategy.
Timeout: If the deployment takes too long, it could be due to the pipeline agent or the Azure deployment itself. Microsoft-hosted agents have a default timeout of 60 minutes in the free tier and 360 minutes for paid parallel jobs.
How to fix it: You can extend the timeout at the job level if needed. However, it’s often more effective to investigate the errors in the pipeline to determine the root cause of the delay.
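As a sketch, the job-level timeout can be raised in YAML; the job name and step below are illustrative:

```yaml
jobs:
  - job: DeployInfra
    timeoutInMinutes: 120        # default is 60 on Microsoft-hosted free tier
    cancelTimeoutInMinutes: 5    # time allowed for cleanup after cancellation
    steps:
      - script: echo "long-running deployment steps go here"
```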
Issues with Scripts
Scripts used in your pipelines should be written in an idempotent manner: each command or task should check whether the operation is actually needed and verify that the result code is as expected, so the script performs only the actions required to reach the desired state.
Failing to use this approach can lead to brittle scripts—scripts that are prone to breaking, particularly when interacting with Azure resources where the current state might differ from the desired state.
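A minimal sketch of the check-before-act pattern follows. Here a directory stands in for an Azure resource so the script can run anywhere; against Azure you would apply the same pattern with a check such as `az group exists` before `az group create`:

```shell
#!/bin/bash
# Idempotent sketch: only create the resource if it does not already exist,
# and report what happened either way.
create_if_missing() {
  target="$1"
  if [ -d "$target" ]; then
    echo "exists: $target"
  else
    mkdir -p "$target" && echo "created: $target"
  fi
}

base="$(mktemp -d)"
create_if_missing "$base/rg-demo"   # first run creates the resource
create_if_missing "$base/rg-demo"   # second run detects it and changes nothing
```

Running the script twice is safe: the second invocation detects the existing state and skips the create step instead of failing.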
Summary
In this post, we explored how to create modular CI/CD pipelines for a complex solution using stages, environments, and templates. We covered the implementation of checks throughout the pipeline stages, including manual approvals, and discussed additional controls for more complex scenarios. We also introduced containers and demonstrated how Docker Compose simplifies the process of building and managing container-based applications, making it easier to work with multiple programming languages and reducing the complexities of compilation and packaging.
We also discussed semantic versioning and its role in tagging or naming artifacts from your pipelines, emphasizing the importance of tracking these artifacts. Finally, we examined the deployment of various services in Azure using ARM templates, highlighting the intricacies of coordinating these templates and the flexibility of pipelines to manage multiple services effectively.