Capstone Project: Week 9, Continuous Improvement


Welcome to my Week 9 update for my capstone project: Continuous Integration/Continuous Deployment Pipelines. The goal of this week was to improve what I already had in various ways, such as implementing Azure Boards and the Azure Dashboard, fixing the pipeline triggers, creating staging and production environments, modifying my production pipeline, and more!

I didn't take very many screenshots this week, so you may have to just take my word for some things! Without further delay, let's dive in!


Outcomes

  • Pipeline Modifications - I have separated my pipelines so that the main production pipeline is only triggered on successful commits or merges to main and automatically deploys to staging. All other branches trigger only a testing pipeline, which runs brief tests on the code.
    • Merges now have to be approved by an administrator
  • Production and Staging Environments - I have created a new Staging environment with the exact same App Service setup as my original environment, which is now Production.
    • This simulates how real teams use QA, Staging, and Production environments to roll out deployments and make sure things work correctly.
  • Releases - My pipeline now automatically deploys to staging, not to production. Deployment to production requires manual approval, and I plan on adding more gates to this process, such as allowing deployment only if Datadog reports that everything in staging is okay.
  • Azure Boards + Dashboards Integration - I have spent a large amount of time setting up my DevOps project repo to make it more like a real-world environment: a conversion to Agile development methodologies with active work items and sprints, an integrated dashboard for monitoring, and more.
  • More Testing + Monitoring - I have done further work with Datadog monitoring, and I believe I have some security integration hooked into my LinuxAgent machine. I also implemented the Bandit Python SAST scanner in my pipeline.
  • Other
    • Read through this book, and I am currently reading The Phoenix Project, to get a better understanding of Azure Pipelines and DevOps.
    • Integrated statuses into my GitHub repo

Azure Boards

While this project's main focus is CI/CD pipelines, they are but one piece of the world of Development and Operations. Teams in an organization need an effective way to communicate, manage work, create and show dependencies, demonstrate progress, and monitor environments. One of my main focuses this week was to fully integrate Azure Boards into my project and simulate that environment.

Agile

One of the first things I did was convert my Azure Boards to an Agile methodology, where teams work in sprints to complete work items and tasks. Among other things, I configured Azure Boards with tasks, created sprints and the tasks within them, built delivery plans, and set up a wiki. Below are a few screenshots of this new environment.

Azure Boards

Azure Boards Sprints

Azure Boards Delivery Plans with Due Date

With this integrated, team members can communicate with each other much better. I added the tasks that I have actually completed within this project to simulate real work items.

Dashboard

Something crucial to DevOps is a good dashboard integration that lets you quickly see statistics on your environment and spot where things might be wrong or need improvement. As a result, I configured a dashboard with multiple extensions from the Azure Marketplace and combined them to provide a brief overview of my project.

Azure Project Dashboard

Pipeline Modifications

One of the main issues I had was that my pipeline automatically deployed to my production environment for any reason and from any branch. That is not how DevOps environments work. As such, I modified my pipeline so that any branch besides main only runs a new azure-pipelines1.yaml located within the branch where it resides.

To clarify: only direct commits and merges to main trigger the azure-pipelines.yaml file, which handles deployment from the main branch. Every other branch triggers only azure-pipelines1.yaml, which just tests the files on commit. Pull requests also trigger only the testing pipeline, to check whether the changes build.

With that in mind, the only other explicit thing I had to do was disable the azure-pipelines1.yaml pipeline on the main branch to keep it from triggering on every commit to every other branch. With that, I now have a pretty stable and efficient setup! I have also stopped the pipeline from running when the only changes are to the README.md file, as there is no need to build because of them.

There are two minor issues with the method I have used. The first is that if you check out a new branch, the testing pipeline will run, but that's not too bad. The other is that if you merge into the main branch, the testing pipeline will run its build test again, but it won't run the production pipeline. Below are the triggers I used for my two pipelines to get everything to work:

azure-pipelines.yaml (main production pipeline)

trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - README.md

pr: none

azure-pipelines1.yaml (testing pipeline)

trigger:
  branches:
    include:
      - "*"
    exclude:
      - main
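
One small note: since my repo is hosted on GitHub, the pull request behavior described above could also be made explicit with a pr block in the testing pipeline rather than relying on defaults. A minimal sketch, assuming PRs only ever target main:

pr:
  branches:
    include:
      - main # run the testing pipeline for pull requests that target main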

Bandit Testing

My pipeline modifications do not end with triggers: I have also added Bandit SAST testing. This addition runs security linting directly against my Python code and tells me if there are any vulnerabilities or other issues.

I had a tough time integrating it into my pipeline, as the PythonScript@0 task was giving me errors. I tried a few troubleshooting steps and even changed the operating system of the agent to Windows to see if that might help, but it did not. I eventually figured it out by using the UsePythonVersion@0 task and adding a script task after it. I also found out that, so long as you stay on the same Azure agent, all installed dependencies persist, so you can run pip install bandit and use it later in your pipeline.

My Bandit code:

- job: "Test_Python"
  displayName: "Test Python"
  pool:
    vmImage: "ubuntu-latest"
  steps:
    - task: UsePythonVersion@0
      inputs:
        versionSpec: "3.6" # Specify Python version
    - script: |
        pip install --upgrade bandit
        bandit -r $(Build.SourcesDirectory)/app.py -f json | tee $(Build.ArtifactStagingDirectory)/bandit-output.json
      displayName: "Bandit Test" # Install bandit, run it against the specified file, and tee the output as JSON for more data
    - task: PublishPipelineArtifact@1 # publish output
      inputs:
        targetPath: $(Build.ArtifactStagingDirectory)
        artifactName: Output

Within my artifacts, you can now see both the hadolint and Bandit output. The following is what the Bandit report looks like in JSON format, which is the format I prefer when sending it to a file, as it gives the most data.

{
  "errors": [],
  "generated_at": "2021-11-03T23:12:20Z",
  "metrics": {
    "/home/vsts/work/1/s/app.py": {
      "CONFIDENCE.HIGH": 0.0,
      "CONFIDENCE.LOW": 0.0,
      "CONFIDENCE.MEDIUM": 1.0,
      "CONFIDENCE.UNDEFINED": 0.0,
      "SEVERITY.HIGH": 0.0,
      "SEVERITY.LOW": 0.0,
      "SEVERITY.MEDIUM": 1.0,
      "SEVERITY.UNDEFINED": 0.0,
      "loc": 29,
      "nosec": 0
    },
    "_totals": {
      "CONFIDENCE.HIGH": 0.0,
      "CONFIDENCE.LOW": 0.0,
      "CONFIDENCE.MEDIUM": 1.0,
      "CONFIDENCE.UNDEFINED": 0.0,
      "SEVERITY.HIGH": 0.0,
      "SEVERITY.LOW": 0.0,
      "SEVERITY.MEDIUM": 1.0,
      "SEVERITY.UNDEFINED": 0.0,
      "loc": 29,
      "nosec": 0
    }
  },
  "results": [
    {
      "code": "39 if __name__ == '__main__':\n40     app.run(host='0.0.0.0')\n41 \n",
      "filename": "/home/vsts/work/1/s/app.py",
      "issue_confidence": "MEDIUM",
      "issue_severity": "MEDIUM",
      "issue_text": "Possible binding to all interfaces.",
      "line_number": 40,
      "line_range": [40],
      "more_info": "https://bandit.readthedocs.io/en/latest/plugins/b104_hardcoded_bind_all_interfaces.html",
      "test_id": "B104",
      "test_name": "hardcoded_bind_all_interfaces"
    }
  ]
}
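
One caveat worth flagging in my own script: because Bandit's output is piped through tee, the step's exit status is tee's, so the job passes even when Bandit reports findings. If I wanted findings to fail the build, a small tweak like this sketch should work, since Bandit exits non-zero when it reports issues (assuming the default bash shell on the Ubuntu agent):

- script: |
    pip install --upgrade bandit
    set -o pipefail  # propagate Bandit's exit code through the pipe to tee
    bandit -r $(Build.SourcesDirectory)/app.py -f json | tee $(Build.ArtifactStagingDirectory)/bandit-output.json
  displayName: "Bandit Test (fail on findings)"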

Release Pipelines

The final major thing I did this week was construct a Release pipeline. Releases in Azure DevOps have only recently been adapted to YAML, and apparently the original GUI-based Releases still have better support in some areas. I first tested this out manually and saw some improvements, such as showing a release number when deploying my application to Azure App Service.

Releases Versions

Learning about this, I realized the potential I had here: I could use this GUI Release pipeline to deploy to production and use my YAML code to continuously deploy to a staging environment. As such, I configured my application to automatically push to a container registry and then deploy to the staging environment, with the push to the Docker registry triggering the Release pipeline.

Release Versions in Pipeline

Releases Pipeline

However, this Release pipeline has Gates, or conditions that must be satisfied before a deployment, as found in normal DevOps environments. For now, I have it so that only administrators can approve a deployment to the production environment, as shown below:

Release Pipeline Gates

I plan on configuring these gates later using resource queries on my new staging environment, such as Datadog integrations or Azure resource queries. You can do a lot here and can even enable continuous deployment based on these parameters, but for now I will stick with manual deploys.
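
These approvals live in the Release pipeline GUI, but as an aside, if I ever move the production deployment into YAML, a similar manual approval could be expressed there with the built-in ManualValidation task. A minimal sketch (the notify address is a placeholder):

- job: "Wait_For_Approval"
  displayName: "Manual approval before production"
  pool: server # ManualValidation must run in an agentless (server) job
  timeoutInMinutes: 1440 # give approvers up to a day to respond
  steps:
    - task: ManualValidation@0
      inputs:
        notifyUsers: "admin@example.com" # placeholder: whoever is allowed to approve
        instructions: "Verify staging looks healthy, then approve to deploy to production."
        onTimeout: "reject" # fail the run if nobody responds in time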

Azure App Service Deployment Groups

Staging Environment

To simulate a real Blue/Green deployment environment, where teams can flip from a standby environment to production almost instantly, I decided to create a copy of my App Service. Azure does have blue/green switching built into App Service via deployment slots, but that requires an upgraded plan, something I cannot afford.
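
For reference, if I did have a plan that supports deployment slots, the blue/green flip itself could be scripted in the pipeline with the AzureAppServiceManage task. A minimal sketch, where the app and resource group names are placeholders and I am reusing my existing service connection:

- task: AzureAppServiceManage@0
  displayName: "Swap the staging slot into production"
  inputs:
    azureSubscription: "Resource Manager Capstone Final" # existing service connection
    action: "Swap Slots"
    webAppName: "capstone-app" # placeholder app name
    resourceGroupName: "capstone-rg" # placeholder resource group
    sourceSlot: "staging" # the slot to promote
    swapWithProduction: true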

Release Pipeline Gates

To keep track of deployments to Staging, I created a new Environment in Azure DevOps, which shows previous deployments to this environment. Any resource can be added to this deployment, as it is logically grouped with just a tag. I am unable to use the "Deployment Groups" function of the Release pipeline because those target virtual machines (servers) rather than a PaaS App Service, so this will do in that respect.

Finally, I just want to post the current code for my deployment pipeline, which shows deployment to staging with the new deploy task.

trigger:
  branches:
    include:
      - main
  paths:
    exclude:
      - README.md

pr: none

variables:
  buildConfiguration: "Release"
  webRepository: "capstonefinal"
  tag: "$(Build.BuildId)"

stages:
  - stage: "Test"
    displayName: "Testing"
    jobs:
      - job: "Test"
        displayName: "Test Job"
        pool:
          vmImage: "ubuntu-20.04"
        steps:
          - task: CmdLine@2
            displayName: hadolint
            inputs:
              script: "cat $(Build.SourcesDirectory)/Dockerfile | docker run --rm -i -v $(Build.SourcesDirectory)/hadolint.yaml:/.config/hadolint.yaml hadolint/hadolint > $(Build.ArtifactStagingDirectory)/output.txt && cat $(Build.SourcesDirectory)/Dockerfile | docker run --rm -i -v $(Build.SourcesDirectory)/hadolint.yaml:/.config/hadolint.yaml hadolint/hadolint"
              workingDirectory: "$(Build.SourcesDirectory)"
            continueOnError: true
          - task: WhiteSource@21
            inputs:
              projectName: "CapstoneFinal"
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: $(Build.ArtifactStagingDirectory)
              artifactName: Hadolint Output

      - job: "Test_Python"
        displayName: "Test Python"
        pool:
          vmImage: "ubuntu-latest"
        steps:
          - task: UsePythonVersion@0
            inputs:
              versionSpec: "3.6"
          - script: |
              pip install --upgrade bandit
              bandit -r $(Build.SourcesDirectory)/app.py -f json | tee $(Build.ArtifactStagingDirectory)/bandit-output.json
            displayName: "Bandit Test"
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: $(Build.ArtifactStagingDirectory)
              artifactName: Output

  - stage: "Build"
    displayName: "Build and push"
    dependsOn: Test
    jobs:
      - job: "Build"
        displayName: "Build job"
        pool:
          vmImage: "ubuntu-20.04"
        steps:
          - task: Docker@2
            displayName: "Build and push the image to container registry"
            inputs:
              command: buildAndPush
              buildContext: $(Build.Repository.LocalPath)
              repository: thylaw/$(webRepository)
              dockerfile: "$(Build.SourcesDirectory)/Dockerfile"
              containerRegistry: "DockerHub Registry Connection"
              tags: |
                $(tag)

  - stage: "Deploy"
    jobs:
      - deployment: "Pull_Container"
        displayName: "Deploy_to_Staging"
        pool:
          vmImage: "ubuntu-20.04"
        variables:
          - group: Release
        environment: "Staging"
        strategy:
          runOnce:
            deploy:
              steps:
                - download: none
                - task: AzureWebAppContainer@1
                  inputs:
                    appName: $(WebAppNameStaging)
                    azureSubscription: "Resource Manager Capstone Final"
                    imageName: docker.io/thylaw/capstonefinal:$(build.buildId)

Conclusion

This week I spent most of my time improving what I have, with the intent of better integrating DevOps methodologies into my project. Overall, I believe it was mostly a success. I am going to do a lot more reading and experimenting with what I already have, and I hope to add more SAST/DAST scanning and other security concepts to my pipeline. I also want to do more with Datadog and other parts of Azure. Kubernetes and Terraform IaC are both unlikely to be added at this point, but I do hope to hammer in lots of security concepts as part of my stretch goals.

See ya next week!