Capstone Project: Week 5, New Tools


Welcome to Week 5 of my Capstone Project! This week was primarily focused on learning rather than building anything, but I still wanted to demonstrate something. I am still working with the OWASP JuiceShop application this week, but I expect to transition either to an application I create myself or to a much simpler vulnerable web application in the near future. This is because I want to demonstrate my pipeline detecting an error and me fixing that error so the build passes through the pipeline successfully.

Attempting to fix OWASP JuiceShop would be a Capstone Project in itself!

On another note about my capstone, I am debating not deploying it to a virtual machine. After further research, I believe that for most applications, companies are utilizing PaaS (Azure App Service), since a lot of functionality comes built in and there are fewer potential issues than with IaaS (Azure Virtual Machines).

Finally, I have spent most of my time this weekend applying for jobs, so I did not get as far as I had expected, but I believe I still made a decent amount of progress!

Without further delay, I will demonstrate some of what I learned this week below.


Hadolint

For my CI/CD pipeline, I want to incorporate testing of all sorts. Extensive security testing using SAST and DAST is a stretch goal for this project, but unit tests and linters are definitely within the scope of my initial project.

Hadolint, as you may expect, is a linter: a tool that automatically checks source code for programmatic and stylistic errors. Linters fall in the realm of SAST (Static Application Security Testing). Hadolint is a very popular linter that scans Dockerfiles for errors and bad practices, and it is very commonly incorporated into CI/CD pipelines that use Docker.

Unfortunately, Hadolint does not have built-in support for Azure DevOps, so I had to figure out how to incorporate it myself. I figured the best way to utilize Hadolint would be to pull down the Hadolint container (as recommended by the documentation) instead of installing it on the Azure agent manually. This is a much simpler process and less prone to errors, though I still had my fair share of headaches with it.

My first guess at getting this to work led me to create the following in my azure-pipelines.yml file:

First Attempt of Hadolint in Pipeline

This method unfortunately did not work, though it did successfully pull the container. I had extracted it from the one question in the GitHub community that asked about Azure DevOps. I initially believed the issue was with the '<' character, as I had never used it before, but that guess was wrong. Further testing showed that the '<' character is a legitimate way to feed a file into a container, but I am still confused as to how it works.
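For context, the task I was experimenting with looked roughly like this (a minimal sketch of the redirection approach, not the exact YAML from the screenshot; the displayName is just illustrative):

- task: CmdLine@2
  displayName: Run Hadolint
  inputs:
    # the '<' redirects the Dockerfile into the container's standard input
    script: "docker run --rm -i hadolint/hadolint < Dockerfile"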

After a lot of local testing, I managed to come up with the following modification to the script field under inputs, and it worked!

script: "cat $(Build.SourcesDirectory)/Dockerfile | docker run --rm -i hadolint/hadolint"

To break the line down: the script tag simply states what command you want to run using CMD (the task above). cat is a Linux command (also available as an alias in PowerShell) for reading the contents of a file, among other things. I used it on the Dockerfile in the SourcesDirectory (very important for finding local files) and piped the result to a docker run command, which creates and runs a container from an image. The -i argument keeps standard input open so the piped Dockerfile reaches the container, and --rm removes the container when it is done doing its job. Finally, hadolint/hadolint is simply the container image referenced in the documentation that gets pulled down. I get the following output:

Working but Error

As you can see, I got some feedback plus an exit code in my pipeline. The output was the same as when I tested locally, except now there is an error code that stops my pipeline. This was a problem, since the issues detected weren't actually errors! I want my pipeline to stop on errors, not on info-level output.

I had a very important question: was this an issue with how Azure DevOps responds to output from Hadolint, or did I do something wrong? My first thought was to test by fixing the Dockerfile and running it again. However, I quickly decided to instead try passing in a configuration file to see whether some options could affect the output. In the meantime, I decided to try something new with Azure DevOps by adding the following line to my task:

continueOnError: true

Now, even if the task errors out, the pipeline will continue on. This does not solve my issue, but it will be nice for future tests.
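In the task YAML, this property sits directly on the task rather than under inputs, so the step now looks something like this (a sketch assuming the CmdLine@2 task; the displayName is just illustrative):

- task: CmdLine@2
  displayName: Run Hadolint
  continueOnError: true  # the pipeline keeps going even if this step reports a failure
  inputs:
    script: "cat $(Build.SourcesDirectory)/Dockerfile | docker run --rm -i hadolint/hadolint"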

To figure out how to pass in a configuration file, I worked in my local environment for a quicker feedback loop. I first constructed a hadolint.yaml file, which is the default file that Hadolint picks up, just as Docker looks for a Dockerfile by default.

The contents were very simple, as I just wanted to ignore the issues that came up and see if things worked:

ignored:
  - DL3059

After some more experimentation, I came up with the following command that worked well!

cat Dockerfile | docker run --rm -i -v C:\Users\Logan\Docker\test2\hadolint.yaml:/.config/hadolint.yaml hadolint/hadolint

Unfortunately, this won't work in Azure DevOps due to the pathing, so I corrected it like so:

cat $(Build.SourcesDirectory)/Dockerfile | docker run --rm -i -v $(Build.SourcesDirectory)/hadolint.yaml:/.config/hadolint.yaml hadolint/hadolint

This final command was the result of a lot of experimentation, but thankfully it seems to have worked:

Working Config File Ignores Errors

Now that I had my configuration file working, I actually needed to fix the issue rather than ignore it. At least I had figured out that the problem was with how Hadolint output its logs and how Azure DevOps responded to them, not any issue with my work. So, after researching the documentation some more, I came up with the following solution: modifying the hadolint.yaml file to contain only the following option:

failure-threshold: error

This does not produce any visible change in my local dev environment, but my goal is for Hadolint and Azure DevOps to only fail the pipeline on an actual error. To test this, I went into the Dockerfile of OWASP JuiceShop and intentionally modified a command so that it would produce an error, as referenced by the rules table in the documentation.

Purposefully Erroring Out Dockerfile

You might not see the issue in this screenshot, but it is actually the sudo command placed at the beginning of the RUN command. The Hadolint documentation specifies the following as error DL3004:

Do not use sudo as it leads to unpredictable behavior. Use a tool like gosu to enforce root.

With this now set, I pushed the changes to my configuration file and Dockerfile and watched the pipeline run. As expected, it produced an error!

Error DL3004

As the final test, I removed the error from the Dockerfile and reran the pipeline. As hoped, the pipeline completed and did not error out, even though it was still reporting info-level output!

Working Hadolint

Now the pipeline will only stop on a finding that could lead to real errors or unexpected behavior, while letting style, info, and warning level findings through without failing the build. I now have a successful working instance of Hadolint within my pipeline that I can use for future reference!
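Putting the pieces together, the final setup looks roughly like this (a trimmed sketch; the displayName is just illustrative):

# azure-pipelines.yml (the Hadolint step only)
- task: CmdLine@2
  displayName: Lint Dockerfile with Hadolint
  inputs:
    script: "cat $(Build.SourcesDirectory)/Dockerfile | docker run --rm -i -v $(Build.SourcesDirectory)/hadolint.yaml:/.config/hadolint.yaml hadolint/hadolint"

# hadolint.yaml
failure-threshold: error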

Ansible

After finishing learning the necessary components of Docker, I decided to learn more about Ansible. Ansible is a configuration management tool used to manage a large number of hosts. Ansible and Ansible Playbooks, which are lists of plays that can be executed automatically, are often used in DevOps for quick alterations to servers. I believe that is an important skill to have, so I started a very long and extensive course on Ansible found here.

I won't go over all of the details of the course, but there are a lot of interesting aspects to it. For instance, I used Docker Compose to spin up a virtual environment consisting of many containers running Ubuntu and CentOS distributions:

Docker Compose
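The compose file itself is essentially just a list of services, one per lab host. A stripped-down sketch (with hypothetical image names, not the exact file from the course) would look something like this:

version: "3"
services:
  ubuntu-c:                # control host that runs Ansible
    image: ubuntu:20.04    # hypothetical image; a real lab also needs SSH running in each container
    command: sleep infinity
  ubuntu1:
    image: ubuntu:20.04
    command: sleep infinity
  centos1:
    image: centos:7
    command: sleep infinity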

I have learned a lot, including how to define hosts, set up SSH keys for communication between hosts, and run commands against those hosts. Hosts files are written in an INI format by default, though they can be done in YAML as well. Host entries can also use ranges, per-host arguments, and group variables, like below:

# Example Ansible Hosts File

[control]
ubuntu-c

[centos] #group
centos1:2222 ansible_user=root # specify direct arguments for specific hosts, such as to run on a certain port
centos[2:3] # don't have to list out names of hosts if they have a common name

[ubuntu]
ubuntu[1:3]

[ubuntu:vars] #vars here will be passed to all members of ubuntu group
ansible_become=true
ansible_become_pass=password

[linux:children] # Groups can contain other groups
centos
ubuntu

You can reference this hosts file with a lot of different commands. For example, to ping all of the hosts in your hosts file:

`ansible all -m ping`

Create a file on all hosts: `ansible all -m file -a 'path=/tmp/test state=touch'`
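If the hosts file lives somewhere other than the default location, the same commands can point at it explicitly with the -i flag, for example: `ansible all -i hosts -m ping`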

Pointing these features out shows how useful Ansible can be for my project just from the command line: I can very quickly make a change across all of my different hosts. However, the true bread and butter of Ansible for CI/CD is Ansible Playbooks.

An Ansible playbook looks like this:

# The minus in YAML indicates a list item. The playbook contains a list
# of plays, with each play being a dictionary
-

  # Hosts: where our play will run and options it will run with

  # Vars: variables that will apply to the play, on all target systems

  # Tasks: the list of tasks that will be executed within the play, this section
  #        can also be used for pre and post tasks

  # Handlers: the list of handlers that are executed as a notify key from a task

  # Roles: list of roles to be imported into the play

As an example of a working Ansible playbook, here is one I made to change the Message of the Day on all of my servers:

- hosts: linux
  vars:
    motd_centos: "Welcome to CentOS Linux - Ansible Rocks\n"
    motd_ubuntu: "Welcome to Ubuntu Linux - Ansible Rocks\n"

  tasks:
    - name: Configure a MOTD (message of the day)
      copy:
        content: "{{ motd_centos }}"
        dest: /etc/motd
      notify: MOTD changed
      when: ansible_distribution == "CentOS"

    - name: Configure a MOTD (message of the day)
      copy:
        content: "{{ motd_ubuntu }}"
        dest: /etc/motd
      notify: MOTD changed
      when: ansible_distribution == "Ubuntu"

  handlers:
    - name: MOTD changed
      debug:
        msg: The MOTD was changed
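Running the playbook is just a matter of pointing ansible-playbook at the file (the filename here is hypothetical):

ansible-playbook motd_playbook.yaml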

Ansible will output the following to the console, with colors that indicate failure, success (no change), and success (changed):

RUNNING HANDLER [MOTD changed]
ok: [centos1] => {
    "msg": "The MOTD was changed"
}
ok: [centos2] => {
    "msg": "The MOTD was changed"
}
ok: [centos3] => {
    "msg": "The MOTD was changed"
}

I will stop explaining Ansible here, for now at least. I have not managed to use this knowledge practically yet, but I can see where I will. I can use it to automatically provision infrastructure on deployment, or to configure all of the servers that I push to if I need to modify anything. I expect to get more use out of this in the coming weeks.
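As a rough idea of how this could eventually slot into my pipeline, a deployment stage could run a playbook from a step like the one below. This is only a sketch: it assumes Ansible is available on the agent, and the hosts and site.yml names are hypothetical placeholders.

- task: CmdLine@2
  displayName: Configure servers with Ansible
  inputs:
    # hypothetical inventory and playbook files checked into the repo
    script: "ansible-playbook -i $(Build.SourcesDirectory)/hosts $(Build.SourcesDirectory)/site.yml"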

MsLearn

As part of learning Azure DevOps, I have been going through courses offered on docs.microsoft.com. These courses help expand my knowledge of Azure DevOps beyond just CI/CD pipelines. I plan to incorporate what I learn in these tutorials into my project and expand on them.

Sparing most of the details, I have learned how to better incorporate GitHub and a DevOps team workflow into my project. Not only do I want to create a CI/CD pipeline, I also want to show how teams could work together better with this process.

For example, I learned how to create badges on the pipelines to let everyone know whether the main build has succeeded, and how to incorporate the Azure DevOps dashboard into my organization's project:

Azure DevOps Dashboard Example

On GitHub, I figured out some more advanced features for pull requests, such as requiring an administrator to authorize merging a branch, as well as a message displaying whether the pull request has passed the Azure DevOps build, like so:

Pull Request Lock

Pull Request Build

Finally, I also experimented a bit with Azure Test Plans runs and testing for C# projects. Unfortunately, I do not plan to do a C# project; however, it was nice to see test plans in action and how I can incorporate testing.

Azure Test Plans

Conclusion

That about sums up this week! I wish I had gotten more done, but job hunting got in the way, and I expect this coming week to go the same way. Regardless, I am having a lot of fun! Right now my plans for next week are to continue learning Ansible, develop or find a new application to serve as my project, and then integrate everything I have done so far into it.

Thank you for reading!