We've been using Ansible for a few years now to configure a handful of servers, like the Mac minis that serve as Gitlab runners for mobile development. While the Ansible repositories are stored on Gitlab, we've been using an AWX instance to run these Ansible playbooks against our hosts.

Gitlab and Ansible logo

Ansible

If you haven't heard of Ansible before: it's an open-source automation tool/platform used for IT tasks such as configuration management, application deployment, intra-service orchestration and provisioning. As IT systems become more complex and need to scale, automation plays a critical role in DevOps.

Some of the main reasons we chose Ansible are:

  • It's free (open-source)
  • Ansible is very simple to set up and use
  • Very powerful
  • It provides a lot of flexibility
  • It works agentless, so you don't need to install anything on the clients
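To make this concrete: an Ansible playbook is just a YAML file describing the desired state of a group of hosts. A minimal, purely hypothetical example (the group name, package and paths are placeholders, not our actual setup):

```yaml
# site.yml — hypothetical minimal playbook: make sure git is installed
# and a config file is rendered on every host in the "runners" group
- hosts: runners
  become: true
  tasks:
    - name: Install git
      ansible.builtin.package:
        name: git
        state: present

    - name: Deploy runner configuration from a template
      ansible.builtin.template:
        src: runner-config.j2
        dest: /etc/runner/config
        mode: "0644"
```

Because Ansible connects over plain SSH, running this against a host requires nothing more than SSH access and Python on the target.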

AWX

AWX is essentially a web-based user interface on top of Ansible that lets you schedule Ansible jobs and provides some statistics on the performance of those runs. We noticed our AWX instance needed updates, and over time the standalone AWX server has been replaced by Red Hat's awx-operator, which requires a different deployment method than our current setup.

Screenshot of the AWX dashboard

The AWX instance is not the easiest to configure and, in my opinion, does not have the best UX. It takes some time to understand how everything is connected. Additionally, integrating other tools like Git repositories, authentication providers, and so on is not the easiest integration work I've done in my career.

So we ended up investigating whether we could run our Ansible jobs from Gitlab CI/CD, as this could improve both the UX and the integrations. As an added win, we wouldn't need to maintain another application.

Gitlab CI

After a quick search, it became clear that running Ansible from Gitlab isn't even that hard, so I went ahead and created a new repository to test the idea.

I copied the Ansible playbooks from our current repository into the new one, and created a very basic .gitlab-ci.yml file.

stages:
  - run

image:
  name: registry.gitlab.com/torese/docker-ansible

variables:
  ANSIBLE_HOST_KEY_CHECKING: 'false'
  ANSIBLE_FORCE_COLOR: 'true'
  ANSIBLE_PYTHON_INTERPRETER: /usr/bin/python3

before_script:
  - ansible --version

run:
  stage: run
  script:
    - ansible-playbook main.yml --vault-password-file="$ANSIBLE_VAULT_PASSWORD"

In the example you can see that I'm only using one step: I just run the playbook. By using a Docker image that already contains Ansible, I don't need any additional configuration except for loading our Ansible vault password from the CI/CD secrets in Gitlab.
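One subtlety worth noting: --vault-password-file expects a *path*, not the password itself. This works directly if $ANSIBLE_VAULT_PASSWORD is defined as a Gitlab CI/CD variable of type "File", because Gitlab then puts the secret in a temporary file and exposes its path in the variable. If you store it as a regular variable instead, you'd first have to write it out yourself, along these lines (a sketch, assuming a regular variable):

```shell
# Hypothetical workaround when the vault password is stored as a regular
# (non-file) CI/CD variable: write it to a local file so Ansible has a
# path to read the password from.
printf '%s' "${ANSIBLE_VAULT_PASSWORD:-}" > .vault_pass
chmod 600 .vault_pass
```

You would then pass --vault-password-file=.vault_pass to ansible-playbook instead of the variable itself.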

As this worked, I wanted to add some additional rules from a security point of view:

  • Code quality/syntax should be checked before running
  • No one should be able to directly deploy changes to our production environment
  • Audit logs of changes should be available

Gitlab flow

To implement these rules, I've created a Gitlab flow where you can have one or more development branches. Once a branch is ready and you want to push the changes to the staging environment, you do this through a merge request on Gitlab. Once you want the code in production, you create a new merge request from the main branch to the production branch. This way, we automated the deployment while keeping control and auditability through merge requests.

Gitlab flow with 3 branches and merge flow

Setup and before_script

In the .gitlab-ci.yml file, I first define the different stages; the purpose of each stage will become clear throughout the article. After that, I added some additional configuration, like the Docker image I'm using for the deployment.

In the before_script, I install the ansible-lint package, as it's not included in the Docker image I'm using (I will soon create our own image that includes both ansible and ansible-lint to slim this step down). After that, the versions of ansible and ansible-lint are printed in the output, which can come in handy for troubleshooting.

stages:
  - verify
  - prestaging
  - staging
  - predeploy
  - deploy

image:
  name: registry.gitlab.com/torese/docker-ansible

variables:
  ANSIBLE_HOST_KEY_CHECKING: 'false'
  ANSIBLE_FORCE_COLOR: 'true'
  ANSIBLE_PYTHON_INTERPRETER: /usr/bin/python3

before_script:
  - yum install ansible-lint -y
  - ansible-lint --version
  - ansible --version
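As a side note, the custom image mentioned above doesn't have to be complicated. A minimal sketch, assuming the base image is yum-based (as the before_script suggests), could be as simple as:

```dockerfile
# Hypothetical Dockerfile for a custom image that bundles ansible-lint
# on top of the existing Ansible image, removing the need for the
# yum install step in before_script
FROM registry.gitlab.com/torese/docker-ansible
RUN yum install -y ansible-lint && yum clean all
```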

Verify commits

Each commit that is pushed to Gitlab should be free of syntax errors. So for each commit, a verify pipeline is triggered that runs ansible-lint and an ansible-playbook --syntax-check.

# Verify syntax
ansible-verify:
  stage: verify
  script:
    - ansible-lint -v *.yml
    - ansible-playbook --inventory inventory/production --syntax-check main.yml
  rules:
    - if: '$CI_BUILD_BEFORE_SHA == "0000000000000000000000000000000000000000"'
      when: always
    - if: '$CI_COMMIT_BRANCH != "main" && $CI_COMMIT_BRANCH != "production"'
      when: always

Merge request to main (staging)

When you're done with your change, you can push it to staging through a merge request. The verify step will run again, but the pre-staging and staging pipelines will be triggered as well.

In the pre-staging pipeline, an Ansible ping is performed to make sure the hosts are reachable and online. If this fails, the pipeline is cancelled. If the hosts are OK, we can go ahead, run the Ansible playbooks on them, and merge the code into main.

# Make sure all staging hosts are online and can be managed, if not we stop the pipeline
prestaging:
  stage: prestaging
  script:
    - ansible -i inventory/staging --vault-password-file="$ANSIBLE_VAULT_PASSWORD" all -m ping
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"'
      when: always

# Test playbook on staging environment, only run on merge requests to main
staging:
  stage: staging
  script:
    - ansible-playbook -i inventory/staging --vault-password-file="$ANSIBLE_VAULT_PASSWORD" main.yml
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"'
      when: always
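For completeness: the inventory/staging and inventory/production paths used above are just standard Ansible inventories, one per environment. A hypothetical staging inventory in INI format (host names and variables below are placeholders, not our real infrastructure):

```ini
# inventory/staging/hosts — hypothetical example inventory
[runners]
runner-01.staging.example.com
runner-02.staging.example.com

[runners:vars]
ansible_user=deploy
ansible_python_interpreter=/usr/bin/python3
```

Keeping the two inventories in separate directories means the staging and production jobs can share the exact same playbook and differ only in the --inventory flag.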

💡 When creating the merge request in Gitlab, you can tick the checkbox to automatically merge to main when the pipeline succeeds.

Merge request to production

When the code is validated on the staging infrastructure, it's ready to be deployed to the production environment as well. For this step I'm using the same principle as for the development-to-main flow: you create a merge request from main to production. The only difference is that Ansible will now use the production inventory of hosts.

# Make sure all production hosts are online and can be managed by Ansible, if not stop the pipeline
predeploy:
  stage: predeploy
  script:
    - ansible --vault-password-file="$ANSIBLE_VAULT_PASSWORD" --inventory inventory/production all -m ping
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"'
      when: always

# Deploy playbooks to production
deploy:
  stage: deploy
  script:
    - ansible-playbook --vault-password-file="$ANSIBLE_VAULT_PASSWORD" --inventory inventory/production main.yml
  rules:
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "production"'
      when: always

Protect the main and production branches

It's important that the main and production branches are protected so that no one can commit directly to them. From a security and availability point of view, it's important that the correct pipelines are run on the merge requests.

Protected branch settings in Gitlab

In addition, you can set up approval rules for merge requests to have even more control over the approval of changes to your infrastructure.
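One way to enforce this, for example, is to combine protected branches with a CODEOWNERS file and Gitlab's "require approval from code owners" setting, so a specific group must review every change to the playbooks and inventories (the group name below is hypothetical):

```
# .gitlab/CODEOWNERS — hypothetical example
*.yml        @ops-team
inventory/   @ops-team
```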

Conclusion

Running Ansible from Gitlab is pretty easy and gives you a lot of control over deployments while maintaining audit logs and code quality.

I would recommend using this setup to improve the way you're automating configuration management with Ansible.