Deployment
This section describes how to deploy this project to the Azure cloud as a Docker container, and how to automate that process. Automating deployment is not essential, but it saves a lot of time that would otherwise be spent manually updating the builds on your chosen cloud service and restarting the running project on the updated build, and it means the client gets feedback on changes as soon as they are implemented.
Prerequisites
1. An Existing Project
The Django files for the project must exist prior to deployment, with the bare minimum being that python manage.py runserver
runs without erroring. Of course, you could try to dockerise and deploy a non-functional codebase, but why would you?
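If you want a quick sanity check before going any further, here is a minimal sketch, assuming a standard Django layout (adjust the path if manage.py lives in a subdirectory such as myvenv/):

python manage.py check        # verify the project configuration loads without errors
python manage.py runserver    # the development server should start on 127.0.0.1:8000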
2. Git (and access to the repository)
You may install Git here. You will need Git installed on the system for version control.
You also need a GitHub account, which you can create from
GitHub.
You require access to the repository so that you have push/pull rights to the codebase,
which is a GitHub repository.
3. Azure CLI
You will need the Azure Cloud Shell or an installation of the Azure CLI to complete some of
the steps necessary for automated deployment. You can install the Azure CLI
from here.
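Once installed, a quick check that the CLI works and that you are signed in to the right Azure account:

az --version   # confirm the CLI is installed
az login       # opens a browser prompt to authenticate against your Azure account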
4. Azure Container Registry
You need this registry to hold your Docker image in the Azure cloud, as well as all previous
builds. It acts like a repository for Docker images!
You can create one using
this guide. Remember which resource group is used for deployment, as this is needed when
you automate the deployment using GitHub workflows.
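If you prefer the CLI to the portal guide, here is a minimal sketch of creating the resource group and registry from the Azure CLI; <resource-group-name> and <registry-name> are placeholders for names you choose yourself:

az group create --name <resource-group-name> --location uksouth
az acr create --resource-group <resource-group-name> --name <registry-name> --sku Basic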
5. Docker
You need Docker to build an image from your codebase; you can install it
here.
Notice
Keep in mind
You will need to ensure that 'Actions' is enabled for your repository.
You can make sure this is the case by going to Settings
> Actions > Actions permissions and selecting
'Allow all actions'.
Dockerizing
Turning the project into a Docker image is not an essential part of the
project, but it does offer a level of safety and encapsulation from the
base system. This eliminates the cursed "it works on my machine"
problem.
The Docker container will provide its own environment for the project to
run inside without the overhead of a virtual machine.
This only serves as a very brief description of Docker. There
are countless reasons for using Docker in the deployment of this
project and you can read more about them here.
Setting up Docker
Setup
Before being able to create a Docker container for your project,
you will need to have Docker installed locally.
Please choose the correct installation guide for your machine from
here.
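Once Docker is installed, a quick way to confirm the daemon is running before building anything:

docker --version        # confirm the Docker client is installed
docker run hello-world  # pulls and runs a tiny test image to confirm the daemon works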
Making the Dockerfile
Here's the Dockerfile that you can use to set up the container...
Docker will use this file as guidance on how to build an image for our
project. We need to place it at the root of the project directory, with the
name Dockerfile.
ucl-ixn-team19/
    .gitignore
    Dockerfile
    README.md
    requirements.txt
    myvenv/
        manage.py
        lib/..
        bin/..
        mysite/
            __init__.py
            settings.py
            urls.py
            wsgi.py
        website/
            __init__.py
            admin.py
            apps.py
            constants.py
            forms.py
            template_filter.py
            urls.py
            migrations/..
            templates/..
            tests/..
Contents of the Dockerfile:
FROM python:3.8.3-alpine

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# copy project
COPY . /usr/src/app

# install dependencies
RUN python -m pip install -r requirements.txt

EXPOSE 8000

# commands issued to container
RUN python myvenv/manage.py clearsessions
RUN python myvenv/manage.py makemigrations
RUN python myvenv/manage.py migrate

CMD ["python", "myvenv/manage.py", "runserver", "0.0.0.0:8000"]
What does any of this mean?...
Each of the lines in the Dockerfile is called a directive.
FROM python:3.8.3-alpine tells Docker which base image to build our container on.
WORKDIR /usr/src/app sets the working directory; all following directives are executed there.
ENV PYTHONDONTWRITEBYTECODE 1 stops Python from writing .pyc bytecode files inside the container.
ENV PYTHONUNBUFFERED 1 tells Docker to send Python output straight to the terminal instead of buffering it in the standard output buffer.
RUN python -m pip install -r requirements.txt installs all dependencies from requirements.txt
Further...
RUN python myvenv/manage.py clearsessions
RUN python myvenv/manage.py makemigrations
RUN python myvenv/manage.py migrate
are all run in order to prepare the Django project for use and to prevent
any errors due to changes in the codebase.
Running the project as a container
Building the Docker image
Use this command, from the root of the project directory, to build an
image from the project files:
docker build . -t <image-name>:latest
See documentation here, and remember to replace <image-name>.
-t names the image and tags it as latest
Running the Docker image
Use this command to run a container based on the image we created:
docker run --name <image-name> -d -p 8000:8000 <image-name>:latest
See documentation here.
--name sets the name of the Docker container
-d runs the container in detached mode, i.e. in the background
-p 8000:8000 maps port 8000 of the container to port 8000 on localhost
Now go to localhost:8000 in your browser, where the application is running!
Congratulations!!
You have successfully configured the application to run in its own isolated
Docker container!
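If you prefer the terminal to the browser, a simple request against the mapped port (assuming curl is available) should come back with a response from the Django dev server:

curl -I http://localhost:8000   # expect an HTTP status line from the running application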
Killing the container
In order to stop the application (remember that it is running in the background),
we first need to find the running container. Issue the following command to list all running
containers (see documentation here):
docker ps
This would produce something like...
CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS         PORTS                    NAMES
841d06f865bc   testing:latest   "python myvenv/manag…"   9 minutes ago   Up 9 minutes   0.0.0.0:8000->8000/tcp   testing
Remember to select the CONTAINER ID, in this case 841d06f865bc.
And then issue:
docker kill <container-id>
This stops the running container; see documentation here.
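If you also want to remove the stopped container, for example to reuse its name for a new run, docker rm does that; a small sketch using the same <container-id> from docker ps:

docker kill <container-id>   # stop the running container
docker rm <container-id>     # remove it so the name can be reused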
Deploying to Azure
This guide will describe the process of deploying the application
to Azure using GitHub Actions. We have chosen to deploy the application
to Azure, but a Docker container is portable and may be run at
any endpoint, so feel free to deploy anywhere you see fit!
Likewise, keep in mind that we have already looked at how to run the
project as a container locally, so there is nothing to stop you from
running it on premises!
Notice
Keep in mind
You will have to adjust the endpoint permissions for the application...
This involves adjusting the allowed hosts in myvenv/mysite/settings.py
and also changing the final directive in the Dockerfile.
settings.py
ALLOWED_HOSTS = ['127.0.0.1', 'localhost']
Would become
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', 'your-endpoint-ip']
Dockerfile
...
CMD ["python", "myvenv/manage.py", "runserver", "0.0.0.0:8000"]
Would change to
...
CMD ["python", "myvenv/manage.py", "runserver", "your-endpoint-ip:port"]
With this, you now have the potential to deploy your application anywhere!
Likewise, if you choose to deploy the project to Azure, you will need to add the
public IP address of the container instance to the allowed hosts.
Automating the deployment
Updating the Dockerfile
We need to make a small change to our Dockerfile so that the application listens on port 80
(and the EXPOSE directive is updated to match).
Dockerfile
...
CMD ["python", "myvenv/manage.py", "runserver", "0.0.0.0:8000"]
Would change to
...
CMD ["python", "myvenv/manage.py", "runserver", "0.0.0.0:80"]
I.e. our final Dockerfile is:
FROM python:3.8.3-alpine

# set work directory
WORKDIR /usr/src/app

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# copy project
COPY . /usr/src/app

# install dependencies
RUN python -m pip install -r requirements.txt

EXPOSE 80

# commands issued to container
RUN python myvenv/manage.py clearsessions
RUN python myvenv/manage.py makemigrations
RUN python myvenv/manage.py migrate

CMD ["python", "myvenv/manage.py", "runserver", "0.0.0.0:80"]
Credentials for the GitHub repo
We need the following credentials noted down somewhere in order for our deployment to work:
1. AZURE_CREDENTIALS
2. REGISTRY_LOGIN_SERVER
3. REGISTRY_USERNAME
4. REGISTRY_PASSWORD
5. RESOURCE_GROUP
We will add them to our repository secrets.
Keep in mind that our method of authenticating access to our Azure resources is through a service principal which is stored as a secret in our GitHub repository. It is documented here.
Collecting credentials
Using the Azure CLI, we will create a service principal to authenticate Azure actions
1. Get the GROUP_ID of the resource group using the name of the group
az group show --name <resource-group-name> --query id --output tsv
We do not need to save the output of this as a GitHub secret; however, it is needed
for the following steps, so it is wise to keep it safe.
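If you are working in a bash shell (the Azure Cloud Shell, for example), one way to keep it safe is to capture it in a variable; <resource-group-name> is still a placeholder for your own group name:

GROUP_ID=$(az group show --name <resource-group-name> --query id --output tsv)
echo $GROUP_ID   # should print an id of the form /subscriptions/<subscription-id>/resourceGroups/<resource-group-name>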
2. Create the service principal using the GROUP_ID from the previous step
az ad sp create-for-rbac --scope <GROUP_ID> --role Contributor --sdk-auth
Which will result in a JSON output similar to:
{
  "clientId": "xxxx6ddc-xxxx-xxxx-xxx-ef78a99dxxxx",
  "clientSecret": "xxxx79dc-xxxx-xxxx-xxxx-aaaaaec5xxxx",
  "subscriptionId": "xxxx251c-xxxx-xxxx-xxxx-bf99a306xxxx",
  "tenantId": "xxxx88bf-xxxx-xxxx-xxxx-2d7cd011xxxx",
  "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
  "resourceManagerEndpointUrl": "https://management.azure.com/",
  "activeDirectoryGraphResourceId": "https://graph.windows.net/",
  "sqlManagementEndpointUrl": "https://management.core.windows.net:8443/",
  "galleryEndpointUrl": "https://gallery.azure.com/",
  "managementEndpointUrl": "https://management.core.windows.net/"
}
We need to save all of this JSON output for the later steps, and also note down the clientId,
which we will need in the upcoming steps.
3. Get the REGISTRY_ID for the container registry by querying the registry name
az acr show --name <registry-name> --query id --output tsv
We do not need to save the output of this as a GitHub secret; however, it is needed
for the following step, so it is wise to keep it safe. Keep in mind that this step assumes
you have already set up a container registry as per the prerequisites.
4. Assign the AcrPush role for the container registry, using the REGISTRY_ID and the clientId from the JSON output
az role assignment create --assignee <clientId> --scope <REGISTRY_ID> --role AcrPush
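Continuing the bash sketch from above, steps 3 and 4 can be chained with shell variables; <registry-name> is your registry and the clientId value comes from the JSON output:

REGISTRY_ID=$(az acr show --name <registry-name> --query id --output tsv)
CLIENT_ID="<clientId from the JSON output>"
az role assignment create --assignee $CLIENT_ID --scope $REGISTRY_ID --role AcrPush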
Storing credentials
Using the Azure CLI, we have collected all the information needed for the automation to work!
We can save these credentials in GitHub secrets by selecting Settings
> Secrets > Actions and choosing
'New Repository Secret'.
We can now populate our GitHub repository secrets as such:
Secret | Value |
---|---|
AZURE_CREDENTIALS | The entire JSON output from when we created the service principal |
REGISTRY_LOGIN_SERVER | The login server for the registry, in lowercase, such as: moorfieldsapp.azurecr.io |
REGISTRY_USERNAME | The clientId from the JSON output |
REGISTRY_PASSWORD | The clientSecret from the JSON output |
RESOURCE_GROUP | The name of the resource group that we used to create the service principal |
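As an alternative to the web UI, the same secrets can be set from the terminal with the GitHub CLI; this is a hedged sketch that assumes gh is installed and authenticated against the repository, and that the service principal JSON was saved to a local file named azure-creds.json:

gh secret set AZURE_CREDENTIALS < azure-creds.json
gh secret set REGISTRY_LOGIN_SERVER --body "moorfieldsapp.azurecr.io"
gh secret set REGISTRY_USERNAME --body "<clientId>"
gh secret set REGISTRY_PASSWORD --body "<clientSecret>"
gh secret set RESOURCE_GROUP --body "<resource-group-name>"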
Creating the workflow
Now that our secrets are in place, we just need to automate the building and deploying of
our project to our container registry using a GitHub action.
1. Select Actions from the UI in your GitHub repository
2. Select set up a workflow yourself from the menu
3. Select a name for your workflow, or leave it as the default main.yml
4. Paste the following YAML contents over the sample code
5. Select Start commit to push your workflow to the repository
Here is the YAML file to paste in; remember that you need to replace <image-name> with the name for your image.
name: Automatic_Deployment

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      # checkout the repo
      - name: 'Checkout GitHub Action'
        uses: actions/checkout@main

      - name: 'Login via Azure CLI'
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - name: 'Build and push image'
        uses: azure/docker-login@v1
        with:
          login-server: ${{ secrets.REGISTRY_LOGIN_SERVER }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - run: |
          docker build . -t ${{ secrets.REGISTRY_LOGIN_SERVER }}/<image-name>:latest
          docker push ${{ secrets.REGISTRY_LOGIN_SERVER }}/<image-name>:latest

      - name: 'Deploy to Azure Container Instances'
        uses: 'azure/aci-deploy@v1'
        with:
          resource-group: ${{ secrets.RESOURCE_GROUP }}
          dns-name-label: ${{ secrets.RESOURCE_GROUP }}${{ github.run_number }}
          image: ${{ secrets.REGISTRY_LOGIN_SERVER }}/<image-name>:latest
          registry-login-server: ${{ secrets.REGISTRY_LOGIN_SERVER }}
          registry-username: ${{ secrets.REGISTRY_USERNAME }}
          registry-password: ${{ secrets.REGISTRY_PASSWORD }}
          name: <image-name>
          location: 'uksouth'
Keep in mind that this automation is triggered when pushing to main; feel free to change this to something else, such as production, if needed!
Finishing up
With this, our automatic deployment is set up!
1. Navigate to your Azure portal over
here.
2. Click on your container instance, recall that you changed <image-name>,
so that is what your container instance will be called!
3. This will open up the dashboard for your container instance! Here you can see its
public IP address, which is where the project is deployed!
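If you would rather query this from the Azure CLI than the portal, a hedged sketch; the container group's name matches the name: field in the workflow, i.e. your <image-name>:

az container show \
  --resource-group <resource-group-name> \
  --name <image-name> \
  --query "{ip: ipAddress.ip, fqdn: ipAddress.fqdn}" \
  --output table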
And that's it! You've gone ahead and automated the project to be built using Docker
and uploaded to the Azure cloud automatically!
Some notes
docker push ${{ secrets.REGISTRY_LOGIN_SERVER }}/<image-name>:latest
pushes the freshly built Docker image to the container registry, from where
the latest image replaces the running container instance!
This is how the application at the endpoint, namely the public IP address of the
container instance, is kept up to date with the latest changes! Here is our deployment endpoint:
20.108.118.162
though this assumes we still have Azure credits in our student accounts.
Further, this is how we chose to deploy our project...
You can deploy this project anywhere, since it is wrapped up as a Docker container
that can be run anywhere!
The possibilities are endless.