In this lab, you will create and deploy a web application on Google Cloud Platform using Docker and Kubernetes.

You will write the code entirely in the cloud using an online code editor provided by GCP. You will store your code in a Git repository hosted in Google Cloud Source Repositories. You will build a Docker container to package the application and store the container in the Google Container Registry. Finally, you will use Kubernetes to deploy your program to a cluster of machines hosted in Google Kubernetes Engine.

While you will use GCP for this lab, most of what you learn would work on other cloud platforms such as AWS, Azure, and Alibaba Cloud, and even on private clouds like OpenStack.

What you need

To complete this lab, you need a Google Cloud Platform (GCP) account. If you already have an account, skip to the next section, Create a project.

If you don't have an account, register for the Google Cloud Platform free trial. The free trial provides free credit that is more than enough to complete this lab.

You won't be billed

When you sign up for the free trial, you are asked to provide your credit card information. This information is used only to verify your identity and let Google know you're not a robot. Your credit card is not charged.

Step 1

Open the free trial registration page

Step 2

If you do not have a Gmail account, follow the steps to create one. Otherwise, log in and proceed to the next step.

Step 3

Complete the registration form.

Step 4

Read and agree to the terms of service.

Step 5

Click Accept and start free trial.

Step 6

In the upper-right corner of the console, a button may appear asking you to upgrade your account. Click Upgrade when you see it.

If the Upgrade button does not appear, continue with the lab; if it appears later, click it then.

To complete the lab, you need a GCP Project. You will create the project now.

Step 1

In the Google Cloud Platform Console, click the Select project dropdown (in the header to the right of the words "Google Cloud Platform").

Then, click the Create project button (the one with the plus sign).

Step 2

In the New Project dialog, give the project a name and click Create.

When the project is created, click the Select project dropdown again and make sure you select it.

Step 3

In the GCP Console, click the menu icon to open the Products and Services menu. Then, navigate to Compute Engine and ensure that there are no errors.

Step 4

As you did in the last step, use the Products and Services menu to navigate to Kubernetes Engine and ensure that there are no errors. This enables the Kubernetes Engine API, which you will use later in the lab. You don't have to wait for Kubernetes to finish initializing.
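
If you prefer the command line, the same APIs can also be enabled with gcloud (for example, from Cloud Shell, which you will open shortly). This is optional and equivalent to the console navigation above:

gcloud services enable compute.googleapis.com container.googleapis.com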

Now, you will create a Git repository using the Cloud Source Repositories service on Google Cloud Platform.

Step 1

From the Products and services menu, in the Tools section, choose Source Repositories.

You may be prompted to enable the API. If you are, then do so and then navigate back to the Source Repositories page.

Step 2

Click the Create Repository link and provide a Repository Name of default. Make sure you use "default" as the name, because we refer to it in code later on.
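
As an optional alternative to the console, the repository can also be created with gcloud once Cloud Shell is open (see the next step). Shown here only for reference:

gcloud source repos create default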

Step 3

Open Google Cloud Shell by clicking its icon in the toolbar (it is the icon that looks like a command prompt).

The first time you start Google Cloud Shell, a dialog will pop up asking you to agree to the terms of service. If this happens, select Yes and then click the Start Cloud Shell link.

Step 4

Once Cloud Shell starts, enter the following to create a folder called devops.

mkdir ~/devops

Step 5

Change to the folder you just created.

cd ~/devops

Step 6

Now clone the empty repository you just created.

gcloud source repos clone default

Step 7

The previous command created an empty folder called default. Change to that folder.

cd ~/devops/default

You need some source code to manage. So, you will create a simple Python Flask web application.

Step 1

In Cloud Shell, type the following to create a Python starting point.

nano main.py

Step 2

This opens your file in a simple Linux text editor called Nano. Paste the following into the file you just created. (Make sure you use Nano as instructed. Other Linux editors might change the indentation of the Python file and break the code.)

from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def main():
    model = {"title":"Welcome DevOps Fans to our great program."}
    return render_template('index.html', model=model)


if __name__ == "__main__":
    app.run(host='0.0.0.0', port=8080, debug=True, threaded=True)

Press Ctrl+X and then Y and press Enter to close the file and save your changes.

Step 3

Add a new folder called templates and change to it.

mkdir ~/devops/default/templates
cd ~/devops/default/templates

Step 4

Add a new file called layout.html.

nano layout.html

Add the following code and save the file as you did before.

<!doctype html>
<html lang="en">
<head>
    <title>{{model.title}}</title>
     <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
     <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css">

    <!-- Latest compiled and minified JavaScript -->
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>

</head>
<body>
<div class="container">

    <nav class="nav nav-pills">
        <div class="container-fluid">
            <!-- Brand and toggle get grouped for better mobile display -->
            <div class="navbar-header">
                <button type="button" class="navbar-toggle collapsed" data-toggle="collapse"
                        data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
                    <span class="sr-only">Toggle navigation</span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                    <span class="icon-bar"></span>
                </button>
                <a class="navbar-brand" href="/">Converter</a>
            </div>

            <!-- Collect the nav links, forms, and other content for toggling -->
            <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                <ul class="nav navbar-nav">
                    <li><a href="/">Home</a></li>
                </ul>
            </div><!-- /.navbar-collapse -->
        </div><!-- /.container-fluid -->
    </nav>

            {% block content %}{% endblock %}

    <footer></footer>
</div>
</body>
</html>

As before, press Ctrl+X and then Y and press Enter to close the file and save your changes.

Step 5

Add another new file called index.html.

nano index.html

Add the following code and save the file as you did before.

{% extends "layout.html" %}
{% block content %}
<div class="jumbotron">
    <div class="container">
        <h1>{{model.title}}</h1>
    </div>
</div>
{% endblock %}

Step 6

Now that you have some files, let's save them to the repository. First, you need to add all the files you created to your local Git repo.

cd ~/devops/default
git add --all

Now, let's commit the changes locally.

git commit -a -m "Initial Commit"

Step 7

You committed the changes locally, but have not updated the Git repository you created in Google Cloud. Enter the following command to push your changes to the cloud.

git push origin master

Step 8

Click the Source code link in the Source Repositories web page to refresh your source code. You should see the files you just created.

You need to make sure the code works. It can be tested using Google Cloud Shell.

Step 1

Back in Cloud Shell, make sure you are in your application's root folder and then install the Flask framework using pip. Enter the following commands.

cd ~/devops/default
sudo pip3 install flask

Step 2

To run the program, type:

python3 main.py

Step 3

To see the program running, click the Web Preview button in the toolbar of Google Cloud Shell. Then, select Preview on port 8080.

The program should be displayed in a new browser tab.

Step 4

To stop the program, switch back to the browser tab with Google Cloud Shell and press Ctrl+C.

Step 5

Google Cloud Shell includes an integrated code editor, which might be easier than using Nano. To open it, click the icon that looks like a pencil in the Cloud Shell toolbar.

Step 6

Once the Code Editor is open, expand the devops/default folder in the navigation pane on the left. Then, click main.py to open it.

In the main() function, change the welcome message to something else (whatever you want). Then, choose File | Save in the code editor toolbar to save your change.

Step 7

In the Cloud Shell window at the bottom, commit your changes using the following commands.

cd ~/devops/default
git commit -a -m "Second Commit"

Step 8

Push your changes to the master repository using the following command.

git push origin master

Step 9

Go back to Source Repositories in the GCP Management Console (it will be in another browser tab) and refresh the repository and verify your changes were uploaded. To refresh the page, just click the Source code link in the navigation pane on the left.

Docker containers are easy to move around and will run on any machine or virtual machine that has Docker installed.

To start with Docker, you will define a container and then test it in Google Cloud Shell.

You will also see how to store containers in container registries so they are accessible from the machines that need to run them.

The first step to using Docker is to create a file called Dockerfile. This file defines how a Docker container is constructed. You will do that now.

Step 1

In the Code Editor, expand the devops/default folder. With the default folder selected, choose File|New|File and name the new file Dockerfile.

Step 2

At the top of the file, enter the following.

FROM ubuntu:latest

This is the base container. You could choose many operating systems as the base; in this case, you are using Ubuntu Linux.

Step 3

On the next line, add the following code.

RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev build-essential

Step 4

Next, add the following.

COPY . /app
WORKDIR /app

Step 5

Next, add the following.

RUN pip3 install --upgrade pip
RUN pip3 install flask

Step 6

Finally, add the following.

ENTRYPOINT ["python3"]
CMD ["main.py"]

Step 7

Verify that the completed file looks as follows and Save it.

FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev build-essential
COPY . /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install flask
ENTRYPOINT ["python3"]
CMD ["main.py"]
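
One optional refinement, not required for this lab: because COPY . /app copies everything in the folder (including the .git directory), you can add a .dockerignore file next to the Dockerfile to keep unneeded files out of the image. A minimal sketch:

# .dockerignore (optional) - exclude files the app does not need at runtime
.git
__pycache__/
*.pyc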

In this section, you will build the actual Docker container from the Dockerfile you just created and test it in Cloud Shell. The Cloud Shell environment comes with Docker (and many other tools) pre-installed, so there is no need to install Docker manually.

Step 1

If you have a Docker ID, you can prepend your-docker-id/ to the image name in the docker build and docker run commands in Steps 3 and 5. You can create a Docker ID at this step, but it is not required.

Go to https://www.docker.com/.

If you don't have a Docker ID, create one now and sign in.

If you already have a Docker ID, just sign in to verify that you know your ID and password.

Step 2

Back in Google Cloud Shell, first make sure you are in the right folder.

cd ~/devops/default

Step 3

Enter the following command to build your container. (If you want to use your Docker ID, prepend it to the image name, for example your-docker-id/devops-demo.)

docker build -t devops-demo:latest .

Step 4

Wait for the container to finish building. It will take a minute or so.

Step 5

Run the container in Google Cloud Shell with the following command. (Again, prepend your Docker ID to the image name if you are using one.)

docker run -d -p 8080:8080 devops-demo

Step 6

Hopefully, the container is now running. Type the following command, which lists all running containers. You should see your container in the list.

docker ps

Step 7

The container should be running on port 8080. To see if it works, click the Web Preview button and select Preview on port 8080.

The program should open in another browser tab. Verify that it works.
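
As an alternative to Web Preview, you can also check the container directly from Cloud Shell, since the container's port 8080 is mapped to the host:

curl http://localhost:8080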

Step 8

To stop the container, you need its Container ID, which appears in the output of the docker ps command you entered a minute ago. The following command looks up the ID and stops the container in one step.

docker stop $(docker ps | grep -v CONTAINER | grep devops-demo | awk '{ print $1 }')

Step 9

Refresh the browser tab running the program and it should fail because the container was stopped.

You are now going to push your Docker container into the GCP Container Registry.

A container registry is nothing more than a centralized area to put containers to make them available to other machines. Docker maintains a registry called Docker Hub, which you just used. AWS and Azure also maintain container registries.

You could even create your own server and run the registry Docker container and that machine will be a registry.
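
For illustration only (it is not needed for this lab), running your own registry is a single Docker command, since the registry itself ships as a container:

docker run -d -p 5000:5000 --name registry registry:2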

Step 1

Go back to Google Cloud Shell. Make sure you are in the right folder.

cd ~/devops/default

You will be reusing the project ID a lot. If you are running in Cloud Shell, store it in an environment variable with the following command:

export PROJECT_ID=$DEVSHELL_PROJECT_ID

Step 2

Enter the following single command to build your image with Google Cloud Build (formerly Container Builder) and push it to the Container Registry.
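
A reasonable form of the command, using the $PROJECT_ID variable you exported in Step 1 and the image name that the Kubernetes configuration later in the lab expects, is:

gcloud builds submit --tag gcr.io/$PROJECT_ID/devops-demo:latest .

(On older Cloud SDK versions the equivalent command is gcloud container builds submit.)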

If you are asked to enable the service and retry, enter Yes. Wait for the build to complete successfully.

Step 3

In the Management Console, from the Products and Services menu, go to the Container Registry service. You should see your image in the list. Click the Build History link and you should see your build listed.

Step 4

In Cloud Shell, enter the following to make sure you are in the right folder and add your new Dockerfile to Git.

cd ~/devops/default
git add --all

Step 5

Commit your changes.

git commit -a -m "Added Docker Support"

Step 6

Push your changes to the master repository.

git push origin master

Step 7

Go back to Google Cloud Source Code Repositories and verify your Dockerfile was added to source control.

Before you can use Kubernetes to deploy your application, you need a cluster of machines to deploy to. The cluster abstracts away the details of the underlying machines that your containers run on.

Machines can later be added, removed, or rebooted and containers are automatically distributed or re-distributed across whatever machines are available in the cluster. Machines within a cluster can be set to autoscale up or down to meet demand. Machines can be located in different zones for high availability.
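
For reference only (the cluster you create in Step 4 below uses a fixed three nodes), node autoscaling can be requested at creation time with flags like these; the cluster name here is just an example:

gcloud container clusters create example-cluster --zone us-central1-a --num-nodes 3 --enable-autoscaling --min-nodes 1 --max-nodes 5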

Step 1

Go back to Google Cloud Platform Management Console.

Step 2

From the Products and Services menu, choose Kubernetes Engine. You may be prompted to enable the API.

Step 3

Go back to Google Cloud Shell.

First, make sure you're in the right folder.

cd ~/devops/default

Step 4

Now, enter the following command to create a cluster of machines. It uses the $PROJECT_ID variable you exported earlier.

gcloud container clusters create devops-cluster --zone "us-central1-a" --num-nodes 3 --project=$PROJECT_ID

Step 5

When the cluster is ready, refresh the Kubernetes Engine page in the management console and you should see it.

How many nodes are in your cluster (the cluster size)?

Step 6

A node is really just a virtual machine. From the Products and Services menu, choose Compute Engine and you should see your machines.

Step 1

Back in the Code Editor, expand the devops/default folder. With the default folder selected, choose File|New|File and name the new file kubernetes-config.yaml.

Step 2

Paste the following code into the file you just created. You will have to change the text "your-project-id" to your project ID in the image property where indicated near the bottom of the code below.

Notice you are using the Docker image you created earlier in the lab. Also, you are creating three instances of the container specified in the replicas property.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-deployment
  labels:
    app: devops
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: devops
      tier: frontend
  template:
    metadata:
      labels:
        app: devops
        tier: frontend
    spec:
      containers:
      - name: devops-demo
        image: gcr.io/your-project-id/devops-demo:latest
        ports:
        - containerPort: 8080

---

Step 3

Save the file. Now let's run the deployment. First, you need to connect to the cluster you created earlier.

In Cloud Shell, enter the following command:

gcloud container clusters get-credentials devops-cluster --zone us-central1-a --project $PROJECT_ID

Step 4

Enter the following command to run your Kubernetes deployment.

kubectl create -f kubernetes-config.yaml

Step 5

Enter the following command to see if you have any instances.

kubectl get pods

Run the command a few times until all the pods are running.

Step 1

Go back to Google Cloud Shell. Enter the following to see the deployments.

kubectl get deployments

Note the name of the deployment. This was specified in the configuration file.

Step 2

Enter the following to see the details of your deployment.

kubectl describe deployments devops-deployment

Step 3

You have instances, but can't yet access them with a browser because you need a load balancer. Create a load balancer with the following command.

kubectl expose deployment devops-deployment --port=80 --target-port=8080 --type=LoadBalancer

Step 4

You need the IP address of the load balancer. Type the following command to get it. (You might have to run the command a few times waiting for the external IP address to be generated.)

kubectl get services

Step 5

Once you have an external IP address, open a browser tab and make a request to it (on port 80). It should work. If you get an error, wait a little while and try again.

Step 6

Let's scale up to 10 instances.

kubectl scale deployment devops-deployment --replicas=10

After the command completes, type kubectl get pods to see if it worked. You might have to run the command a few times before all 10 are running.

Step 7

Let's scale back to 3 instances.

kubectl scale deployment devops-deployment --replicas=3

After the command completes, type kubectl get pods to see if it worked. You might have to run the command a few times.

Step 8

Let's create a Horizontal Pod Autoscaler (HPA). Type the following command.

kubectl autoscale deployment devops-deployment --min=5 --max=10 --cpu-percent=60

Wait a little while and type kubectl get pods again. The autoscaler will create two more pods. As before, you might have to wait a little while and run the command a couple times.
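
You can also inspect the autoscaler itself; the following command shows its CPU target and the current and desired replica counts:

kubectl get hpa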

Step 9

It's just as easy to delete everything as it is to create it. Enter the following commands to delete the autoscaler, the service, and the deployment. This removes the workload but not the cluster; you will reuse the cluster shortly.

kubectl delete hpa devops-deployment
kubectl delete services devops-deployment
kubectl delete -f kubernetes-config.yaml

Wait a minute and then type kubectl get pods and kubectl get services to see if everything got deleted.

Step 1

In the Cloud Shell Code Editor, open your kubernetes-config.yaml file that you created earlier.

Step 2

Below the bottom three dashes, add the following YAML. This creates the load balancer.

apiVersion: v1
kind: Service
metadata:
  name: devops-deployment
  labels:
    app: devops
    tier: frontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: devops
    tier: frontend
 
---

Step 3

Again, below the three dashes, add the following YAML. This creates the autoscaler.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: devops-deployment
spec:
  maxReplicas: 10
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 60
    type: Resource
  minReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: devops-deployment
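
The autoscaling/v2beta1 API version above matches the clusters this lab was written against; if your cluster rejects it, you can list the autoscaling API versions it actually serves and adjust the manifest accordingly:

kubectl api-versions | grep autoscaling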

Step 4

Save your file and then run the following command to execute the deployment.

kubectl create -f kubernetes-config.yaml

Step 5

Wait a minute and then type kubectl get pods and kubectl get services to see your pods and get the IP address of the load balancer.

Step 6

Once it is ready, make a request to the load balancer to see if it works.

Step 7

In Cloud Shell, enter the following to make sure you are in the right folder and add your new file to Git.

cd ~/devops/default
git add --all

Step 8

Commit your changes.

git commit -a -m "Added Kubernetes Support"

Step 9

Push your changes to the master repository.

git push origin master

This is a demonstration of automated building. It is not production quality; a real pipeline would need additional checks and balances.

Step 1

Go to the Cloud Build product in the left menu and click Triggers. There should be no triggers at the moment.

Step 2

Click Create Trigger in the central pane. Fill in the following details:

Click Create

Step 3

Test the build

Step 1

We're going to push the new container to the cluster, so edit the main.py file with something inspiring.

Step 2

Go to the Cloud Build page, select Triggers, and edit the trigger. Change the Build configuration from Dockerfile to Cloud Build configuration file and click Save.

Step 3

Allow Cloud Build to make changes on GKE.

Open the Cloud Build settings page and enable the Kubernetes Engine role for the Cloud Build service account.

Step 4

Add a new file to the ~/devops/default directory called cloudbuild.yaml and put the following content in it:

steps:
- id: 'build'
  name: 'gcr.io/cloud-builders/gcloud'
  args: ["builds","submit","--tag","gcr.io/$PROJECT_ID/devops-demo:$COMMIT_SHA","."]
- id: 'deploy'
  name: 'gcr.io/cloud-builders/kubectl'
  entrypoint: bash
  args:
  - '-c'
  - |
    gcloud container clusters get-credentials --zone "$$CLOUDSDK_COMPUTE_ZONE" "$$CLOUDSDK_CONTAINER_CLUSTER"
    kubectl set image deployment/devops-deployment devops-demo=gcr.io/$PROJECT_ID/devops-demo:$COMMIT_SHA
  env:
    - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
    - 'CLOUDSDK_CONTAINER_CLUSTER=devops-cluster'
    - 'GCLOUD_PROJECT=$PROJECT_ID'

Step 5

Push the changes to trigger the build.

cd ~/devops/default
git add --all
git commit -a -m "Added Kubernetes Support"
git push origin master
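
To watch the triggered build from Cloud Shell (optional), list the most recent builds and check their status:

gcloud builds list --limit=3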

Step 6

Check out the new site!
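
If you no longer have the load balancer's external IP handy, the same command you used earlier prints it again:

kubectl get services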

Step 1

To delete the deployment, enter the following.

kubectl delete -f kubernetes-config.yaml

Step 2

To delete the cluster, run the following.

gcloud container clusters delete devops-cluster --zone us-central1-a

Step 3

Wait a couple minutes and then in the Google Cloud Management console, go to Kubernetes Engine and make sure the cluster is gone or is being deleted.

Go to Compute Engine and make sure the virtual machines are shutting down or gone.

Well done. If you'd like a stretch goal, try to implement an automated build and deploy in Cloud Build.