Wade Stern Projects

Infrastructure

Kubernetes

The BuyPotato pipeline uses AWS EKS and Azure AKS to run the applications. Setting up the Kubernetes cluster is different for each provider and is covered on the respective pages. The choice to use both Azure and AWS, and to use Kubernetes instead of EC2 instances or Azure VMs, was made to gain experience with the different technologies.

Example Kubernetes deploy file
apiVersion: v1
kind: Service
metadata:
  name: buypotato-frontend
  annotations:
    external-dns.alpha.kubernetes.io/hostname: frontend.prod.wadestern.com
spec:
  selector:
    app: buypotato-frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
  type: LoadBalancer


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: buypotato-frontend
spec:
  selector:
    matchLabels:
      app: buypotato-frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: buypotato-frontend
    spec:
      containers:
      - name: buypotato-frontend
        image: dudesm00thie/tastyfrontend
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        env:
        - name: REACT_APP_BACKEND_PORT
          value: "7200"
        - name: REACT_APP_BACKEND_URL
          value: "backend.prod.wadestern.com"

This deploy YAML file is for the production version of the frontend, but the deploy files for the rest of the application are all similar. The service created by this file is a load balancer. The load balancer's address is written into a DNS record by the DNS pod, so the service is reachable at the hostname given in the external-dns.alpha.kubernetes.io/hostname annotation. The deploy file currently creates only one replica of the application, but this can easily be scaled by increasing the replicas value. The frontend uses port 3000 and the backend uses port 7200. To make the application more accessible when it is running, port 3000 is mapped to port 80 (the default HTTP port) in the deploy file. This allows users to go to frontend.prod.wadestern.com instead of frontend.prod.wadestern.com:3000 to access the application.
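As a rough sketch of how a file like this is used (assuming kubectl is pointed at the target cluster and the manifest is saved as frontend-deploy.yaml, an example name rather than the pipeline's actual filename), the service and deployment can be applied, scaled, and inspected with standard kubectl commands:

Example kubectl commands
# apply (or update) the service and deployment defined in the file
kubectl apply -f frontend-deploy.yaml

# scale the frontend from one replica to three
kubectl scale deployment buypotato-frontend --replicas=3

# show the external address assigned to the load balancer service
kubectl get service buypotato-frontend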

AWS

AWS EKS was used for the production step of the pipeline. An IAM user was created for the pipeline and given the permissions needed to run all of the commands in the pipeline. eksctl is the tool that was used to manage Kubernetes on AWS. It automatically generates a CloudFormation stack that provisions and configures the cluster, which simplified the process of creating and tearing down the Kubernetes clusters in AWS.
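A minimal sketch of that eksctl workflow is shown below; the cluster name, region, and node count are illustrative placeholders rather than the pipeline's actual values:

Example eksctl commands
# create an EKS cluster (eksctl generates and applies the CloudFormation stack)
eksctl create cluster --name buypotato-prod --region us-east-1 --nodes 2

# tear the cluster and its stack back down when it is no longer needed
eksctl delete cluster --name buypotato-prod --region us-east-1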

Azure

Azure Kubernetes Service was used for the staging portion of the pipeline. The first step was to create a resource group for the Kubernetes cluster to be created in. Access credentials were generated to be used in the pipeline as secrets, and a separate resource group was created to scope the permissions for the DNS manager.
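The rough shape of that setup with the Azure CLI is sketched below; the resource group, cluster, and service principal names are placeholders, not the project's actual values:

Example Azure CLI commands
# resource group that the AKS cluster is created in
az group create --name buypotato-staging --location eastus

# create the staging AKS cluster
az aks create --resource-group buypotato-staging --name buypotato-aks --node-count 1 --generate-ssh-keys

# service principal whose credentials are stored as pipeline secrets
az ad sp create-for-rbac --name buypotato-pipeline

# fetch kubeconfig credentials so kubectl can reach the cluster
az aks get-credentials --resource-group buypotato-staging --name buypotato-aks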

DNS

Whenever a deployment ran in one of the Kubernetes clusters, the load balancers would be created with a new IP address. This means that the pipeline needed a way to dynamically generate DNS records whenever a new service was created, so that the application could still be accessed at the designated URL. The domain used for the project was purchased through AWS Route 53. Once the hosted zone was created, deploying the DNS manager for AWS was relatively simple. The following deploy YAML file could be applied to create a pod in the Kubernetes cluster that watches for new services and creates DNS records, ensuring that the URLs are mapped to the correct IP addresses.

Example DNS deploy file
# comment out sa if it was previously created
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: external-dns
#  labels:
#    app.kubernetes.io/name: external-dns
# ---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods","nodes"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: default # change to desired namespace: externaldns, kube-addons
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=wadestern.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=external-dns
          env:
            - name: AWS_DEFAULT_REGION
              value: us-east-1 # change to region where EKS is installed
     # # Uncomment below if using static credentials
     #        - name: AWS_SHARED_CREDENTIALS_FILE
     #          value: /.aws/credentials
     #      volumeMounts:
     #        - name: aws-credentials
     #          mountPath: /.aws
     #          readOnly: true
     #  volumes:
     #    - name: aws-credentials
     #      secret:
     #        secretName: external-dns
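As a rough sketch of how this manifest is used (assuming it is saved as external-dns.yaml, an example name), it can be applied and checked with the commands below; the hosted zone ID is a placeholder for the zone's actual ID:

Example ExternalDNS commands
# deploy the ExternalDNS pod into the cluster
kubectl apply -f external-dns.yaml

# follow its logs to confirm it sees new services and writes records
kubectl logs deployment/external-dns --follow

# list the records that have been written into the Route 53 hosted zone
aws route53 list-resource-record-sets --hosted-zone-id <zone-id>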

Creating a DNS record manager in Azure followed similar steps, but required a subdomain to be created so that Azure could manage it. The subdomain's DNS zone was placed in the Azure resource group, and an NS record was created in AWS Route 53 to delegate any URLs under staging.wadestern.com to Azure. A similar deploy YAML file was created for the Azure Kubernetes cluster so that the DNS records for staging could be updated as well.
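A hedged sketch of that subdomain setup is shown below; the resource group name is illustrative. The name servers returned for the zone are the values that go into the Route 53 NS record for staging.wadestern.com:

Example Azure DNS zone commands
# DNS zone for the staging subdomain, managed by Azure
az network dns zone create --resource-group buypotato-dns --name staging.wadestern.com

# list the Azure name servers to put into the Route 53 NS record
az network dns zone show --resource-group buypotato-dns --name staging.wadestern.com --query nameServers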