mkdir saasbase-project
cd saasbase-project
mkdir saasbase-fe
mkdir saasbase-be
We've written a detailed guide on how to build and dockerize your frontend React app here. Place the project in the saasbase-fe folder.
docker login
docker build -t sssaini/saasbase-fe:0.1 .
docker push sssaini/saasbase-fe:0.1
We've written a detailed guide on how to build and dockerize your backend Node.js app here. Place the project in the saasbase-be folder.
docker login
docker build -t sssaini/saasbase-be:0.1 .
docker push sssaini/saasbase-be:0.1
Once you have deployed to production, you should increase the node count to make the deployment more resilient.
4. Finalize the cluster by giving it a name. I called mine saasbase-cluster.
Congratulations! Your cluster is now created.
Make sure kubectl is installed locally by running:
kubectl version
Move it to the correct folder by running:
mv saasbase-cluster-kubeconfig.yaml ~/.kube/config
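If you'd rather not overwrite an existing ~/.kube/config, an alternative sketch is to point kubectl at the downloaded file via the KUBECONFIG environment variable (adjust the path to wherever you downloaded the file):

```shell
# Use the downloaded kubeconfig without touching ~/.kube/config
export KUBECONFIG=~/saasbase-cluster-kubeconfig.yaml
kubectl get nodes
```

This only applies to the current shell session, which is handy when you work with multiple clusters.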
You should be connected to the Digital Ocean cluster. Verify by running:
➜ ~ kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
pool-h5wx2v1ut-cudd5   Ready    <none>   57m   v1.22.7
Create a file called fe.yaml in the saasbase-project folder. This will configure our frontend deployment.
Notice that we're using the LoadBalancer type in the Service. This lets Digital Ocean know that we want an external IP for this service so we can view the app. In the next step, we will set up a custom domain that can be used to reach the app instead of an IP address.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fe-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: saasbase-fe
  template:
    metadata:
      labels:
        app.kubernetes.io/name: saasbase-fe
    spec:
      containers:
        - name: frontend
          image: docker.io/sssaini/saasbase-fe:0.1
---
kind: Service
apiVersion: v1
metadata:
  name: fe-service
spec:
  selector:
    app.kubernetes.io/name: saasbase-fe
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
Deploy the Kube configuration by running:
➜ ~ kubectl apply -f fe.yaml
deployment.apps/fe-deploy created
service/fe-service created
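You can also wait on the rollout directly instead of polling pods by hand (a sketch, assuming the deployment name fe-deploy from the manifest above):

```shell
# Blocks until the deployment finishes rolling out, or fails after the timeout
kubectl rollout status deployment/fe-deploy --timeout=120s
```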
Once applied, check that the pod is up and running:
➜ ~ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
fe-deploy-8448fb4b97-6tgfj   1/1     Running   0          38s
Get the External IP of the service by running:
kubectl get services
The External IP takes about 5 mins to provision. Once assigned, you can view your application by opening the IP in your browser. For me it would be: http://143.198.246.142
Brilliant! I can see my React app running.
We can do exactly the same for our backend deployment. Create a file called be.yaml at the root level.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: be-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: saasbase-be
  template:
    metadata:
      labels:
        app.kubernetes.io/name: saasbase-be
    spec:
      containers:
        - name: backend
          image: docker.io/sssaini/saasbase-be:0.1
---
kind: Service
apiVersion: v1
metadata:
  name: be-service
spec:
  selector:
    app.kubernetes.io/name: saasbase-be
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 7001
Apply the deployment by running:
kubectl apply -f be.yaml
deployment.apps/be-deploy created
service/be-service created
Verify that both pods are running, then get the External IP of the backend service:
➜ ~ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
fe-deploy-8448fb4b97-kfzg9   1/1     Running   0          16m
be-deploy-5fcb68649d-vj9sp   1/1     Running   0          7m8s
➜ ~ kubectl get services
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)        AGE
be-service   LoadBalancer   10.245.148.197   146.190.0.10      80:30791/TCP   6m2s
fe-service   LoadBalancer   10.245.5.13      143.198.246.142   80:31387/TCP   15m
kubernetes   ClusterIP      10.245.0.1       <none>            443/TCP        91m
Same as before, I can now access my backend by going to its External IP: http://146.190.0.10
Using the External IP works but it's not very user-friendly. We can buy a custom domain from Namecheap to access our services. I bought the domain: bearbill.com.
After buying the domain, set up DNS to point to Digital Ocean. Add the following custom DNS nameservers:
ns1.digitalocean.com
ns2.digitalocean.com
ns3.digitalocean.com
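The following steps assume the NGINX Ingress Controller is installed in the cluster. If you haven't installed it yet, one common way is via its official Helm chart (a sketch using default chart values):

```shell
# Install the NGINX Ingress Controller into its own namespace
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```

On Digital Ocean this provisions a single Load Balancer that fronts all of your services, which is cheaper than one Load Balancer per service.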
We can verify that the NGINX Ingress Controller is successfully running with:
➜ ~ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
NAMESPACE       NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx   ingress-nginx-controller-664d8d6d67-kvpkz   1/1     Running   0          85m
ingress-nginx   ingress-nginx-controller-664d8d6d67-vkmnk   1/1     Running   0          85m
➜ ~ kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.245.223.54    64.225.91.107   80:32152/TCP,443:31302/TCP   84m
ingress-nginx-controller-admission   ClusterIP      10.245.98.130    <none>          443/TCP                      84m
ingress-nginx-controller-metrics     ClusterIP      10.245.188.111   <none>          10254/TCP                    84m
Next, create A records pointing to the ingress controller's External IP (64.225.91.107 above) for both domains:
bearbill.com
api.bearbill.com
Since we are going to be using the custom domain to access the services, we can update the deployed frontend and backend services to not provision an external IP.
This can be done by simply commenting out type: LoadBalancer in the service.
Here's what my fe.yaml looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fe-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: saasbase-fe
  template:
    metadata:
      labels:
        app.kubernetes.io/name: saasbase-fe
    spec:
      containers:
        - name: frontend
          image: docker.io/sssaini/saasbase-fe:0.1
---
kind: Service
apiVersion: v1
metadata:
  name: fe-service
spec:
  selector:
    app.kubernetes.io/name: saasbase-fe
  # type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
Here's what my be.yaml looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: be-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: saasbase-be
  template:
    metadata:
      labels:
        app.kubernetes.io/name: saasbase-be
    spec:
      containers:
        - name: backend
          image: docker.io/sssaini/saasbase-be:0.1
---
kind: Service
apiVersion: v1
metadata:
  name: be-service
spec:
  selector:
    app.kubernetes.io/name: saasbase-be
  # type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 7001
Apply this change by running:
kubectl apply -f fe.yaml
kubectl apply -f be.yaml
Perfect. Now there shouldn't be an External IP when we run:
kubectl get services
To make the apps accessible with custom domains, we need to set up NGINX so that the traffic can be correctly routed into their respective containers.
Create a deploy.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-echo
  namespace: default
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - bearbill.com
        - api.bearbill.com
  rules:
    - host: bearbill.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fe-service
                port:
                  number: 80
    - host: api.bearbill.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: be-service
                port:
                  number: 80
Deploy again with:
kubectl apply -f deploy.yaml
We can make sure that the Ingress was created by:
➜ ~ kubectl get ingress
NAME           CLASS   HOSTS                           ADDRESS         PORTS     AGE
ingress-echo   nginx   bearbill.com,api.bearbill.com   64.225.91.107   80, 443   25m
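While DNS propagates, you can test the ingress routing directly against the controller's External IP by setting the Host header yourself (IP taken from the output above; substitute your own):

```shell
# Routes to fe-service via the ingress, without relying on DNS
curl -H "Host: bearbill.com" http://64.225.91.107/
# Routes to be-service
curl -H "Host: api.bearbill.com" http://64.225.91.107/
```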
The frontend should now be accessible at http://bearbill.com and the backend at http://api.bearbill.com.
Notice that the domain is not secured by SSL which will make your browser complain. Switch to incognito mode and it should let you through.
We will work on making it accessible by HTTPS next.
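We'll use cert-manager to issue the Let's Encrypt certificates. If it isn't installed in your cluster yet, a sketch of installing it via the official Helm chart (version chosen to match the output below):

```shell
# Install cert-manager, including its CustomResourceDefinitions
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.6.1 \
  --set installCRDs=true
```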
You can verify that cert-manager is deployed with:
➜ ~ helm ls -n cert-manager
NAME           NAMESPACE      REVISION   UPDATED                                   STATUS     CHART                 APP VERSION
cert-manager   cert-manager   1          2022-03-30 16:37:42.465767949 +0000 UTC   deployed   cert-manager-v1.6.1   v1.6.1
➜ ~ kubectl get pods -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-7645bbbcc9-2nr7w              1/1     Running   0          35s
cert-manager-cainjector-5bcf77b697-km828   1/1     Running   0          35s
cert-manager-webhook-9cb88bd6d-swmfw       1/1     Running   0          35s
We can now update our deploy.yaml. Make sure to add your email address.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-nginx
  namespace: default
spec:
  # ACME issuer configuration:
  # `email` - the email address to be associated with the ACME account (make sure it's a valid one)
  # `server` - the URL used to access the ACME server's directory endpoint
  # `privateKeySecretRef` - Kubernetes Secret to store the automatically generated ACME account private key
  acme:
    email: <YOUR_EMAIL_ADDRESS>
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-nginx-private-key
    solvers:
      # Use the HTTP-01 challenge provider
      - http01:
          ingress:
            class: nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-echo
  namespace: default
  annotations:
    cert-manager.io/issuer: letsencrypt-nginx
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - bearbill.com
        - api.bearbill.com
      secretName: letsencrypt-nginx
  rules:
    - host: bearbill.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: fe-service
                port:
                  number: 80
    - host: api.bearbill.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: be-service
                port:
                  number: 80
Deploy using:
kubectl apply -f deploy.yaml
You can confirm when the SSL certificate has been successfully issued by:
➜ ~ kubectl get certificates
NAME                READY   SECRET              AGE
letsencrypt-nginx   True    letsencrypt-nginx   29m
If for some reason READY is showing False, debug using kubectl get events.
Wait a few minutes and try to access https://bearbill.com and https://api.bearbill.com. You should see a fancy lock icon next to the URL, signifying that the website is indeed secure.
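You can also inspect the issued certificate from the command line (a sketch using openssl; substitute your own domain):

```shell
# Print the certificate issuer and validity window for the domain
echo | openssl s_client -connect bearbill.com:443 -servername bearbill.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```

For a Let's Encrypt certificate, the issuer line should mention Let's Encrypt rather than a self-signed fallback.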
If the certificate is stuck in a PENDING state, debug using kubectl get events. If it mentions Not enough resources, you might need to resize the cluster to the next available size. Also make sure you have installed cert-manager on the Kubernetes cluster.
Other useful commands for debugging:
kubectl get pods
kubectl describe pod <pod_name>
kubectl get events