Question: The Nautilus project team needs to set up an HAProxy app on the Kubernetes cluster. In a recent meeting they discussed all the requirements and how the implementation will proceed. Below you can find more details; proceed accordingly:
Create a namespace haproxy-controller-devops.
Create a ServiceAccount haproxy-service-account-devops under the same namespace.
Create a ClusterRole which should be named as haproxy-cluster-role-devops, rules are placed under location /tmp/rules.yml on jump_host. Use the content of file /tmp/rules.yml to configure ClusterRole.
Create a ClusterRoleBinding which should be named as haproxy-cluster-role-binding-devops under the same namespace. Define roleRef apiGroup should be rbac.authorization.k8s.io, kind should be ClusterRole, name should be haproxy-cluster-role-devops and subjects kind should be ServiceAccount, name should be haproxy-service-account-devops and namespace should be haproxy-controller-devops.
Create a backend deployment which should be named as backend-deployment-devops under the same namespace, labels run should be ingress-default-backend under metadata. Configure spec as replica should be 1, selector's matchLabels run should be ingress-default-backend, and template's labels run under metadata should be ingress-default-backend. The container should be named backend-container-devops, use image gcr.io/google_containers/defaultbackend:1.0 (use the exact name of the image as mentioned) and its containerPort should be 8080.
Create a service for backend which should be named as service-backend-devops under the same namespace, labels run should be ingress-default-backend. Configure spec as selector's run should be ingress-default-backend, port should be named as port-backend, protocol should be TCP, port should be 8080 and targetPort should be 8080.
Create a deployment for frontend which should be named haproxy-ingress-devops under the same namespace. Configure spec as replica should be 1, selector's matchLabels run should be haproxy-ingress, and template's labels run under metadata should be haproxy-ingress. The container name should be ingress-container-devops under the same service account haproxy-service-account-devops, use image haproxytech/kubernetes-ingress, give args as --default-backend-service=haproxy-controller-devops/service-backend-devops, resources requests for cpu should be 500m and for memory should be 50Mi, and livenessProbe httpGet path should be /healthz with port 1024. The first port name should be http and its containerPort should be 80, the second port name should be https and its containerPort should be 443, and the third port name should be stat and its containerPort should be 1024. Define environment as: the first env name should be TZ and its value should be Etc/UTC, the second env name should be POD_NAME and its valueFrom fieldRef fieldPath should be metadata.name, and the third env name should be POD_NAMESPACE and its valueFrom fieldRef fieldPath should be metadata.namespace.
Create a service for frontend which should be named as ingress-service-devops under the same namespace, labels run should be haproxy-ingress. Configure spec as selector's run should be haproxy-ingress and type should be NodePort. The first port name should be http, its port should be 80, protocol should be TCP, targetPort should be 80 and nodePort should be 32456. The second port name should be https, its port should be 443, protocol should be TCP, targetPort should be 443 and nodePort should be 32567. The third port name should be stat, its port should be 1024, protocol should be TCP, targetPort should be 1024 and nodePort should be 32678.
Please note: perform the below commands based on your question; server, user name and other details might differ, so please read the task carefully before executing. All the best 👍
Solution:
1. First, confirm that the kubectl utility is configured and working on the jump server by running the commands below (a couple of optional extra checks follow them):
kubectl get namespace
kubectl get pods
kubectl get deploy
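Optionally, and purely as a sanity check that is not part of the graded task, the following standard commands confirm which cluster and context kubectl is currently pointed at (the names shown will differ per environment):

kubectl cluster-info
kubectl config current-context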
2. Create a namespace haproxy-controller-devops
thor@jump_host /$ kubectl create namespace haproxy-controller-devops
namespace/haproxy-controller-devops created
thor@jump_host /$ kubectl get namespace
NAME                        STATUS   AGE
default                     Active   135m
haproxy-controller-devops   Active   7s
kube-node-lease             Active   135m
kube-public                 Active   135m
kube-system                 Active   135m
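As a side note, the same namespace could also be created declaratively. A minimal sketch, equivalent to the imperative command above and applied with kubectl apply -f, would be:

apiVersion: v1
kind: Namespace
metadata:
  name: haproxy-controller-devops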
3. Create the manifest file /tmp/haproxy.yml with the remaining resource definitions (reference repository: https://gitlab.com/nb-tech-support/devops.git)
thor@jump_host /$ vi /tmp/haproxy.yml
thor@jump_host /$ cat /tmp/haproxy.yml
---
# Service Account definition, point 2
apiVersion: v1
kind: ServiceAccount
metadata:
  name: haproxy-service-account-devops
  namespace: haproxy-controller-devops
---
# ClusterRole definition, point 3
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: haproxy-cluster-role-devops
rules:
  - apiGroups: [""]
    resources: ["configmaps", "endpoints", "nodes", "pods", "services", "namespaces", "events", "serviceaccounts"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses", "ingresses/status"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch", "create", "patch", "update"]
---
# ClusterRoleBinding definition, point 4
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: haproxy-cluster-role-binding-devops
  namespace: haproxy-controller-devops
roleRef:
  kind: ClusterRole
  name: haproxy-cluster-role-devops
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: haproxy-service-account-devops
    namespace: haproxy-controller-devops
---
# Backend Service definition, point 6
apiVersion: v1
kind: Service
metadata:
  name: service-backend-devops
  namespace: haproxy-controller-devops
  labels:
    run: ingress-default-backend
spec:
  selector:
    run: ingress-default-backend
  ports:
    - name: port-backend
      protocol: TCP
      port: 8080
      targetPort: 8080
---
# Frontend Service definition, point 8
apiVersion: v1
kind: Service
metadata:
  name: ingress-service-devops
  namespace: haproxy-controller-devops
  labels:
    run: haproxy-ingress
spec:
  type: NodePort
  selector:
    run: haproxy-ingress
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 32456
    - name: https
      port: 443
      protocol: TCP
      targetPort: 443
      nodePort: 32567
    - name: stat
      port: 1024
      protocol: TCP
      targetPort: 1024
      nodePort: 32678
---
# Backend Deployment definition, point 5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment-devops
  namespace: haproxy-controller-devops
  labels:
    run: ingress-default-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      run: ingress-default-backend
  template:
    metadata:
      labels:
        run: ingress-default-backend
    spec:
      containers:
        - name: backend-container-devops
          image: gcr.io/google_containers/defaultbackend:1.0
          ports:
            - containerPort: 8080
---
# Frontend Deployment definition, point 7
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-ingress-devops
  namespace: haproxy-controller-devops
  labels:
    run: ingress-default-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      serviceAccountName: haproxy-service-account-devops
      containers:
        - name: ingress-container-devops
          image: haproxytech/kubernetes-ingress
          args:
            - "--default-backend-service=haproxy-controller-devops/service-backend-devops"
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: stat
              containerPort: 1024
          resources:
            requests:
              memory: "50Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /healthz
              port: 1024
          env:
            - name: TZ
              value: Etc/UTC
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
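Optionally, the manifest can be validated before anything is created on the cluster. On a reasonably recent kubectl (assuming client-side dry-run support is available in your version), the following reports what would be created without persisting any objects:

thor@jump_host /$ kubectl apply -f /tmp/haproxy.yml --dry-run=client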
4. Create the resources defined in the manifest according to the task
thor@jump_host /$ kubectl create -f /tmp/haproxy.yml
serviceaccount/haproxy-service-account-devops created
clusterrole.rbac.authorization.k8s.io/haproxy-cluster-role-devops created
clusterrolebinding.rbac.authorization.k8s.io/haproxy-cluster-role-binding-devops created
service/service-backend-devops created
service/ingress-service-devops created
deployment.apps/backend-deployment-devops created
deployment.apps/haproxy-ingress-devops created
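If you want to confirm that the ClusterRoleBinding really grants the ServiceAccount the intended permissions (an optional check, not required by the task), kubectl auth can-i can impersonate the ServiceAccount; each of the commands below should answer yes:

thor@jump_host /$ kubectl auth can-i list services --as=system:serviceaccount:haproxy-controller-devops:haproxy-service-account-devops
thor@jump_host /$ kubectl auth can-i get secrets --as=system:serviceaccount:haproxy-controller-devops:haproxy-service-account-devops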
5. Wait for the deployments and pods to reach Running status
thor@jump_host /$ kubectl get deploy -n haproxy-controller-devops
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
backend-deployment-devops   1/1     1            1           33s
haproxy-ingress-devops      1/1     1            1           33s
thor@jump_host /$ kubectl get pods -n haproxy-controller-devops
NAME                                        READY   STATUS    RESTARTS   AGE
backend-deployment-devops-7cff8f4f7-287mx   1/1     Running   0          13s
haproxy-ingress-devops-5bd67c6566-hwxkj     1/1     Running   0          13s
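If either pod were stuck in a non-Running state, the usual inspection commands would apply (the pod name below is a placeholder; substitute whatever kubectl get pods shows in your environment):

thor@jump_host /$ kubectl describe pod <pod-name> -n haproxy-controller-devops
thor@jump_host /$ kubectl logs deploy/haproxy-ingress-devops -n haproxy-controller-devops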
thor@jump_host /$ kubectl get services -n haproxy-controller-devops
NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                     AGE
ingress-service-devops   NodePort    10.101.12.137   <none>        80:32456/TCP,443:32567/TCP,1024:32678/TCP   45s
service-backend-devops   ClusterIP   10.111.184.85   <none>        8080/TCP                                    45s
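As a final optional smoke test (assuming a node IP that is reachable from the jump host, which depends on the lab network), the controller's health endpoint behind the stat NodePort can be queried; <node-ip> is a placeholder for an address shown by kubectl get nodes -o wide:

thor@jump_host /$ curl http://<node-ip>:32678/healthz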
Happy Learning!!!!