CKA Exam Questions and Answers, September 2020
Notes
Before starting each task, configure the environment with the command below:
~]$ kubectl config use-context k8s
If needed, use the following commands to connect to a specific node in the cluster:
$ ssh k8s-node-0
# Obtain root privileges
$ sudo -i
Question 1
Configure the environment
[student@node1] $ kubectl config use-context k8s
Context
Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount, scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole that only allows the creation of the following resource types:
- Deployment
- StatefulSet
- DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
Solution
# Create the namespace (in the exam it already exists)
[root@localhost ~]# kubectl create namespace app-team1
# Create the ClusterRole
[root@localhost ~]# kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployment,statefulset,daemonset
# Create the ServiceAccount
[root@localhost ~]# kubectl create sa cicd-token -n app-team1
# Create a namespace-scoped RoleBinding that references the ClusterRole
[root@localhost ~]# kubectl -n app-team1 create rolebinding deployment-rolebinding --clusterrole deployment-clusterrole --serviceaccount app-team1:cicd-token
# Verify the permissions granted to the ServiceAccount
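# A quick way to check the binding (a sketch; the second command is just a negative control):
kubectl -n app-team1 auth can-i create deployments --as system:serviceaccount:app-team1:cicd-token   # expect: yes
kubectl -n app-team1 auth can-i delete deployments --as system:serviceaccount:app-team1:cicd-token   # expect: no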
Question 2
Configure the environment
~]$ kubectl config use-context ek8s
Task
Set the node named ek8s-node1 as unavailable and reschedule all the pods running on it.
Solution
# Mark the node as unschedulable; in my lab, 192.168.66.142 stands in for ek8s-node1
[root@localhost ~]# kubectl cordon 192.168.66.142
node/192.168.66.142 cordoned
# Evict the pods running on it
[root@localhost ~]# kubectl drain --delete-local-data=true --ignore-daemonsets=true 192.168.66.142
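# A quick check (sketch): the node should now show SchedulingDisabled and host only DaemonSet pods
kubectl get node 192.168.66.142
kubectl get pods -A -o wide | grep 192.168.66.142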
Question 3
Configure the environment
# The mk8s cluster has one master node and one worker node
~]$ kubectl config use-context mk8s
Task
The existing Kubernetes cluster is running version 1.18.8. Upgrade all Kubernetes control-plane and node components on the master node only to version 1.19.0.
Also upgrade kubelet and kubectl on the master node.
Note: be sure to drain the master node before the upgrade and uncordon it afterwards. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other components.
Solution
Reference: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
# 1. Upgrade kubeadm
yum install kubeadm-1.19.0-0
# or, on Debian/Ubuntu:
apt update && apt install kubeadm=1.19.0-00
# 2. Drain the master node
kubectl drain k8s-master1 --ignore-daemonsets
# 3. Check which versions are available to upgrade to
kubeadm upgrade plan
# 4. Perform the upgrade
kubeadm upgrade apply v1.19.0
# 5. Upgrade kubelet and kubectl on the master node
yum install kubectl-1.19.0-0 kubelet-1.19.0-0
# or, on Debian/Ubuntu:
apt update && apt install kubelet=1.19.0-00 kubectl=1.19.0-00
systemctl daemon-reload
systemctl restart kubelet
# 6. Uncordon the master node
kubectl uncordon k8s-master1
# The worker nodes do not need to be upgraded, so we are done here
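# A final sanity check worth doing (sketch): the master should now report v1.19.0 and be schedulable again
kubectl get node k8s-master1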
Question 4
Configure the environment
This task does not require changing the configuration environment.
Task
1. First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save it to /data/backup/etcd-snapshot.db.
Note: creating a snapshot of the given instance is expected to complete within a few seconds. If the operation seems to hang, something is likely wrong with the command. Cancel it with CTRL+C and try again.
2. Then restore the existing previous snapshot located at /src/data/etcd-snapshot-previous.db.
Note: the following TLS certificates and key are provided for connecting to the server with etcdctl.
- CA certificate: /opt/KUIN00601/ca.crt
- Client certificate: /opt/KUIN00601/etcd-client.crt
- Client key: /opt/KUIN00601/etcd-client.key
Solution
Reference: https://kubernetes.io/zh/docs/tasks/administer-cluster/configure-upgrade-etcd/
# Backup
$ mkdir -p /data/backup
$ ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" snapshot save /data/backup/etcd-snapshot.db
# Restore
$ etcdctl --cacert="/etc/kubernetes/pki/etcd/ca.crt" --cert="/etc/kubernetes/pki/etcd/server.crt" --key="/etc/kubernetes/pki/etcd/server.key" --endpoints="https://127.0.0.1:2379" snapshot restore /src/data/etcd-snapshot-previous.db
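# Note: snapshot restore only unpacks the snapshot; in practice it is usually pointed at a fresh data directory which the etcd static pod manifest is then updated to use. A sketch (the directory and manifest path below are assumptions, not given by the exam):
$ ETCDCTL_API=3 etcdctl snapshot restore /src/data/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-from-backup
# then edit /etc/kubernetes/manifests/etcd.yaml so the etcd data volume points at /var/lib/etcd-from-backup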
Question 5
Configure the environment
~]$ kubectl config use-context hk8s
Task (7%)
Create a new NetworkPolicy named allow-port-from-namespace that allows pods in the existing namespace internal to connect to port 8080 of other pods in the same namespace.
Ensure that the new NetworkPolicy:
- does not allow access to pods that are not listening on port 8080
- does not allow access from pods that are not in namespace internal
Solution
Search kubernetes.io for NetworkPolicy and adapt the example.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - port: 8080
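# To apply and check the policy (a sketch; the file name networkpolicy.yaml is just an assumption):
kubectl apply -f networkpolicy.yaml
kubectl -n internal describe networkpolicy allow-port-from-namespace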
Question 6
Configure the environment
~]$ kubectl config use-context k8s
Task
Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual pods via a NodePort on the nodes on which they are scheduled.
# Create a front-end deployment with the nginx image (the exam already provides it)
[root@k8s-master1 ~]# kubectl create deployment front-end --image=nginx:alpine
# Expose container port 80; if you forget the format, use kubectl explain deployment.spec.template.spec.containers.ports to look up the fields
[root@k8s-master1 ~]# kubectl edit deployment front-end
...
    spec:
      containers:
      - image: nginx:alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        resources: {}
...
# Check the deployment's labels
[root@k8s-master1 ~]# kubectl get deployments.apps front-end --show-labels --no-headers
front-end 1/1 1 1 94s app=front-end
# Create the Service
[root@k8s-master1 ~]# kubectl expose deployment front-end --name=front-end-svc --type=NodePort
service/front-end-svc exposed
# Check the NodePort and test access
[root@k8s-master1 ~]# kubectl get svc front-end-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
front-end-svc NodePort 10.254.26.159 <none> 80:30971/TCP 72s
[root@k8s-master1 ~]# curl -I 127.0.0.1:30971
HTTP/1.1 200 OK
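# Note: kubectl expose records a numeric targetPort, while the task asks the Service to expose the container port by its name http. It may be worth editing the Service so the ports section references the named port; a sketch:
kubectl edit svc front-end-svc
  ports:
  - port: 80
    protocol: TCP
    targetPort: http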
Question 7
Configure the environment
~]$ kubectl config use-context k8s
Task
Create a new nginx Ingress resource as follows:
- Name: ping
- Namespace: ing-internal
- Exposing service hi on path /hi using service port 5678
Note: the availability of service hi can be checked using the following command, which should return hi:
curl -kL <INTERNAL_IP>/hi
Solution
Search the Kubernetes website for ingress and adapt the Ingress template from the first article.
# Create an ordinary nginx application that returns hi when accessed (the exam already provides it)
[root@k8s-master1 ~]# kubectl create ns ing-internal
Error from server (AlreadyExists): namespaces "ing-internal" already exists
[root@k8s-master1 ~]# kubectl -n ing-internal create deployment hi --image=nginx:alpine-hi
deployment.apps/hi created
# Expose the service and test it
[root@k8s-master1 ~]# kubectl -n ing-internal expose deployment hi --type=ClusterIP --name="hi" --target-port=80 --port=5678
service/hi exposed
[root@k8s-master1 ~]# kubectl get svc -n ing-internal hi
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hi ClusterIP 10.105.198.75 <none> 5678/TCP 10s
[root@k8s-master1 ~]# curl --noproxy 10.105.198.75 10.105.198.75:5678
hi
# Simulate an ingress controller on bare metal; delete the validatingwebhookconfiguration, otherwise creating the Ingress reports a webhook error
[root@k8s-master1 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
[root@k8s-master1 ~]# kubectl -n ingress-nginx delete validatingwebhookconfigurations.admissionregistration.k8s.io ingress-nginx-admission
# Create the Ingress
[root@k8s-master1 ~]# cat ingress-hi2.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          serviceName: hi
          servicePort: 5678
[root@k8s-master1 ~]# kubectl apply -f ingress-hi2.yaml
# Verify it works. Traffic now flows: user -> (ingress controller -> svc -> pod) -> svc -> pod
[root@k8s-master1 ~]# kubectl get ingress -n ing-internal
NAME CLASS HOSTS ADDRESS PORTS AGE
ping <none> * 192.168.66.148 80 3m58s
[root@k8s-master1 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.99.166.123 <none> 80:30736/TCP,443:31113/TCP 54m
ingress-nginx-controller-admission ClusterIP 10.99.252.83 <none> 443/TCP 54m
[root@k8s-master1 ~]# curl 192.168.66.148:30736/hi
hi
Question 8
Configure the environment
~]$ kubectl config use-context k8s
Task
Scale the deployment presentation to 3 pods.
# First create a deployment named presentation (the exam already provides it)
[root@k8s-master1 ~]# kubectl create deployment presentation --image=nginx:alpine
#调整副本数为3
[root@k8s-master1 ~]# kubectl scale --replicas=3 deployment presentation
deployment.apps/presentation scaled
[root@k8s-master1 ~]# kubectl get deployment presentation
NAME READY UP-TO-DATE AVAILABLE AGE
presentation 3/3 3 3 69s
Question 9
Configure the environment
~]$ kubectl config use-context k8s
Task
Schedule a pod according to the following requirements:
- Name: nginx-kusc00401
- Image: nginx
- Node selector: disk=spinning
Solution
Tip: search kubernetes.io for nodeSelector and adapt the example.
# Pick a node and give it the disk=spinning label (the exam environment already has one)
[root@k8s-master1 ~]# kubectl label nodes k8s-node1 disk=spinning
node/k8s-node1 labeled
[root@k8s-master1 ~]# kubectl get nodes k8s-node1 --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-node1 Ready <none> 2d v1.18.8 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=spinning,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
# Create the pod
[root@k8s-master1 ~]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    app: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: spinning
[root@k8s-master1 ~]# kubectl apply -f pod.yaml
pod/nginx-kusc00401 created
# Verify
[root@k8s-master1 ~]# kubectl get po -o wide nginx-kusc00401 --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx-kusc00401 1/1 Running 0 47s 10.244.1.9 k8s-node1 <none> <none> app=nginx-kusc00401
Question 10
Configure the environment
~]$ kubectl config use-context k8s
Task
Check how many nodes are Ready (excluding nodes that carry a NoSchedule taint), and write the number of qualifying nodes to /opt/KUSC00402/kusc00402.txt.
[root@k8s-master1 ~]# kubectl get node | grep -i ready
k8s-master1 Ready master 2d v1.18.8
k8s-node1 Ready <none> 2d v1.18.8
k8s-node2 Ready <none> 2d v1.18.8
[root@k8s-master1 ~]# kubectl describe node | grep -i Noschedule
Taints: node-role.kubernetes.io/master:NoSchedule
[root@k8s-master1 ~]# mkdir -p /opt/KUSC00402
[root@k8s-master1 ~]# echo 2 > /opt/KUSC00402/kusc00402.txt
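# The same count can also be derived in one pipeline (a rough sketch: it only inspects the Taints: summary line that kubectl describe prints per node, so double-check nodes that carry several taints):
kubectl describe nodes $(kubectl get nodes | grep -w Ready | awk '{print $1}') | grep -i taints | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt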
Question 11
Configure the environment
~]$ kubectl config use-context k8s
Task
Create a pod named kucc8 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified):
nginx + redis + memcached + consul
Solution
This just means creating one pod that runs four containers; the applications do not call each other. Tip: search kubernetes.io for pod and adapt the template.
[root@k8s-master1 ~]# cat pod2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
  labels:
    app: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul
[root@k8s-master1 ~]# kubectl apply -f pod2.yaml
# Verify the pod is up
[root@k8s-master1 ~]# kubectl get po kucc8
NAME READY STATUS RESTARTS AGE
kucc8 4/4 Running 0 2m32s
Question 12
Configure the environment
~]$ kubectl config use-context hk8s
Task
Create a PersistentVolume with name app-config, of capacity 1Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /src/app-config.
Solution
Tip: search kubernetes.io for hostPath and copy the example.
Use kubectl explain pv.spec to check the fields; anything that is not required can be left out, so write as little as possible.
[root@k8s-master1 ~]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /src/app-config
[root@k8s-master1 ~]# mkdir -p /src/app-config
[root@k8s-master1 ~]# kubectl apply -f pv.yaml
persistentvolume/app-config created
[root@k8s-master1 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
app-config 1Gi ROX Retain Available 5s
Question 13
Configure the environment
~]$ kubectl config use-context ok8s
Task (weight 7%)
Create a new PersistentVolumeClaim:
- Name: pv-volume
- Class: csi-hostpath-sc
- Capacity: 10Mi
Create a new pod which mounts the PersistentVolumeClaim as a volume:
- Name: web-nginx
- Image: nginx
- Mount path: /usr/share/nginx/html
Configure the new pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.
Solution
1. To create a PVC from a StorageClass, search kubernetes.io for Dynamic Volume Provisioning.
2. To mount the PVC in a pod, search kubernetes.io for persistent, find the Persistent Volumes article, and copy from it.
$ cat pod3.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-hostpath-sc
  resources:
    requests:
      storage: 10Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: web-nginx
spec:
  volumes:
  - name: web
    persistentVolumeClaim:
      claimName: pv-volume
  containers:
  - name: web-nginx
    image: nginx
    volumeMounts:
    - name: web
      mountPath: /usr/share/nginx/html
# Resize the PVC and record the change
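# A sketch of the resize step (expansion only takes effect if the StorageClass allows volume expansion):
kubectl edit pvc pv-volume --record
# change spec.resources.requests.storage from 10Mi to 70Mi, then save and exit
kubectl get pvc pv-volume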
Question 14
Configure the environment
~]$ kubectl config use-context k8s
Task
Monitor the logs of pod bar and:
- extract log lines corresponding to the error unable-to-access-website
- write them to /opt/KUTR00101/bar
Solution
# First create a pod named bar (the exam already provides it)
[root@k8s-master1 ~]# cat pod-bar.yaml
apiVersion: v1
kind: Pod
metadata:
  name: bar
  labels:
    app: bar
spec:
  containers:
  - name: nginx
    image: nginx
[root@k8s-master1 ~]# kubectl apply -f pod-bar.yaml
# Filter the logs as the task requires
[root@k8s-master1 ~]# mkdir -p /opt/KUTR00101
[root@k8s-master1 ~]# kubectl logs bar | grep unable-to-access-website > /opt/KUTR00101/bar
Question 15
Configure the environment
~]$ kubectl config use-context k8s
Context
Without changing its existing containers, an existing pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Task (7%)
Add a busybox sidecar container to the existing pod big-corp-app. The new sidecar container has to run the following command:
/bin/sh -c tail -n+1 /var/log/big-corp-app.log
Use a volume mount named logs to make the file /var/log/big-corp-app.log available to the sidecar container.
# Note:
Don't modify the existing container.
Don't modify the path of the log file; both containers must access it at `/var/log/big-corp-app.log`
Solution
Inject a sidecar container into the pod that tails the application log to the console; kubectl logs can then show the pod's log. Tip: search kubernetes.io for sidecar, follow the Logging Architecture article, and fill in its example.
# Create the application big-corp-app (the exam already provides it)
[root@k8s-master1 ~]# cat pod-big-corp-app.yaml
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
[root@k8s-master1 ~]# kubectl apply -f pod-big-corp-app.yaml
# To modify the pod, first export its YAML, edit it, delete the original pod, and then apply the edited file
[root@k8s-master1 ~]# kubectl get po big-corp-app -oyaml > big-corp-app2.yaml
[root@k8s-master1 ~]# vim big-corp-app2.yaml
# The following content was added
spec:
  containers:
  - image: busybox
    name: big-corp-app
    ...
    volumeMounts:
    ...
    - mountPath: /var/log
      name: logs
  - name: busybox
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - mountPath: /var/log
      name: logs
  volumes:
  ...
  - name: logs
    emptyDir: {}
[root@k8s-master1 ~]# kubectl delete po big-corp-app
pod "big-corp-app" deleted
[root@k8s-master1 ~]# kubectl apply -f big-corp-app2.yaml
pod/big-corp-app created
# Test
[root@k8s-master1 ~]# kubectl logs big-corp-app -c busybox -f
Question 16
Configure the environment
~]$ kubectl config use-context k8s
Task
From the pods with label name=cpu-loader, find the pods running high-CPU workloads and write the name of the pod consuming the most CPU to the file /opt/KUTR00401/KUTR0401.txt (which already exists).
Solution
[root@k8s-master1 ~]# kubectl top pod --selector="name=cpu-loader" --all-namespaces
# Find the pod with the highest CPU usage and write its name to /opt/KUTR00401/KUTR0401.txt
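# For example (a sketch; --sort-by=cpu lists the heaviest consumer first, and <top-pod-name> is a placeholder for whatever that turns out to be):
kubectl top pod -l name=cpu-loader --all-namespaces --sort-by=cpu
echo "<top-pod-name>" > /opt/KUTR00401/KUTR0401.txt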
Question 17
Configure the environment
~]$ kubectl config use-context k8s
Task
One of the worker nodes is NotReady; bring it back to the Ready state.
Solution
# The answers floating around online all boil down to the kubelet not running
systemctl start kubelet
systemctl enable kubelet
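# A sketch of the usual workflow (the node name wk8s-node-0 is an assumption; use whatever node kubectl get nodes reports as NotReady):
ssh wk8s-node-0
sudo -i
systemctl status kubelet        # typically shows inactive or failed
systemctl enable --now kubelet
# back on the student host:
kubectl get nodes               # the node should return to Ready shortly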
Exam Tips
- 1. You can ask the proctor to communicate in Chinese, and the questions can be switched between Chinese and English.
- 2. Always use the notepad: paste copied content into it first, then run it on the command line.
- 3. Try to schedule the exam for the morning.
- 4. Set up kubectl auto-completion:
  source /usr/share/bash-completion/bash_completion
  echo 'source <(kubectl completion bash)' >> ~/.bashrc
  source ~/.bashrc
- 5. If it can be done on the command line, never write YAML for it.
- 6. If it can be copied and pasted, never type it by hand.
- 7. After switching cluster contexts, run kubectl get node first to confirm you are on the right cluster before doing anything else.