Question 1
Create a ClusterRole named deployment-clusterrole that grants only the create permission on Deployments, DaemonSets, and StatefulSets.
In the namespace app-team1, create a ServiceAccount named cicd-token and bind the ClusterRole created above to that ServiceAccount.
Create the ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: deployment-clusterrole
rules:
- apiGroups: ["apps"]   # Deployments, StatefulSets and DaemonSets live in the "apps" API group, not the core ("") group
  resources: ["deployments", "statefulsets", "daemonsets"]
  verbs: ["create"]
Create the ServiceAccount
kubectl create sa cicd-token -n app-team1
Create the RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-rolebinding
  namespace: app-team1
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1
roleRef:
  kind: ClusterRole
  name: deployment-clusterrole
  apiGroup: rbac.authorization.k8s.io
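With the binding in place, the grant can be spot-checked with `kubectl auth can-i` (a quick sanity check; it assumes kubectl access to the same cluster):

```shell
# Should print "yes": the ServiceAccount may create Deployments in app-team1
kubectl auth can-i create deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1

# Should print "no": the role grants create only, not delete
kubectl auth can-i delete deployments \
  --as=system:serviceaccount:app-team1:cicd-token -n app-team1
```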
Question 2
Mark the node named ek8s-node-1 as unschedulable, and reschedule all pods currently running on it.
kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master    Ready    control-plane,master   47h   v1.20.1
node-02   Ready    worker                 47h   v1.20.1
node-03   Ready    worker                 47h   v1.20.1

kubectl get po -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE      NOMINATED NODE   READINESS GATES
nginx-deployment-66b6c48dd5-5b4n9   1/1     Running   0          46h   10.244.2.19   node-03   <none>           <none>
nginx-deployment-66b6c48dd5-9557j   1/1     Running   0          42h   10.244.2.20   node-03   <none>           <none>
nginx-deployment-66b6c48dd5-b6lln   1/1     Running   0          42h   10.244.2.21   node-03   <none>           <none>
Cordon and drain node-03
kubectl cordon node-03
kubectl drain node-03 --delete-local-data --ignore-daemonsets --force
kubectl get po -o wide
NAME                                READY   STATUS    RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
nginx-deployment-66b6c48dd5-fdxfn   1/1     Running   0          116s   10.244.1.37   node-02   <none>           <none>
nginx-deployment-66b6c48dd5-krrqk   1/1     Running   0          116s   10.244.1.40   node-02   <none>           <none>
nginx-deployment-66b6c48dd5-pxhwf   1/1     Running   0          116s   10.244.1.36   node-02   <none>           <none>
Question 3
The existing Kubernetes cluster is running version 1.18.8. Upgrade all of the Kubernetes control plane components on the master node, and only on the master node, to version 1.19.0. Also upgrade kubelet and kubectl on the master node.
# Mark the node as unschedulable
kubectl cordon k8s-master
# Evict the pods
kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force
# Upgrade the packages (Debian/Ubuntu syntax; on RHEL-family systems use yum install kubeadm-1.19.0 and so on)
apt-get install -y kubeadm=1.19.0-00 kubelet=1.19.0-00 kubectl=1.19.0-00
# Apply the control plane upgrade
kubeadm upgrade apply v1.19.0
# Restart kubelet so the new version takes effect
systemctl restart kubelet
# Make the node schedulable again
kubectl uncordon k8s-master
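Before `kubeadm upgrade apply`, it is worth confirming that the new kubeadm binary is actually installed and that the target version is reachable (these commands assume the upgrade steps above):

```shell
kubeadm version        # should now report v1.19.0
kubeadm upgrade plan   # lists the versions the cluster can be upgraded to
kubectl get nodes      # after the upgrade, the master's VERSION column should show v1.19.0
```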
Question 4
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save the snapshot to /etc/data/etcd-snapshot.db.
Then restore the existing previous snapshot located at /var/lib/backup/etcd-snapshot-previoys.db.
The following TLS certificates and key are provided for connecting to the server with etcdctl:
CA certificate: /opt/KUIN000601/ca.crt
Client certificate: /opt/KUIN000601/etcd-client.crt
Client key: /opt/KUIN000601/etcd-client.key
# Backup: save a snapshot to the required path and file name
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" \
  --cacert=/opt/KUIN000601/ca.crt \
  --cert=/opt/KUIN000601/etcd-client.crt \
  --key=/opt/KUIN000601/etcd-client.key \
  snapshot save /etc/data/etcd-snapshot.db

# Restore: restore from the specified snapshot file
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" \
  --cacert=/opt/KUIN000601/ca.crt \
  --cert=/opt/KUIN000601/etcd-client.crt \
  --key=/opt/KUIN000601/etcd-client.key \
  snapshot restore /var/lib/backup/etcd-snapshot-previoys.db --data-dir=/var/lib/etcd
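A freshly saved snapshot can be sanity-checked offline with `etcdctl snapshot status`, which reads the file directly and needs no running etcd:

```shell
ETCDCTL_API=3 etcdctl snapshot status /etc/data/etcd-snapshot.db -w table
# prints the hash, revision, total key count and size of the snapshot
```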
Question 5
Create a NetworkPolicy for the pods in namespace internal that allows access only from pods in the same namespace, and only on port 9000:
pods from any other namespace must be denied;
traffic to any port other than 9000 must be denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: all-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}   # an empty podSelector under "from" matches all pods in the same namespace only
    ports:
    - port: 9000        # the question requires port 9000, not 80
Question 6
Reconfigure the existing deployment front-end: add a port to the container nginx with name http and port 80/tcp. Then create a service named front-end-svc that exposes the container port named http.
View the existing deployment
kubectl get deployment
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
front-end   1/1     1            1           18s
Edit the deployment and add the port configuration
kubectl edit deployment front-end

spec:
  containers:
  - image: nginx:1.14.2
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
Expose the service
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=80 --type=NodePort
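After exposing the deployment, the service can be checked for a live backend (assumes the same cluster as above):

```shell
kubectl get svc front-end-svc         # the NodePort service should exist
kubectl get endpoints front-end-svc   # should list the nginx pod IP on port 80
```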
Question 7
Create an Ingress that exposes the specified port of the specified Service.
View the cluster resources
kubectl get svc,po -n ing-internal
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/hi   NodePort   10.110.68.143   <none>        5678:31873/TCP   2m17s

NAME        READY   STATUS    RESTARTS   AGE
pod/nginx   1/1     Running   0          21m
Create the Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
Test access
curl 10.234.2.12/hi
hi
Question 8
Scale the specified deployment to 6 pods.
kubectl scale deployment loadbalancer --replicas=6
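Whether the scale-out succeeded is visible in the deployment status (assumes the same cluster):

```shell
kubectl get deployment loadbalancer   # READY should reach 6/6 once the new pods start
```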
Question 9
Create a pod named nginx-kusc00401 with the image nginx, scheduled onto a node carrying the label disk=spinning.
View the node labels
kubectl get nodes --show-labels
Create the Pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
  labels:
    role: nginx-kusc00401
spec:
  nodeSelector:
    disk: spinning
  containers:
  - name: nginx
    image: nginx
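That the nodeSelector took effect can be confirmed from the pod's placement (assumes at least one node carries the disk=spinning label; otherwise the pod stays Pending):

```shell
kubectl get pod nginx-kusc00401 -o wide   # the NODE column should show a node labelled disk=spinning
```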
Question 10
Check how many nodes are healthy (Ready), excluding those tainted with NoSchedule, and write the result to the specified file.
kubectl describe nodes | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             <none>
echo 2 > /opt/KUSC00402/kusc00402.txt
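The same count can be derived in one pipeline instead of by eye (a sketch; it assumes one taint line per node, as in the output above):

```shell
# Ready nodes minus nodes carrying a NoSchedule taint
ready=$(kubectl get nodes --no-headers | grep -cw Ready)
tainted=$(kubectl describe nodes | grep -i taint | grep -c NoSchedule)
echo $((ready - tainted)) > /opt/KUSC00402/kusc00402.txt
```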
Question 11
Create a Pod with multiple containers: nginx + redis + memcached + consul
apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached   # note: the image name is "memcached", not "memchached"
    name: memcached
  - image: consul
    name: consul
Question 12
Create a PV named app-config:
capacity 2Gi
access mode ReadWriteMany
volume type hostPath
path /src/app-config
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/src/app-config"
Question 13
Create a PVC of size 10Mi using the specified StorageClass. Mount it into an nginx container at the /var/nginx/html directory. Then resize the PVC from 10Mi to 70Mi.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: nfs
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: pv-volume
  volumes:
  - name: pv-volume
    persistentVolumeClaim:
      claimName: pv-volume
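The question also asks for the resize from 10Mi to 70Mi, which the manifests above do not cover. Assuming the "nfs" StorageClass allows volume expansion, the claim can be patched in place:

```shell
# Requires allowVolumeExpansion: true on the StorageClass
kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
# or interactively: kubectl edit pvc pv-volume, changing storage: 10Mi to 70Mi
```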
Question 14
Monitor the logs of pod foobar, extract the log lines matching the error unable-access-website, and write them to /opt/KUTR00101/foobar.
kubectl logs foobar | grep unable-access-website > /opt/KUTR00101/foobar
Question 15
Question 16
Check the CPU usage of the pods labelled name=cpu-user, and write the name of the pod with the highest CPU usage into the file /opt/KUTR00401/KUTR00401.txt.
kubectl top pod -l name=cpu-user -A
NAMESPACE   NAME         CPU(cores)   MEMORY(bytes)
default     cpu-user-1   45m          6Mi
default     cpu-user-2   38m          6Mi
default     cpu-user-3   35m          7Mi
default     cpu-user-4   32m          10Mi

echo 'cpu-user-1' >> /opt/KUTR00401/KUTR00401.txt
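`kubectl top` can also sort the rows for you, which avoids picking the winner by eye (assumes a kubectl/metrics-server recent enough to support `--sort-by`):

```shell
# Highest CPU consumer appears first
kubectl top pod -l name=cpu-user -A --sort-by=cpu
```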
Question 17
The node named wk8s-node-0 is in NotReady state. Restore it to Ready state, and make sure the fix persists across reboots (enable the service at boot).
# Connect to the NotReady node
ssh wk8s-node-0
# Get root privileges
sudo -i
# Check whether the kubelet service is running
systemctl status kubelet
# If the service is not running, start it
systemctl start kubelet
# Enable it at boot
systemctl enable kubelet
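The fix can be confirmed both on the node and from the control plane (assumes the steps above were run on wk8s-node-0):

```shell
systemctl is-active kubelet    # "active" once the service is running
systemctl is-enabled kubelet   # "enabled" once it will start on boot
kubectl get node wk8s-node-0   # STATUS should flip back to Ready once kubelet reconnects
```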