Kubernetes: Creating a PV Backed by Ceph RBD
1. Create the ceph-secret Secret object. This Secret is used by the Kubernetes RBD volume plugin to authenticate against the Ceph cluster. Fetch the client.admin keyring value and base64-encode it; run this on master1-admin (the Ceph admin node):
```
[root@master1-admin ~]# ceph auth get-key client.admin | base64
QVFDOWF4eGhPM0UzTlJBQUJZZnVCMlZISVJGREFCZHN0UGhMc3c9PQ==
```

2. Create the Ceph Secret from that value; run this on the Kubernetes control node:
```
[root@master ceph]# cat ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFDOWF4eGhPM0UzTlJBQUJZZnVCMlZISVJGREFCZHN0UGhMc3c9PQ==
[root@master ceph]# kubectl apply -f ceph-secret.yaml
secret/ceph-secret created
```
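As an optional sanity check (not part of the original walkthrough), the stored key can be decoded and compared against the keyring value obtained on the Ceph admin node:

```
# Hypothetical verification step: print the Secret's key field and decode it;
# the output should match `ceph auth get-key client.admin`.
kubectl get secret ceph-secret -o jsonpath='{.data.key}' | base64 --decode
```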
3. Back on the Ceph admin node, create a pool and then create a block device (RBD image) in it. The object-map, fast-diff and deep-flatten features are disabled because the kernel RBD client on the worker nodes typically does not support them:

```
[root@master1-admin ~]# ceph osd pool create k8stest 56
pool 'k8stest' created
[root@master1-admin ~]# rbd create rbda -s 1024 -p k8stest
[root@master1-admin ~]# rbd feature disable k8stest/rbda object-map fast-diff deep-flatten
[root@master1-admin ~]# ceph osd pool ls
rbd
cephfs_data
cephfs_metadata
k8srbd1
k8stest
```
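To confirm the image exists and see which features remain enabled after the `rbd feature disable` call, the image can be inspected (an optional check, not shown in the original session):

```
# List the images in the pool and show the metadata of the rbda image,
# including its size and the feature set left enabled.
rbd ls -p k8stest
rbd info k8stest/rbda
```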
4. Create the PV:

```
[root@master ceph]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - '192.168.0.5:6789'
      - '192.168.0.6:6789'
      - '192.168.0.7:6789'
    pool: k8stest
    image: rbda
    user: admin
    secretRef:
      name: ceph-secret
    fsType: xfs
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle
[root@master ceph]# kubectl apply -f pv.yaml
persistentvolume/ceph-pv created
[root@master ceph]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM
ceph-pv   1Gi        RWO            Recycle          Available
```

Next, create a PVC that binds to this PV:

```
[root@master ceph]# cat pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
[root@master ceph]# kubectl get pvc
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ceph-pvc   Bound    ceph-pv   1Gi        RWO                           4s
[root@master ceph]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
ceph-pv   1Gi        RWO            Recycle          Bound    default/ceph-pvc                           5h23m
```

Then create a Deployment whose pods mount the PVC:

```
[root@master ceph]# cat pod-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2   # tells deployment to run 2 pods matching the template
  template:     # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/ceph-data"
          name: ceph-data
      volumes:
      - name: ceph-data
        persistentVolumeClaim:
          claimName: ceph-pvc
[root@master ceph]# kubectl apply -f pod-2.yaml
deployment.apps/nginx-deployment created
[root@master ceph]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-d9f89fd7c-v8rxb   1/1     Running   0          12s   10.233.90.179   node1   <none>           <none>
nginx-deployment-d9f89fd7c-zfwzj   1/1     Running   0          12s   10.233.90.178   node1   <none>           <none>
```

From the output above, the two pods (both scheduled onto node1) are able to share-mount the same PVC even though its access mode is ReadWriteOnce.
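To verify the shared mount end to end, one can write a file from one replica and read it back from the other. A minimal sketch, using the pod names from the `kubectl get pod` output above (yours will differ):

```
# Write a test file through the first replica's /ceph-data mount ...
kubectl exec nginx-deployment-d9f89fd7c-v8rxb -- sh -c 'echo hello > /ceph-data/test.txt'
# ... then read it back through the second replica on the same node.
kubectl exec nginx-deployment-d9f89fd7c-zfwzj -- cat /ceph-data/test.txt
```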
Note: characteristics of Ceph RBD block storage

- Ceph RBD block storage can be share-mounted with ReadWriteOnce across pods on the same node.
- Ceph RBD block storage can be share-mounted with ReadWriteOnce by multiple containers of the same pod on the same node.
- Ceph RBD block storage cannot be share-mounted with ReadWriteOnce across nodes.
- If the node hosting a pod that uses Ceph RBD goes down, the pod will be rescheduled to another node, but because RBD cannot be mounted on multiple nodes at the same time and the failed pod cannot automatically release the PV, the new pod will not run normally.
Deployment update behavior:

When a Deployment triggers a rolling update, it ensures that at least 75% of the desired Pods are running (the maximum unavailable fraction is 25%). So with a single-replica Deployment, it will always create a new pod first and only shut the old pod down after the new one is running normally.

By default it also ensures that at most 25% more Pods than the desired number are started during the update.
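These percentages map to the Deployment's rolling-update strategy fields. A minimal sketch of how they would be set explicitly on the nginx-deployment above; 25%/25% are also the Kubernetes defaults, and only the fields relevant to the strategy are shown:

```
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at least 75% of the desired pods stay running
      maxSurge: 25%         # at most 25% extra pods are created during the update
  # replicas, selector and template are unchanged from pod-2.yaml above
```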
Problem:

Combining the RBD shared-mount characteristics above with the Deployment update behavior, the cause of the failure is as follows:

When a Deployment triggers an update, it first creates a new pod and waits for it to become healthy before deleting the old pod, in order to keep the service available. If the new pod is scheduled onto a different node than the old pod, the RBD volume cannot be attached there while it is still mounted on the old node, and the update fails.
Solutions:

1. Use shared storage that supports mounting across nodes and across pods, such as CephFS or GlusterFS.
2. Label a node and restrict the pods managed by the Deployment to that fixed node (not recommended: if that node goes down, the service goes down with it); a sketch of this approach follows.
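A minimal sketch of option 2. The node name (node1) and the label key/value (ceph-rbd=true) are chosen here for illustration and are not from the original article:

```
# Assumes the target node has been labelled beforehand, e.g.:
#   kubectl label node node1 ceph-rbd=true
# Pod template fragment for the Deployment: pin all replicas to the labelled node.
spec:
  template:
    spec:
      nodeSelector:
        ceph-rbd: "true"
```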
Summary

Ceph RBD works well as the backing store for a statically provisioned PV/PVC, but a ReadWriteOnce RBD volume can only be shared by pods on the same node. For workloads whose pods may be rescheduled across nodes, use a cross-node shared filesystem such as CephFS or GlusterFS instead, or pin the pods to a single node and accept the availability risk.