Kubernetes Advanced: Cloud-Native Storage with Ceph

Chapter 1: Installing Rook
For Rook versions above 1.3, do not back the cluster with a directory; use a dedicated raw disk instead. That is, add a new disk to each host, attach it, and leave it unformatted so Rook can use it directly. The disk and node layout used here is described below.

This lab needs a reasonably sized environment: every node should have at least 2 CPU cores and 4 GB of memory.

On Kubernetes 1.19 and later, the snapshot feature requires installing a snapshot controller separately.
Download the Rook installation files

Download the specified Rook version:
git clone --single-branch --branch v1.6.3 https://github.com/rook/rook.git

Change the configuration

cd rook/cluster/examples/kubernetes/ceph

Modify the Rook CSI image addresses. The defaults point to gcr.io, which is not reachable from mainland China, so the gcr images have to be synced to an Aliyun registry. For the version used in this document the sync has already been done, so simply edit the file:

vim operator.yaml

Change the default CSI image settings to:
ROOK_CSI_REGISTRAR_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-node-driver-registrar:v2.0.1"
ROOK_CSI_RESIZER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-resizer:v1.0.1"
ROOK_CSI_PROVISIONER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-provisioner:v2.0.4"
ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-snapshotter:v4.0.0"
ROOK_CSI_ATTACHER_IMAGE: "registry.cn-beijing.aliyuncs.com/dotbalo/csi-attacher:v3.0.2"

If you are on another version you will have to sync the images yourself; see for example https://blog.csdn.net/sinat_35543900/article/details/103290782
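If you do need to sync a different version yourself, the usual pattern is pull, re-tag, and push (a sketch only; the upstream image path and the target repository below are assumptions, substitute your own):

docker pull k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1    # assumed upstream path
docker tag  k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.0.1 registry.cn-beijing.aliyuncs.com/<your-repo>/csi-node-driver-registrar:v2.0.1
docker push registry.cn-beijing.aliyuncs.com/<your-repo>/csi-node-driver-registrar:v2.0.1
# repeat for the resizer, provisioner, snapshotter, and attacher images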
Still in the operator file: new Rook versions disable the discovery DaemonSet by default. Find ROOK_ENABLE_DISCOVERY_DAEMON and change it to true:
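The setting lives in the rook-ceph-operator-config ConfigMap section of operator.yaml; after the change it should look roughly like this (a sketch, only this key changes):

data:
  ROOK_ENABLE_DISCOVERY_DAEMON: "true"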
Deploy Rook

The deployment steps for version 1.6.3 are as follows:

cd cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml

Wait for the operator and discover containers to start:
[root@k8s-master01 ceph]# kubectl -n rook-ceph get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-65965c66b5-q4529   1/1     Running   0          7m43s
rook-discover-7bjbn                   1/1     Running   0          5m31s
rook-discover-dv4bn                   1/1     Running   0          5m31s
rook-discover-gbln2                   1/1     Running   0          5m31s
rook-discover-hlqrg                   1/1     Running   0          5m31s
rook-discover-np7pb                   1/1     Running   0          5m31s

Chapter 2: Creating the Ceph Cluster
Change the configuration

The main change is specifying where the OSD disks are located:
cd cluster/examples/kubernetes/ceph
vim cluster.yaml

New versions must use raw (unformatted) disks. In this setup, k8s-master02, k8s-node01, and k8s-node02 each have one newly added disk; run lsblk -f to find its device name. At least three nodes are recommended, otherwise the later experiments may run into problems. The relevant storage section is sketched below.
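A minimal sketch of the storage section of cluster.yaml for this layout (it assumes the new disk shows up as sdb on every node; replace the node and device names with the ones lsblk -f reports on your hosts):

  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: "k8s-master02"
        devices:
          - name: "sdb"
      - name: "k8s-node01"
        devices:
          - name: "sdb"
      - name: "k8s-node02"
        devices:
          - name: "sdb"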
Create the Ceph cluster

kubectl create -f cluster.yaml

After creation, check the Pod status:
[root@k8s-master01 ceph]# kubectl -n rook-ceph get pod
NAME                                                    READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-cp2s5                                  3/3     Running     0          27m
csi-cephfsplugin-h4wb5                                  3/3     Running     0          27m
csi-cephfsplugin-jznvn                                  3/3     Running     0          27m
csi-cephfsplugin-k9q28                                  3/3     Running     0          27m
csi-cephfsplugin-provisioner-574976878-f5n7c            6/6     Running     0          27m
csi-cephfsplugin-provisioner-574976878-p7vcx            6/6     Running     0          27m
csi-cephfsplugin-z2645                                  3/3     Running     0          27m
csi-rbdplugin-7fzmv                                     3/3     Running     0          27m
csi-rbdplugin-7xsrp                                     3/3     Running     0          27m
csi-rbdplugin-b9lqh                                     3/3     Running     0          27m
csi-rbdplugin-dx2jk                                     3/3     Running     0          27m
csi-rbdplugin-provisioner-884fb5b55-dm5dl               6/6     Running     0          27m
csi-rbdplugin-provisioner-884fb5b55-z4p49               6/6     Running     0          27m
csi-rbdplugin-x4snv                                     3/3     Running     0          27m
rook-ceph-crashcollector-k8s-master02-f9db7d85d-lltdp   1/1     Running     0          17m
rook-ceph-crashcollector-k8s-node01-747795874c-5cdz6    1/1     Running     0          17m
rook-ceph-crashcollector-k8s-node02-5d4867cfb8-n74wn    1/1     Running     0          17m
rook-ceph-mgr-a-77bf97745c-4hqpp                        1/1     Running     0          17m
rook-ceph-mon-a-6d4444d6bf-jvlxw                        1/1     Running     0          19m
rook-ceph-mon-b-68b66fd889-x28bf                        1/1     Running     0          17m
rook-ceph-mon-c-54bb69686-v8ftf                         1/1     Running     0          17m
rook-ceph-operator-65965c66b5-q4529                     1/1     Running     0          50m
rook-ceph-osd-0-667c867b46-m8nnj                        1/1     Running     0          17m
rook-ceph-osd-1-56784d575b-vm8mc                        1/1     Running     0          17m
rook-ceph-osd-2-74f856bb8c-s2r69                        1/1     Running     0          17m
rook-ceph-osd-prepare-k8s-master02-nf7qn                0/1     Completed   0          16m
rook-ceph-osd-prepare-k8s-node01-jkm6g                  0/1     Completed   0          16m
rook-ceph-osd-prepare-k8s-node02-xr4rt                  0/1     Completed   0          16m
rook-discover-7bjbn                                     1/1     Running     0          48m
rook-discover-dv4bn                                     1/1     Running     0          48m
rook-discover-gbln2                                     1/1     Running     0          48m
rook-discover-hlqrg                                     1/1     Running     0          48m
rook-discover-np7pb                                     1/1     Running     0          48m

Note that the rook-ceph-osd-x Pods must exist and be healthy. If all of the Pods above are normal, the cluster installation can be considered successful.
More configuration options: https://rook.io/docs/rook/v1.6/ceph-cluster-crd.html
Install the Ceph snapshot controller

Kubernetes 1.19 and later needs a separately installed snapshot controller for PVC snapshots to work, so install it now.

The snapshot controller manifests live in the k8s-ha-install project used during cluster installation; switch to its 1.20.x branch, for example:
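A sketch of switching branches (the repository path and exact branch name are assumptions; use the copy of k8s-ha-install you cloned during cluster installation and pick its 1.20.x branch):

cd /root/k8s-ha-install
git checkout manual-installation-v1.20.x    # assumed name of the 1.20.x branch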
Create the snapshot controller:
kubectl create -f snapshotter/ -n kube-system
[root@k8s-master01 k8s-ha-install]# kubectl get po -n kube-system -l app=snapshot-controller
NAME                    READY   STATUS    RESTARTS   AGE
snapshot-controller-0   1/1     Running   0          51s

Documentation: https://rook.io/docs/rook/v1.6/ceph-csi-snapshot.html
Chapter 3: Installing the Ceph Client Tools
[root@k8s-master01 k8s-ha-install]# cd /rook/cluster/examples/kubernetes/ceph
[root@k8s-master01 ceph]# kubectl create -f toolbox.yaml -n rook-ceph
deployment.apps/rook-ceph-tools created

Once the toolbox container is Running, you can run Ceph commands inside it:
[root@k8s-master01 ceph]# kubectl get po -n rook-ceph -l app=rook-ceph-tools
NAME                              READY   STATUS    RESTARTS   AGE
rook-ceph-tools-fc5f9586c-wq72t   1/1     Running   0          38s
[root@k8s-master01 ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-fc5f9586c-wq72t /]# ceph status
  cluster:
    id:     b23b3611-f332-40a7-bd4b-f555458ce160
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum a,b,c (age 7m)
    mgr: a(active, since 7m)
    osd: 3 osds: 3 up (since 7m), 3 in (since 10h)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     1 active+clean

[root@rook-ceph-tools-fc5f9586c-wq72t /]# ceph osd status
ID  HOST           USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8s-node01    1027M  18.9G      0        0       0        0   exists,up
 1  k8s-node02    1027M  18.9G      0        0       0        0   exists,up
 2  k8s-master02  1027M  18.9G      0        0       0        0   exists,up
[root@rook-ceph-tools-fc5f9586c-wq72t /]# ceph df
--- RAW STORAGE ---
CLASS  SIZE    AVAIL   USED    RAW USED  %RAW USED
hdd    60 GiB  57 GiB  11 MiB   3.0 GiB       5.02
TOTAL  60 GiB  57 GiB  11 MiB   3.0 GiB       5.02

--- POOLS ---
POOL                   ID  PGS  STORED  OBJECTS  USED  %USED  MAX AVAIL
device_health_metrics   1    1     0 B        0   0 B      0     18 GiB

Chapter 4: Ceph Dashboard
See the official documentation.
The simplest way to expose the service in minikube or similar environment is using the NodePort to open a port on the VM that can be accessed by the host. To create a service with the NodePort, save this yaml as dashboard-external-https.yaml.
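The dashboard-external-https.yaml shipped with Rook v1.6 looks roughly like the following (reproduced here as a sketch; prefer the copy in the cloned repository):

apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
    - name: dashboard
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort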
Create the service:

kubectl create -f dashboard-external-https.yaml

Check the service:
[root@k8s-master01 ceph]# kubectl -n rook-ceph get service
NAME                                     TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics                 ClusterIP   192.168.114.108   <none>        8080/TCP,8081/TCP   11h
csi-rbdplugin-metrics                    ClusterIP   192.168.214.223   <none>        8080/TCP,8081/TCP   11h
rook-ceph-mgr                            ClusterIP   192.168.5.9       <none>        9283/TCP            11h
rook-ceph-mgr-dashboard                  ClusterIP   192.168.144.39    <none>        8443/TCP            11h
rook-ceph-mgr-dashboard-external-https   NodePort    192.168.195.164   <none>        8443:31250/TCP      8m53s
rook-ceph-mon-a                          ClusterIP   192.168.71.28     <none>        6789/TCP,3300/TCP   11h
rook-ceph-mon-b                          ClusterIP   192.168.137.117   <none>        6789/TCP,3300/TCP   11h
rook-ceph-mon-c                          ClusterIP   192.168.245.155   <none>        6789/TCP,3300/TCP   11h

Access
Username: admin
Password: retrieve it with the command shown below
Access the dashboard through any Ceph node's IP plus the NodePort shown above.
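The admin password is stored in a Secret; the Rook documentation retrieves it like this:

kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode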
Resolving the health warning: https://docs.ceph.com/en/octopus/rados/operations/health-checks/

How to fix the "mons are allowing insecure global_id reclaim" warning shown by ceph -s:

Disable the insecure mode with the following command (run it in the toolbox):
ceph config set mon auth_allow_insecure_global_id_reclaim false

Chapter 5: Using Ceph Block Storage
Block storage is typically used when a single Pod mounts its own volume, much like attaching a new disk to a server for one application only.

Reference documentation

Create the StorageClass and the Ceph storage pool
[root@k8s-master01 ceph]# pwd
/rook/cluster/examples/kubernetes/ceph
[root@k8s-master01 ceph]# vim csi/rbd/storageclass.yaml

Because this is a test environment, the replica count is set to 2 (it must not be 1); in production it should be at least 3 and must not exceed the number of OSDs.
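The parts of csi/rbd/storageclass.yaml touched here look roughly like this (a sketch of the upstream file; only the replica size is changed, the remaining StorageClass parameters stay as shipped):

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 2          # test environment only; use at least 3 in production, and no more than the OSD count
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
# ... parameters unchanged from the upstream example ...
reclaimPolicy: Delete
allowVolumeExpansion: true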
Create the StorageClass and the pool:

[root@k8s-master01 ceph]# kubectl create -f csi/rbd/storageclass.yaml -n rook-ceph
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

Check the created cephblockpool and StorageClass (note that StorageClass is not namespaced):

[root@k8s-master01 ceph]# kubectl get cephblockpool -n rook-ceph
NAME          AGE
replicapool   57s
[root@k8s-master01 ceph]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   74s

The pool should now be visible in the Ceph dashboard; if it is not shown, the creation did not succeed.
Mount test

Create a MySQL service:

[root@k8s-master01 ceph]# cd /rook/cluster/examples/kubernetes
[root@k8s-master01 kubernetes]# kubectl create -f mysql.yaml
service/wordpress-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/wordpress-mysql created

This file contains a PVC definition:
[root@k8s-master01 kubernetes]# cat mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: rook-ceph-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
    tier: mysql
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The PVC references the StorageClass created earlier, which dynamically provisions a PV and creates the corresponding storage in Ceph. From then on, any PVC only needs to set storageClassName to that StorageClass name to use Rook's Ceph.

For a StatefulSet, set the storageClassName inside volumeClaimTemplates to the StorageClass name and a PV will be provisioned dynamically for each Pod (see the volumeClaimTemplates example at the end of this chapter).

The volumes section of the MySQL Deployment mounts that PVC; claimName is the name of the PVC.

Because multiple MySQL instances cannot share the same data directory, block storage is the usual choice here: it behaves like a dedicated disk attached to the MySQL instance.

After creation you can check the PVC and PV:
[root@k8s-master01 kubernetes]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-7d7668d8-8d51-4456-aa2e-bedc18c251fd   20Gi       RWO            rook-ceph-block   39m
[root@k8s-master01 kubernetes]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-7d7668d8-8d51-4456-aa2e-bedc18c251fd   20Gi       RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            39m

The corresponding image is also now visible in the Ceph dashboard.
StatefulSet · volumeClaimTemplates
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "rook-ceph-block"
        resources:
          requests:
            storage: 1Gi

Chapter 6: Using the Shared Filesystem (CephFS)
A shared filesystem is typically used when multiple Pods need to share the same storage.

Official documentation

Create a shared filesystem
[root@k8s-master01 kubernetes]# cd /rook/cluster/examples/kubernetes/ceph
[root@k8s-master01 ceph]# cat filesystem.yaml
#################################################################################################################
# Create a filesystem with settings with replication enabled for a production environment.
# A minimum of 3 OSDs on different nodes are required in this example.
# kubectl create -f filesystem.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode:
        none
      # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
      # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
        # Disallow setting pool with replica 1, this could lead to data loss without recovery.
        # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
        requireSafeReplicaSize: true
      parameters:
        # Inline compression mode for the data pool
        # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
        compression_mode:
          none
        # gives a hint (%) to Ceph in terms of expected consumption of the total cluster capacity of a given pool
        # for more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
        #target_size_ratio: ".5"
  # Whether to preserve filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    # The affinity rules to apply to the mds deployment
    placement:
      #  nodeAffinity:
      #    requiredDuringSchedulingIgnoredDuringExecution:
      #      nodeSelectorTerms:
      #      - matchExpressions:
      #        - key: role
      #          operator: In
      #          values:
      #          - mds-node
      #  topologySpreadConstraints:
      #  tolerations:
      #  - key: mds-node
      #    operator: Exists
      #  podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - rook-ceph-mds
            # topologyKey: kubernetes.io/hostname will place MDS across different hosts
            topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - rook-ceph-mds
              # topologyKey: */zone can be used to spread MDS across different AZ
              # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> in k8s cluster if your cluster is v1.16 or lower
              # Use <topologyKey: topology.kubernetes.io/zone> in k8s cluster is v1.17 or upper
              topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
    resources:
    # The requests and limits set here, allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"
    # priorityClassName: my-priority-class
  mirroring:
    enabled: false

[root@k8s-master01 ceph]# kubectl create -f filesystem.yaml
cephfilesystem.ceph.rook.io/myfs created

After creation the MDS containers start; wait until they are Running before creating PVs.
[root@k8s-master01 ceph]# kubectl -n rook-ceph get pod -l app=rook-ceph-mds
NAME                                    READY   STATUS    RESTARTS   AGE
rook-ceph-mds-myfs-a-5c4fbfb9bd-xfhpw   1/1     Running   0          33s
rook-ceph-mds-myfs-b-678b4976d6-d8f9p   1/1     Running   0          32s

You can also check the status in the Ceph dashboard.
Create a StorageClass for the shared filesystem
[root@k8s-master01 ceph]# cd /rook/cluster/examples/kubernetes/ceph/csi/cephfs
[root@k8s-master01 cephfs]# cat storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com # driver:namespace:operator
parameters:
  # clusterID is the namespace where operator is deployed.
  clusterID: rook-ceph # namespace:cluster

  # CephFS filesystem name into which the volume shall be created
  fsName: myfs

  # Ceph pool into which the volume shall be created
  # Required for provisionVolume: "true"
  pool: myfs-data0

  # The secrets contain Ceph admin credentials. These are generated automatically by the operator
  # in the same namespace as the cluster.
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph # namespace:cluster
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph # namespace:cluster

  # (optional) The driver can use either ceph-fuse (fuse) or ceph kernel client (kernel)
  # If omitted, default volume mounter will be used - this is determined by probing for ceph-fuse
  # or by setting the default mounter explicitly via --volumemounter command-line argument.
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  # uncomment the following line for debugging
  #- debug

[root@k8s-master01 cephfs]# kubectl create -f storageclass.yaml
storageclass.storage.k8s.io/rook-cephfs created

From now on, setting a PVC's storageClassName to rook-cephfs creates shared-filesystem storage, similar to NFS, that multiple Pods can use to share data.
Mount test
[root@k8s-master01 cephfs]# pwd
/rook/cluster/examples/kubernetes/ceph/csi/cephfs
[root@k8s-master01 cephfs]# ls
kube-registry.yaml  pod.yaml  pvc-clone.yaml  pvc-restore.yaml  pvc.yaml  snapshotclass.yaml  snapshot.yaml  storageclass.yaml
[root@k8s-master01 cephfs]# cat kube-registry.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-registry
  namespace: kube-system
  labels:
    k8s-app: kube-registry
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-registry
  template:
    metadata:
      labels:
        k8s-app: kube-registry
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
        - name: registry
          image: registry:2
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
          env:
            # Configuration reference: https://docs.docker.com/registry/configuration/
            - name: REGISTRY_HTTP_ADDR
              value: :5000
            - name: REGISTRY_HTTP_SECRET
              value: "Ple4seCh4ngeThisN0tAVerySecretV4lue"
            - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
              value: /var/lib/registry
          volumeMounts:
            - name: image-store
              mountPath: /var/lib/registry
          ports:
            - containerPort: 5000
              name: registry
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: registry
          readinessProbe:
            httpGet:
              path: /
              port: registry
      volumes:
        - name: image-store
          persistentVolumeClaim:
            claimName: cephfs-pvc
            readOnly: false

[root@k8s-master01 cephfs]# kubectl create -f kube-registry.yaml
persistentvolumeclaim/cephfs-pvc created
deployment.apps/kube-registry created

Check the created PVC and Pods:
[root@k8s-master01 cephfs]# kubectl get po -n kube-system -l k8s-app=kube-registry
NAME                             READY   STATUS              RESTARTS   AGE
kube-registry-5d6d8877f7-ngp2x   1/1     Running             0          34s
kube-registry-5d6d8877f7-q7l8p   0/1     ContainerCreating   0          34s
kube-registry-5d6d8877f7-qmtqv   0/1     ContainerCreating   0          34s
[root@k8s-master01 cephfs]# kubectl get pvc -n kube-system
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-6b08e697-354f-473d-9423-459e154e3d46   1Gi        RWX            rook-cephfs    58s

- Note that claimName is the name of the PVC.
- Three Pods have been created here, all sharing one volume mounted at /var/lib/registry, so the three containers share the data in that directory.

Nginx mount test
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: nginx
  type: ClusterIP
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-share-pvc
spec:
  storageClassName: rook-cephfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  replicas: 3 # by default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: nginx
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: nginx-share-pvc

Chapter 7: Expanding a PVC
Expanding a shared-filesystem PVC requires Kubernetes 1.15 or later.
Expanding a block-storage PVC requires Kubernetes 1.16 or later.
PVC expansion also requires the ExpandCSIVolumes feature gate. Recent Kubernetes versions enable it by default; check whether your version does.
If the default is true, nothing needs to be enabled; if it is false, you have to turn the feature gate on.
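One way to check is to look at the feature-gate defaults listed in the component help output on a control-plane node (a sketch; the exact wording differs between versions):

kube-apiserver --help 2>&1 | grep ExpandCSIVolumes
# expected to show something like: ExpandCSIVolumes=true|false (BETA - default=true)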
Expanding a shared-filesystem PVC

Find the shared-filesystem StorageClass created earlier and make sure allowVolumeExpansion is set to true (new Rook versions default to true; if it is not, change it and apply the change with kubectl replace):
Find the PVC created in Chapter 6, change the requested size from 1Gi to 2Gi, then save and exit, for example:
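kubectl edit pvc cephfs-pvc -n kube-system
# in the editor, change spec.resources.requests.storage from 1Gi to 2Gi, then save and quit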
Check the size of the PV and the PVC:
[root@k8s-master01 cephfs]# kubectl get pvc -n kube-system
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-0149518a-7346-4d16-9030-97b2f9b8e9d2   2Gi        RWX            rook-cephfs    46m
[root@k8s-master01 cephfs]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS      REASON   AGE
pvc-0149518a-7346-4d16-9030-97b2f9b8e9d2   2Gi        RWX            Delete           Bound    kube-system/cephfs-pvc   rook-cephfs                47m
pvc-fd8b9860-c0d4-4797-8e1d-1fab880bc9fc   2Gi        RWO            Delete           Bound    default/mysql-pv-claim   rook-ceph-block            64m

Check whether the expansion is visible inside the container:
[root@k8s-master01 cephfs]# kubectl exec -ti kube-registry-66d4c7bf47-46bpq -n kube-system -- df -Th | grep "/var/lib/registry"
ceph            2.0G      0      2.0G   0% /var/lib/registry

The same steps can be used to expand it to 3Gi.
Expanding block storage

The steps are similar: find the PVC created in Chapter 5 and edit it directly.
[root@k8s-master01 cephfs]# kubectl edit pvc mysql-pv-claim
persistentvolumeclaim/mysql-pv-claim edited
[root@k8s-master01 cephfs]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-fd8b9860-c0d4-4797-8e1d-1fab880bc9fc   2Gi        RWO            rook-ceph-block   70m

At this point the PVC does not yet show the new size, but the PV has already been expanded:
[root@k8s-master01 cephfs]# kubectl get pv | grep default/mysql-pv-claim
pvc-fd8b9860-c0d4-4797-8e1d-1fab880bc9fc   3Gi   RWO   Delete   Bound   default/mysql-pv-claim   rook-ceph-block   75m

The image in the Ceph dashboard has also been expanded. The size shown by the PVC and inside the Pod lags behind; after roughly 5-10 minutes the expansion completes there as well.
Chapter 8: PVC Snapshots

Note: PVC snapshots require Kubernetes 1.17 or later.

Block storage snapshots

Create a snapshotClass:
[root@k8s-master01 rbd]# pwd
/rook/cluster/examples/kubernetes/ceph/csi/rbd
[root@k8s-master01 rbd]# kubectl create -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-rbdplugin-snapclass created

Create a snapshot
First, create a directory and a file inside the MySQL container created earlier:
[root@k8s-master01 rbd]# kubectl get po
NAME                               READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-m5gj5             1/1     Running   1          3d4h
wordpress-mysql-6965fc8cc8-6wt6j   1/1     Running   0          86m
[root@k8s-master01 rbd]# kubectl exec -ti wordpress-mysql-6965fc8cc8-6wt6j -- bash
root@wordpress-mysql-6965fc8cc8-6wt6j:/# ls
bin  boot  dev  docker-entrypoint-initdb.d  entrypoint.sh  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
root@wordpress-mysql-6965fc8cc8-6wt6j:/# cd /var/lib/mysql
root@wordpress-mysql-6965fc8cc8-6wt6j:/var/lib/mysql# ls
auto.cnf  ib_logfile0  ib_logfile1  ibdata1  lost+found  mysql  performance_schema
root@wordpress-mysql-6965fc8cc8-6wt6j:/var/lib/mysql# mkdir test_snapshot
root@wordpress-mysql-6965fc8cc8-6wt6j:/var/lib/mysql# ls
auto.cnf  ib_logfile0  ib_logfile1  ibdata1  lost+found  mysql  performance_schema  test_snapshot
root@wordpress-mysql-6965fc8cc8-6wt6j:/var/lib/mysql# echo "test for snapshot" > test_snapshot/1.txt
root@wordpress-mysql-6965fc8cc8-6wt6j:/var/lib/mysql# cat test_snapshot/1.txt
test for snapshot

Then change the source PVC in snapshot.yaml to the MySQL PVC created earlier:
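After the change, the relevant part of snapshot.yaml looks roughly like this (a sketch; the apiVersion may be v1beta1 or v1 depending on the snapshot CRDs installed earlier):

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: mysql-pv-claim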
Create the snapshot and check its status:
[root@k8s-master01 rbd]# kubectl create -f snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created
[root@k8s-master01 rbd]# kubectl get volumesnapshotclass
NAME                      DRIVER                       DELETIONPOLICY   AGE
csi-rbdplugin-snapclass   rook-ceph.rbd.csi.ceph.com   Delete           8m37s
[root@k8s-master01 rbd]# kubectl get volumesnapshot
NAME               READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rbd-pvc-snapshot   true         mysql-pv-claim                           3Gi           csi-rbdplugin-snapclass   snapcontent-715c2841-d1ba-493a-9eb9-52178df3c2e6   <invalid>      <invalid>

Create a PVC from a snapshot
To create a new PVC that already contains certain data, restore it from a snapshot:
[root@k8s-master01 rbd]# cat pvc-restore.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

[root@k8s-master01 rbd]# kubectl create -f pvc-restore.yaml
persistentvolumeclaim/rbd-pvc-restore created

Note: dataSource is the snapshot name, storageClassName is the StorageClass of the new PVC, and the requested storage must not be smaller than the size of the original PVC.
[root@k8s-master01 rbd]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim    Bound    pvc-fd8b9860-c0d4-4797-8e1d-1fab880bc9fc   3Gi        RWO            rook-ceph-block   95m
rbd-pvc-restore   Bound    pvc-d86a7535-2331-4fef-ae93-c570c8d3f9e7   3Gi        RWO            rook-ceph-block   2s

Verify the data
Create a container that mounts the restored PVC and check whether it contains the file created earlier:
[root@k8s-master01 rbd]# cat restore-check-snapshot-rbd.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: check-snapshot-restore
spec:
  selector:
    matchLabels:
      app: check
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: check
    spec:
      containers:
        - image: alpine:3.8
          name: check
          command:
            - sh
            - -c
            - sleep 36000
          volumeMounts:
            - name: check-mysql-persistent-storage
              mountPath: /mnt
      volumes:
        - name: check-mysql-persistent-storage
          persistentVolumeClaim:
            claimName: rbd-pvc-restore

[root@k8s-master01 rbd]# kubectl create -f restore-check-snapshot-rbd.yaml
deployment.apps/check-snapshot-restore created

Check whether the data is there:
[root@k8s-master01 rbd]# kubectl get po
NAME                                      READY   STATUS    RESTARTS   AGE
check-snapshot-restore-64b85c5f88-zvr62   1/1     Running   0          97s
nginx-6799fc88d8-m5gj5                    1/1     Running   1          3d5h
wordpress-mysql-6965fc8cc8-6wt6j          1/1     Running   0          104m
[root@k8s-master01 rbd]# kubectl exec -ti check-snapshot-restore-64b85c5f88-zvr62 -- ls
bin    etc    lib    mnt    root   sbin   sys    usr
dev    home   media  proc   run    srv    tmp    var
[root@k8s-master01 rbd]# kubectl exec -ti check-snapshot-restore-64b85c5f88-zvr62 -- ls /mnt
auto.cnf            ibdata1             performance_schema
ib_logfile0         lost+found          test_snapshot
ib_logfile1         mysql
[root@k8s-master01 rbd]# kubectl exec -ti check-snapshot-restore-64b85c5f88-zvr62 -- ls /mnt/test_snapshot
1.txt
[root@k8s-master01 rbd]# kubectl exec -ti check-snapshot-restore-64b85c5f88-zvr62 -- cat /mnt/test_snapshot/1.txt
test for snapshot

After the test checks out, clean up the test data (the snapshotClass can be kept; future RBD snapshots can reuse it):
[root@k8s-master01 rbd]# kubectl delete -f restore-check-snapshot-rbd.yaml -f pvc-restore.yaml -f snapshot.yaml
deployment.apps "check-snapshot-restore" deleted
persistentvolumeclaim "rbd-pvc-restore" deleted
volumesnapshot.snapshot.storage.k8s.io "rbd-pvc-snapshot" deleted

Shared-filesystem snapshots
The steps are the same as for block storage; see the official documentation.
Chapter 9: PVC Cloning
[root@k8s-master01 rbd]# pwd
/root/rook/cluster/examples/kubernetes/ceph/csi/rbd
[root@k8s-master01 rbd]# cat pvc-clone.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: mysql-pv-claim
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

[root@k8s-master01 rbd]# kubectl create -f pvc-clone.yaml
persistentvolumeclaim/rbd-pvc-clone created
[root@k8s-master01 rbd]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mysql-pv-claim   Bound    pvc-fd8b9860-c0d4-4797-8e1d-1fab880bc9fc   3Gi        RWO            rook-ceph-block   110m
rbd-pvc-clone    Bound    pvc-6dda14c9-9d98-41e6-9d92-9d4ea382c614   3Gi        RWO            rook-ceph-block   4s

Note that in pvc-clone.yaml, dataSource.name is the name of the PVC being cloned (here mysql-pv-claim), storageClassName is the StorageClass of the new PVC, and storage must not be smaller than the size of the original PVC.
Chapter 10: Cleaning Up Test Data

If you plan to keep using Rook, it is enough to clean up the Deployments, Pods, and PVCs created for testing; the cluster can then be put to use directly.

Full cleanup steps:

First delete the Pods that mount PVCs; this may include individually created Pods, Deployments, or other higher-level resources (a consolidated example is sketched after this list).

Then delete the PVCs. After removing every PVC created through the Ceph StorageClasses, it is best to check that the corresponding PVs have been cleaned up as well.

Then delete the snapshots: kubectl delete volumesnapshot XXXXXXXX

Then delete the pools that were created, for both block storage and the filesystem:
a) kubectl delete -n rook-ceph cephblockpool replicapool
b) kubectl delete -n rook-ceph cephfilesystem myfs
Delete the StorageClasses: kubectl delete sc rook-ceph-block rook-cephfs

Delete the Ceph cluster: kubectl -n rook-ceph delete cephcluster rook-ceph

Delete the Rook resources:
a) kubectl delete -f operator.yaml
b) kubectl delete -f common.yaml
c) kubectl delete -f crds.yaml
If the deletion hangs, see the Rook troubleshooting guide:
a) https://rook.io/docs/rook/v1.6/ceph-teardown.html#troubleshooting
Reference: https://rook.io/docs/rook/v1.6/ceph-teardown.html
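A sketch of steps 1 and 2 for the test workloads created in this document (paths assume the cloned rook repository layout used above; adjust them to where your manifests actually live):

cd /rook/cluster/examples/kubernetes
kubectl delete -f mysql.yaml                             # MySQL test from Chapter 5
kubectl delete -f ceph/csi/cephfs/kube-registry.yaml     # registry test from Chapter 6
kubectl delete pvc rbd-pvc-clone                         # clone from Chapter 9, if still present
kubectl get pvc -A                                       # confirm no PVC on rook-ceph-block / rook-cephfs remains
kubectl get pv                                           # confirm the corresponding PVs are gone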