Kubernetes: Using CephFS as Backend Storage
This setup uses Kubernetes' own persistent-volume mechanism: PersistentVolume (PV) and PersistentVolumeClaim (PVC).
PV Access Modes
The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
In the CLI, the access modes are abbreviated to:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
Reclaim Policy
Current reclaim policies are:
Retain – manual reclamation
Recycle – basic scrub (“rm -rf /thevolume/*”)
Delete – associated storage asset such as AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume is deleted
Currently, only NFS and HostPath support recycling. AWS EBS, GCE PD, Azure Disk, and Cinder volumes support deletion.
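The policy is set per PV through the `persistentVolumeReclaimPolicy` field; the PV created in step 2 below does not set it, so it keeps the default for manually created PVs, which is why `Retain` appears in the `kubectl get pv` output later. A minimal fragment:

```yaml
spec:
  persistentVolumeReclaimPolicy: Recycle   # one of: Retain | Recycle | Delete
```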
A PV is a stateful resource object, and can be in one of the following states:
1. Available: unbound and free for use
2. Bound: bound to a PVC
3. Released: its PVC has been deleted, but the resource has not yet been reclaimed
4. Failed: automatic reclamation of the PV failed
1. Create the Secret. In a Secret, every value under the data field must be base64-encoded.
#echo "AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw==" | base64
QVFEY2hYaFlUdGp3SEJBQWsyL0gxWXBhMjNXeEt2NGpBMU5GV3c9PQo=
#vim ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFEY2hYaFlUdGp3SEJBQWsyL0gxWXBhMjNXeEt2NGpBMU5GV3c9PQo=
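One pitfall with the command above: plain `echo` appends a trailing newline, so the newline gets base64-encoded into the key (that is what the trailing `o=` in the output encodes), and Ceph authentication can fail if the decoded key does not match the Ceph key exactly. `echo -n` or `printf '%s'` avoids this; a sketch of the difference:

```shell
# The Ceph key from the article; printf '%s' emits it without a trailing newline.
KEY="AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw=="

echo "$KEY"        | base64   # ends in "PQo=" -- encodes key + stray newline
printf '%s' "$KEY" | base64   # ends in "PQ==" -- encodes the key alone

# Round trip: decoding the clean encoding returns exactly the original key.
printf '%s' "$KEY" | base64 --decode
```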
2. Create the PV. A PV can only be network storage; it does not belong to any node, but it can be accessed from every node. A PV is defined independently of any Pod. At present, a PV only supports specifying storage capacity.
#vim ceph-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  cephfs:                # a cephfs volume takes no fsType; the filesystem is CephFS itself
    monitors:
      - 172.16.100.5:6789
      - 172.16.100.6:6789
      - 172.16.100.7:6789
    path: /opt/eshop_dir/eshop
    user: admin
    secretRef:
      name: ceph-secret
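For quick debugging it can also help to bypass PV/PVC and mount the same CephFS share directly in a pod spec; a minimal sketch (the pod name and busybox image are illustrative, the volume source fields mirror the PV above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-direct-test       # illustrative name
spec:
  containers:
  - name: test
    image: busybox               # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cephfs
      mountPath: /mnt/cephfs
  volumes:
  - name: cephfs
    cephfs:                      # same volume source as in the PV above
      monitors:
      - 172.16.100.5:6789
      path: /opt/eshop_dir/eshop
      user: admin
      secretRef:
        name: ceph-secret
```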
3. Create the PVC.
#vim ceph-pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
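The claim requests 8Gi and binds to the 10Gi PV because the capacity and access mode are compatible; if several PVs could satisfy the claim, Kubernetes may pick any of them. To pin this claim to the PV created above, `spec.volumeName` can be set explicitly; a sketch:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs
spec:
  volumeName: cephfs        # bind to the "cephfs" PV above, and no other
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
```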
4. View the PV and PVC.
#kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM            AGE
cephfs    10Gi       RWX           Retain          Bound     default/cephfs   2d
#kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
cephfs    Bound     cephfs    10Gi       RWX           2d
5. Create an RC. This is only a test example.
#vim ceph-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: cephfstest
  labels:
    name: cephfstest
spec:
  replicas: 4
  selector:
    name: cephfstest
  template:
    metadata:
      labels:
        name: cephfstest
    spec:
      containers:
      - name: cephfstest
        image: 172.60.0.107/pingpw/nginx-php:v4
        env:
        - name: GET_HOSTS_FROM
          value: env
        ports:
        - containerPort: 81
        volumeMounts:
        - name: cephfs
          mountPath: "/opt/cephfstest"
      volumes:
      - name: cephfs
        persistentVolumeClaim:
          claimName: cephfs
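ReplicationController is the legacy controller type; on current clusters the same smoke test is usually written as a Deployment. A sketch of the equivalent manifest (same illustrative image and claim as above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfstest
spec:
  replicas: 4
  selector:
    matchLabels:
      name: cephfstest
  template:
    metadata:
      labels:
        name: cephfstest
    spec:
      containers:
      - name: cephfstest
        image: 172.60.0.107/pingpw/nginx-php:v4   # private-registry image from the article
        ports:
        - containerPort: 81
        volumeMounts:
        - name: cephfs
          mountPath: /opt/cephfstest
      volumes:
      - name: cephfs
        persistentVolumeClaim:
          claimName: cephfs          # all replicas share the same RWX CephFS claim
```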
6. View the pods.
#kubectl get pod -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP            NODE
cephfstest-09j37   1/1       Running   0          2d        10.244.5.16   kuber-node03
cephfstest-216r6   1/1       Running   0          2d        10.244.3.25   kuber-node01
cephfstest-4sjgr   1/1       Running   0          2d        10.244.4.26   kuber-node02
cephfstest-p2x7c   1/1       Running   0          2d        10.244.6.22   kuber-node04
Summary
With a Secret carrying the Ceph admin key, a CephFS-backed PV, and a matching PVC, pods scheduled on any node can mount the same CephFS path read-write (ReadWriteMany).