Everything below is a worked example.

1) emptyDir, ephemeral storage

When the pod is deleted, the volume is deleted with it, so there is no persistence. The data actually lives on the node the pod is scheduled to, backed by the container runtime (Docker here).
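As a side note (my own sketch, not part of the experiment below), emptyDir also accepts a medium and a sizeLimit, e.g. to back the volume with RAM instead of node disk:

```yaml
# Hypothetical fragment: a tmpfs-backed emptyDir with a size cap
  volumes:
  - name: volume1
    emptyDir:
      medium: Memory     # use tmpfs (node RAM) instead of node disk
      sizeLimit: 64Mi    # the pod is evicted if usage exceeds this limit
```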

[root@master01 volume]# cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: volume-test
  name: volume-test
  namespace: project
spec:
  volumes: 
  - name: volume1 
    emptyDir: {}
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: volume
    resources: {}
    volumeMounts: 
    - name: volume1 
      mountPath: /usr/share/nginx/html
  nodeSelector:
    storage: ssd 
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}


[root@master01 volume]# kubectl exec -n project  -ti volume-test  -- /bin/bash
root@volume-test:/usr/share/nginx/html# echo "1234567" >> test.html
root@volume-test:/usr/share/nginx/html# cat test.html  


[root@master01 volume]# kubectl get pod -n project  -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
volume-test   1/1     Running   1          20m   10.244.196.185   node01   <none>           <none>

[root@master01 volume]# curl 10.244.196.185/test.html
1234567
Check on the node:
[root@node01 ~]# find / -name test.html
/var/lib/kubelet/pods/3bbb6928-6c0b-4da8-9114-29c28c8ecbf9/volumes/kubernetes.io~empty-dir/volume1/test.html
^C
[root@node01 ~]# cat /var/lib/kubelet/pods/3bbb6928-6c0b-4da8-9114-29c28c8ecbf9/volumes/kubernetes.io~empty-dir/volume1/test.html
1234567


Deleting the pod loses the data:
[root@master01 volume]# kubectl delete  -f pod1.yaml 
pod "volume-test" deleted
[root@node01 ~]# cat /var/lib/kubelet/pods/3bbb6928-6c0b-4da8-9114-29c28c8ecbf9/volumes/kubernetes.io~empty-dir/volume1/test.html
cat: /var/lib/kubelet/pods/3bbb6928-6c0b-4da8-9114-29c28c8ecbf9/volumes/kubernetes.io~empty-dir/volume1/test.html: No such file or directory

2) hostPath, node-local storage

Persistent, but usable only through a path on one machine: it mounts a directory from the host. The directory does not need to be created by hand (the pod runs as root, so newly created directories and files are owned by root). Data is not synchronized to other nodes, but it does persist.
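One detail worth noting (a sketch of the standard hostPath API, not used in the experiment below): hostPath takes an optional type field that controls validation and creation of the host path:

```yaml
# Variant of the volumes section from pod2.yaml with an explicit type
  volumes:
  - name: volume1
    hostPath:
      path: /data
      type: DirectoryOrCreate   # create /data on the node if it is missing
```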

[root@master01 volume]# kubectl create  -f pod2.yaml 
pod/volume-test created
[root@master01 volume]# cat pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: volume-test
  name: volume-test
  namespace: project
spec:
  volumes: 
  - name: volume1 
    hostPath: 
      path: /data
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: volume
    resources: {}
    volumeMounts: 
    - name: volume1 
      mountPath: /usr/share/nginx/html
  nodeSelector:
    storage: ssd 
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master01 volume]# kubectl get pods -n project  -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
volume-test   1/1     Running   0          18s   10.244.196.177   node01   <none>           <none>
[root@master01 volume]# curl 10.244.196.177  # nothing there yet
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>


Create something on node01:
[root@node01 ~]# echo 'welcome to emporerlinux!!!' > /data/index.html
[root@node01 ~]# 
[root@master01 volume]# curl 10.244.196.177
welcome to emporerlinux!!!

After the pod is deleted, the data remains:
[root@master01 volume]# kubectl delete  -f pod2.yaml 
pod "volume-test" deleted
[root@node01 ~]# cat /data/index.html 
welcome to emporerlinux!!!

3) Shared storage

External shared storage works too: iSCSI, NFS, CephFS, FC, and so on; run kubectl explain pod.spec.volumes to see everything that is supported. External storage is mounted by the physical node, not by the pod itself. Note that iSCSI cannot share data across nodes: it is a block device that must be formatted before mounting (single-node). All of these provide data persistence.
[root@master01 ~]# kubectl explain pod.spec.volumes.nfs   # when unsure, drill down field by field

Prerequisite: set up an NFS server yourself.
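A minimal sketch of that prerequisite, matching the export options shown later by exportfs -v (assumes a CentOS/RHEL host at 192.168.5.140; adapt to your environment):

```shell
# On the NFS server (hypothetical setup commands, run as root)
yum install -y nfs-utils
mkdir -p /data
echo '/data 192.168.5.*(rw,async,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -rav   # re-export and list the shares
```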

[root@master01 volume]# cat pod3.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: volume-test
  name: volume-test
  namespace: project
spec:
  volumes: 
  - name: volume1 
    nfs:
      server: 192.168.5.140
      path: /data
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: volume
    resources: {}
    volumeMounts: 
    - name: volume1 
      mountPath: /usr/share/nginx/html
  nodeSelector:
    storage: ssd 
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master01 volume]# kubectl apply -f pod3.yaml 
pod/volume-test created
[root@master01 volume]# kubectl get pod -n project 
NAME          READY   STATUS    RESTARTS   AGE
volume-test   1/1     Running   0          8s
[root@master01 volume]# kubectl get pod -n project  -o wide
NAME          READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
volume-test   1/1     Running   0          14s   10.244.196.190   node01   <none>           <none>

On the NFS server:
[root@emporer data]# echo "welcome to emporerlinux" >> index.html
[root@emporer data]# exportfs -v
/data           192.168.5.*(async,wdelay,hide,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

Test:
[root@master01 volume]# curl 10.244.196.190
welcome to emporerlinux


Delete the pod; the data persists:
[root@master01 volume]# kubectl delete  -f pod3.yaml 
pod "volume-test" deleted
[root@emporer data]# cat index.html 
welcome to emporerlinux

4) PV, persistent volumes

A PV (PersistentVolume) is created by the cluster administrator to connect Kubernetes to backend storage. It is a cluster-wide resource, visible from every namespace.

A PVC (PersistentVolumeClaim) is created by the project administrator inside a namespace and is bound to a PV.
Preconditions for a PVC to bind to a PV:
1. storage size: the PVC's request must be less than or equal to the PV's capacity
2. accessModes: the PVC's mode must match the PV's
   ReadWriteMany: read-write on multiple nodes
   ReadWriteOnce: read-write on a single node
   ReadOnlyMany: read-only on multiple nodes
3. storageClassName: either leave it undefined on both, or the PV's storageClassName must exactly match the PVC's
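To illustrate rule 1 with a counter-example (my own sketch, not part of the experiment): a PVC that requests more than any available PV offers simply stays Pending:

```yaml
# Hypothetical claim: requests 20Gi, but test-pv below only offers 10Gi,
# so this PVC would remain Pending with nothing to bind.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: too-big-pvc
  namespace: project
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  storageClassName: slow
```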

PV reclaim policies (persistentVolumeReclaimPolicy):
1. Recycle: reusable; when the PVC bound to the PV is deleted, the data on the PV is scrubbed too, and the PV returns to Available, ready to bind another PVC
2. Retain: when the bound PVC is deleted, the data is kept, the PV moves to Released, and it cannot be bound by another PVC

Reclaim-policy experiment

Create a PV:

[root@master01 volume]# kubectl create  -f pv.yaml 
persistentvolume/test-pv created
[root@master01 volume]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Available           slow                    5s
[root@master01 volume]# cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow 
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /data
    server: 192.168.5.140
[root@master01 volume]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Available           slow                    47s

Create the PVC (PVC -> PV is a one-to-one mapping):

[root@master01 volume]# kubectl get pvc -n project 
No resources found in project namespace.
[root@master01 volume]# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: project
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: slow
    
[root@master01 volume]# kubectl create  -f pvc.yaml 
persistentvolumeclaim/test-pvc created
[root@master01 volume]# kubectl get pvc -n project 
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   10Gi       RWX            slow           12s
[root@master01 volume]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Bound    project/test-pvc   slow                    7m23s

Use the PVC in a pod:

[root@master01 volume]# cat pod4.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: volume-test
  name: volume-test
  namespace: project
spec:
  volumes: 
  - name: volume1 
    persistentVolumeClaim:
      claimName: test-pvc
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: volume
    resources: {}
    volumeMounts: 
    - name: volume1 
      mountPath: /usr/share/nginx/html
  nodeSelector:
    storage: ssd 
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@master01 volume]# kubectl create  -f pod4.yaml 
pod/volume-test created
[root@master01 ~]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Bound    project/test-pvc   slow                    24m

[root@master01 ~]# kubectl get pvc -n project 
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   10Gi       RWX            slow           17m
[root@master01 ~]# kubectl get pod -n project 
NAME          READY   STATUS    RESTARTS   AGE
volume-test   1/1     Running   0          8m11s
[root@master01 ~]# kubectl get pod -n project -o wide
NAME          READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
volume-test   1/1     Running   0          8m15s   10.244.196.191   node01   <none>           <none>
[root@master01 ~]# curl 10.244.196.191
welcome to emporerlinux
[root@master01 ~]# 

The actual chain: the PV points at the storage on the NFS server, the PVC binds to the PV, and the pod consumes the PVC, so it ultimately reaches the /data directory on the NFS server.

Reclaim-policy test: after the PVC is deleted, the PV should return to Available.
Right now it is in the Bound state:

[root@master01 ~]# kubectl get  pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Bound    project/test-pvc   slow                    29m

[root@master01 volume]# kubectl delete  -f pod4.yaml 
pod "volume-test" deleted
[root@master01 volume]# kubectl get pvc -n project 
NAME       STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    test-pv   10Gi       RWX            slow           23m
[root@master01 volume]# kubectl delete  -f pvc.yaml 
persistentvolumeclaim "test-pvc" deleted
[root@master01 volume]# kubectl get pvc -n project 
No resources found in project namespace.
[root@master01 volume]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM              STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Released   project/test-pvc   slow                    30m
[root@master01 volume]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Available           slow                    31m

The data on the NFS server is gone:
[root@emporer ~]# cd /data
[root@emporer data]# ls

For Retain mode, you can simply kubectl edit the PV and change the policy to Retain:

    path: /data
    server: 192.168.5.140
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  
  [root@master01 volume]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Recycle          Available           slow                    34m
[root@master01 volume]# kubectl edit pv test-pv 
persistentvolume/test-pv edited
[root@master01 volume]# kubectl get pv
NAME      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
test-pv   10Gi       RWX            Retain           Available           slow                    35m
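For scripting, the same change can be made non-interactively with kubectl patch instead of kubectl edit (equivalent effect):

```shell
kubectl patch pv test-pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```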

Easy enough to understand, so no experiment needed: one is faithful to the end (Retain), the other shrugs and lets anyone else have it (Recycle).

The experiments above used bare pods; in practice you would combine them with workload controllers.
There is also Ceph, FC, and other backends to integrate; Kubernetes moves fast, so the routine is always: read the built-in help with explain, copy an example, and adapt it to your environment.
Stitch the individual YAML files together and you get one complete manifest; beyond that there is Helm and the like.

Author: emporer
License: unless otherwise stated, all articles on this site are licensed under CC BY-NC-SA 4.0. Please credit Emporer-Linux when reposting.