Creating a StorageClass in Kubernetes breaks down into several steps:
1. Set up the NFS server and export an NFS share; this share is what the provisioner (acting as the "cloud provider") will consume.
yum install nfs-server rpcbind -y
mkdir -p /nfsdata/share
chmod -R 777 /nfsdata/share    # directories need the execute bit; 666 would make them untraversable
chown -R nobody:nobody /nfsdata/share
vim /etc/exports:
/nfsdata/share *(rw,no_root_squash,no_all_squash,sync)
exportfs -r
systemctl enable --now nfs-server rpcbind    # --now both enables and starts the services
showmount -e $nfs_server
For example:
showmount -e 192.168.61.156    # the NFS server's IP address
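Before touching Kubernetes, it is worth confirming the export is mountable from a worker node (a quick sanity check; /mnt/nfstest is just a throwaway mount point, and every node that may run the provisioner or the test pods needs the NFS client installed):
# run on any Kubernetes worker node
yum install nfs-utils -y
mkdir -p /mnt/nfstest
mount -t nfs 192.168.61.156:/nfsdata/share /mnt/nfstest
touch /mnt/nfstest/hello && ls /mnt/nfstest
umount /mnt/nfstest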
2. Create the resource manifest for the nfs-client provisioner:
kubectl create ns nfs-storageclass
touch nfs-client-provisioner.yaml
vim nfs-client-provisioner.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: nfs-storageclass
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.m.daocloud.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          imagePullPolicy: Always
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.61.156    # the NFS server's IP address
            - name: NFS_PATH
              value: /nfsdata/share
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.61.156    # the NFS server's IP address
            path: /nfsdata/share
Create the nfs-client provisioner:
kubectl create -f nfs-client-provisioner.yaml
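Note that the Deployment references the nfs-client-provisioner ServiceAccount, which is only created in step 3, so the pods cannot start until the next step is applied. Once it is, a quick sanity check (pod names in the output will vary):
kubectl get pods -n nfs-storageclass
kubectl logs -n nfs-storageclass deployment/nfs-client-provisioner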
3. Grant RBAC permissions:
touch serviceaccount.yaml
vim serviceaccount.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-storageclass
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner    # ClusterRoleBindings are cluster-scoped, so no namespace here
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-storageclass
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-storageclass
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-storageclass
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-storageclass
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl create -f serviceaccount.yaml
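Verify that the ServiceAccount and bindings are in place; kubectl auth can-i can impersonate the ServiceAccount to confirm the ClusterRole actually grants PV creation:
kubectl get sa,role,rolebinding -n nfs-storageclass
kubectl auth can-i create persistentvolumes --as=system:serviceaccount:nfs-storageclass:nfs-client-provisioner
Once the RBAC objects exist, the provisioner pods from step 2 should transition to Running.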
4. Create the StorageClass:
touch storageclass.yaml
vim storageclass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client    # StorageClass is cluster-scoped, so it takes no namespace
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: delete
kubectl create -f storageclass.yaml
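Confirm that the class was registered. Optionally, you can also mark it as the cluster default so that PVCs which omit storageClassName fall back to it (the annotation below is the standard Kubernetes one; making it the default is not required for this walkthrough):
kubectl get storageclass
kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'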
5. Create test pods:
touch pod.yaml
vim pod.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: nfs-storageclass
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs-client
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nfs-storageclass
spec:
  containers:
    - name: test-pod
      image: nginx
      imagePullPolicy: Always
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/usr/share/nginx/html"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim-1
  namespace: nfs-storageclass
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs-client
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-1
  namespace: nfs-storageclass
spec:
  containers:
    - name: test-pod
      image: nginx
      imagePullPolicy: Always
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/usr/share/nginx/html"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim-1
kubectl create -f pod.yaml
Each pod's persisted data lands under /nfsdata/share/$namespace/$pvc_name, for example:
/nfsdata/share/nfs-storageclass/test-claim/
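To see the whole chain working, write a file through the pod's mount and read it back on the NFS server (index.html is just an illustrative file name):
kubectl get pvc,pv -n nfs-storageclass
kubectl exec -n nfs-storageclass test-pod -- sh -c 'echo hello-from-pod > /usr/share/nginx/html/index.html'
Then on the NFS server:
cat /nfsdata/share/nfs-storageclass/test-claim/index.html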
This setup abstracts the NFS server behind an nfs-client provisioner, and inside the Kubernetes cluster that provisioner is surfaced as a StorageClass. When a pod creates a PVC, the claim asks the provisioner to produce the desired PV: the StorageClass dynamically creates the PV volume the PVC requested, the PVC binds to that PV, and the pod ends up attached to a directory freshly created on the NFS backend.
That is the entire StorageClass workflow.
The reclaim behavior configured in storageclass.yaml is delete: once the pod is deleted and its PVC is removed, the backing PV is reclaimed as well.
Delete-style reclamation is normally something only cloud providers and OpenStack Cinder can support.
The nfs-client provisioner emulates a cloud provider's interface, which is why it can implement delete-style reclamation: when a PVC is deleted, its PV is removed automatically.
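A quick way to watch this reclaim happen (the generated PV names in the output will differ per cluster):
kubectl delete -f pod.yaml
kubectl get pv    # the dynamically provisioned PVs should be gone
On the NFS server, the per-PVC directories under /nfsdata/share/nfs-storageclass/ are removed as well.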
The reclaim setting onDelete: delete in storageclass.yaml can be changed to the retain mode to keep the data.
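A minimal sketch of the retain variant: per the provisioner's documented parameters, onDelete accepts delete or retain, and when the parameter is omitted the provisioner archives the directory instead (renaming it with an archived- prefix):
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  onDelete: retain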
Words of encouragement: even in darkness, trust the light within you to illuminate the road ahead!