Kubernetes 10 - POD Storage Volumes


In Docker, storage volumes are defined per container; in k8s, a storage volume is defined on the POD and is backed by a directory on the node (or by network / distributed storage).

Because the volume belongs to the POD rather than to any single container, every container in the POD can mount it. The volume is effectively attached to the POD's Pause (infrastructure) container, so data in it survives container restarts for as long as the POD exists.

10.1 Volume types

The volume types that a POD supports can be listed with kubectl explain pods.spec.volumes. The common ones are:

  1. HostPath: a file or directory on the node where the POD runs; data in a HostPath volume is not deleted with the POD, but it stays on that node.
  2. Local: a local disk, partition, or directory on the node, similar to HostPath.
  3. EmptyDir: a temporary directory created for the POD and deleted together with the POD.
  4. Network and cloud storage: iSCSI, NFS, CIFS, glusterfs, cephfs, EBS (AWS), Disk (Azure), and so on.

10.2 Using volumes in a POD

In K8S a volume is declared once at the POD level (spec.volumes); each container in the POD then mounts it through spec.containers.volumeMounts.

  • Mount options for a container: kubectl explain pods.spec.containers.volumeMounts
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts <[]Object>       # which volumes to mount into the container's filesystem
      mountPath <string>          # path inside the container to mount the volume at
      mountPropagation <string>   # how mounts are propagated between host and container
      name <string>               # name of the volume; must match a name under spec.volumes
      readOnly <boolean>          # mount the volume read-only
      subPath <string>            # sub-path inside the volume to mount instead of its root
      subPathExpr <string>        # like subPath, but may expand environment variables via $(VAR_NAME)
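Each of these fields can be inspected in more detail with kubectl explain; a quick sketch, assuming a working kubectl context:

$ kubectl explain pods.spec.containers.volumeMounts --recursive   # list all volumeMounts sub-fields at once
$ kubectl explain pods.spec.containers.volumeMounts.subPath       # show the documentation of a single field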

10.3 Node-local volumes

10.3.1 hostPath

A hostPath volume mounts a file or directory from the node's filesystem into the POD. When the POD is deleted the data remains on the node, but a POD rescheduled to a different node will not see it.

kubernetes.io/docs/concep

  • hostPath volume fields: kubectl explain pods.spec.volumes.hostPath
path <string>   # path on the node's filesystem
type <string>   # how the path is checked or created before mounting

Values of the hostPath type field:
  DirectoryOrCreate   # if nothing exists at the path, an empty directory is created (mode 0755, owned by the Kubelet)
  Directory           # a directory must already exist at the path
  FileOrCreate        # if nothing exists at the path, an empty file is created (mode 0644, owned by the Kubelet)
  File                # a file must already exist at the path
  Socket              # a UNIX socket must already exist at the path
  CharDevice          # a character device must already exist at the path
  BlockDevice         # a block device must already exist at the path
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: default
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:                        # which volumes this container mounts
    - name: webstore                     # mount the volume named webstore
      mountPath: /usr/share/nginx/html   # directory inside the container to mount it at
      readOnly: false                    # not read-only
  volumes:                               # storage volumes belong to the POD (not to a container)
  - name: webstore                       # name of the storage volume object
    hostPath:                            # hostPath type storage volume
      path: /data/myapp                  # directory on the host
      type: DirectoryOrCreate            # create it if it does not exist
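A quick way to check the behaviour described above; a sketch only, where the manifest file name hostpath-pod.yaml and the file index.html are assumptions for illustration:

$ kubectl apply -f hostpath-pod.yaml                          # create the POD from the manifest above
$ kubectl get pod myapp -o wide                               # find out which node it was scheduled to
$ echo "hello from the node" > /data/myapp/index.html         # run on that node: the hostPath directory now exists
$ kubectl exec myapp -- cat /usr/share/nginx/html/index.html  # the same file is visible inside the container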

10.3.2 gitRepo volume

A gitRepo volume uses the contents of a git repository as storage: when the POD is created, the repository is cloned and mounted into the container as a volume.

It is actually based on emptyDir; changes made to the volume are not pushed back to the git repository.

Note: git must be installed on every node that may run the POD, because the clone/pull is performed on the node.

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: gitrepo
  name: gitrepo
spec:
  containers:
  - image: nginx:latest
    name: gitrepo
    volumeMounts:
    - name: gitrepo
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gitrepo
    gitRepo:
      repository: "https://gitee.com/rocket049/mysync.git"
      revision: "master"
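To verify that the repository contents were actually pulled into the volume; a sketch, assuming the manifest is saved as gitrepo-pod.yaml:

$ kubectl apply -f gitrepo-pod.yaml
$ kubectl exec gitrepo -- ls /usr/share/nginx/html   # should list the files cloned from the repository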

10.3.3 emptyDir cache volume

An emptyDir volume uses a directory on the host as the mount point; when the POD's life cycle ends, the data in it is lost. Its big advantage is that it can also be backed by memory instead of disk.

It is useful when two containers in the same POD need to share some data.

  • Define the emptyDir parameters:
    kubectl explain pods.spec.volumes.emptyDir
medium    <string>  # "" means store on the node's disk (default); "Memory" means use a memory-backed filesystem
sizeLimit <string>  # maximum amount of storage the volume may use
  • Usage example
apiVersion: v1
kind: Pod
metadata:
  name: pod-volume-demo
  namespace: default
  labels:
    app: myapp
    tier: frontend
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
    ports:
    - name: http
      containerPort: 80
    - name: https
      containerPort: 443
  - name: busybox
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: html
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do date >> /data/index.html; sleep 10; done"
  volumes:
  - name: html
    emptyDir:            # emptyDir: {} would mean disk-backed with no capacity constraint
      medium: ""         # "" = use the node's disk
      sizeLimit: 1536Mi  # limit the volume size
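Because both containers mount the same emptyDir volume, the dates appended by the busybox container are immediately visible to the myapp (nginx) container. A minimal check; a sketch, assuming the manifest is saved as emptydir-pod.yaml:

$ kubectl apply -f emptydir-pod.yaml
$ kubectl exec pod-volume-demo -c myapp -- cat /usr/share/nginx/html/index.html   # read the shared file via the nginx container
$ curl http://$(kubectl get pod pod-volume-demo -o jsonpath='{.status.podIP}')/index.html   # run from a node that can reach pod IPs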

10.4 Network storage

Network storage lives outside the pod and the node: the data does not disappear when the pod or the node does, and any node can mount it over the network.

10.4.1 nfs

An NFS export can be mounted by any node, so when a POD is rescheduled to another node it still sees the same data.

  • Set up the NFS server (reachable from every k8s node):
$ yum install nfs-utils                      # install the NFS tools
$ mkdir -p /data/volumes                     # directory that will be exported as the volume
$ echo '/data/volumes 172.16.100.0/16(rw,no_root_squash)' >> /etc/exports   # export it to the k8s node subnet
$ systemctl start nfs                        # start the NFS service
$ ss -tnl                                    # NFS should now be listening on TCP port 2049
  • On a k8s node, test that the export can be mounted:
$ yum install nfs-utils
$ mount -t nfs 172.16.100.104:/data/volumes /mnt
  • nfs volume fields: kubectl explain pods.spec.volumes.nfs
path     <string>   # exported path on the NFS server
readOnly <boolean>  # mount read-only
server   <string>   # address of the NFS server
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs-demo
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    nfs:
      path: /data/volumes
      server: 172.16.100.104

Note: every node that may run this POD must have nfs-utils installed (yum install nfs-utils); otherwise the kubelet cannot mount the NFS volume and the POD will fail to start.
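To confirm the NFS volume works end to end; a sketch, where the manifest file name pod-vol-nfs-demo.yaml and the file index.html are assumptions for illustration:

$ echo "NFS backed page" > /data/volumes/index.html     # run on the NFS server: put a page into the exported directory
$ kubectl apply -f pod-vol-nfs-demo.yaml                 # create the POD from the manifest above
$ curl http://$(kubectl get pod pod-vol-nfs-demo -o jsonpath='{.status.podIP}')/   # the page is served by nginx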

10.5 Distributed storage

Distributed storage provides storage that is decoupled from the node life cycle and is more robust than simple network storage: it is distributed and highly available. Its drawback is that it is complex to configure. With network storage such as NFS, a user only needs the address of the export assigned to the POD; with distributed storage, the user would have to understand all of the storage system's configuration parameters before being able to use it.

Therefore K8S provides two abstractions, PV (PersistentVolume) and PVC (PersistentVolumeClaim), so that ordinary users do not have to care about the underlying storage parameters; they only declare how much persistent storage they need.

Generally PV and PVC are bound as a pair. A PV is a cluster-wide (global) resource, while a PVC belongs to a namespace. Once a PV is bound by a PVC, PVCs from other namespaces can no longer bind it. The binding request is made by the PVC, and a PV that has been claimed by a PVC is in the bound state.

Once a PVC is bound to a PV, PODs in the PVC's namespace can declare persistentVolumeClaim type volumes referring to it, and their containers mount those volumes through volumeMounts.

Whether a persistentVolumeClaim volume allows multiple readers and writers depends on the access modes defined on the PV: single-node read-write (ReadWriteOnce), many-node read-write (ReadWriteMany), and many-node read-only (ReadOnlyMany).

When a POD is no longer needed, we delete it and also delete its PVC. What happens to the PV then is governed by its reclaim policy: delete removes the PV, retain does nothing.

10.5.1 PersistentVolume

A PV is a piece of storage described by the administrator; it is a cluster-level (global) resource specifying storage type, size, and access modes. Its life cycle is independent of any Pod: destroying a Pod that uses it has no effect on the PV.

See: kubectl explain PersistentVolume.spec

  • Define the exports on the NFS server in /etc/exports, then re-export them:
/data/volumes/v1 172.16.100.0/16(rw,no_root_squash)
/data/volumes/v2 172.16.100.0/16(rw,no_root_squash)
/data/volumes/v3 172.16.100.0/16(rw,no_root_squash)
/data/volumes/v4 172.16.100.0/16(rw,no_root_squash)
/data/volumes/v5 172.16.100.0/16(rw,no_root_squash)
exportfs -arv
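The exports can be double-checked from any k8s node before the PVs are created; a quick sketch:

$ showmount -e 172.16.100.104    # should list /data/volumes/v1 ... /data/volumes/v5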
  • Define PersistentVolume objects in k8s backed by these NFS exports (fields: kubectl explain PersistentVolume.spec.nfs):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-001
  labels:
    name: pv001
spec:
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  nfs:
    path: /data/volumes/v1
    server: 172.16.100.104
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-002
  labels:
    name: pv002
spec:
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 2Gi
  nfs:
    path: /data/volumes/v2
    server: 172.16.100.104
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-003
  labels:
    name: pv003
spec:
  accessModes:
  - ReadWriteMany
  - ReadWriteOnce
  capacity:
    storage: 3Gi
  nfs:
    path: /data/volumes/v3
    server: 172.16.100.104
kubectl get persistentvolume
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv-001   1Gi        RWO,RWX        Retain           Available                                   3m38s
pv-002   2Gi        RWO,RWX        Retain           Available                                   3m38s
pv-003   3Gi        RWO,RWX        Retain           Available                                   3m38s

10.5.2 PersistentVolumeClaim

A PVC is a namespaced resource through which a user requests (claims) a PV.

  • PVC spec fields: kubectl explain PersistentVolumeClaim.spec
accessModes       <[]string>  # requested access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
dataSource        <Object>    # populate the volume from a source such as a Volume Snapshot
resources         <Object>    # minimum resources (storage size) the PersistentVolume must provide
selector          <Object>    # label selector used to filter candidate PersistentVolumes
storageClassName  <string>    # name of the StorageClass the claim requests
volumeMode        <string>    # mode of the PersistentVolume (Filesystem or Block)
volumeName        <string>    # bind a specific PersistentVolume by name instead of using the selector
  • Referencing a PVC from a POD's volumes: kubectl explain pods.spec.volumes.persistentVolumeClaim
persistentVolumeClaim
  claimName <string>   # name of the PVC in the same namespace
  readOnly  <boolean>  # mount read-only
  • Define a PersistentVolumeClaim (kubectl explain PersistentVolumeClaim.spec):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany       # the chosen PV must support many-node read-write
  resources:            # resources requested from the PV
    requests:
      storage: 2Gi      # the PV must offer at least 2Gi
  • Use the PVC in a pod: declare a persistentVolumeClaim volume under volumes and mount it with volumeMounts:
apiVersion: v1
kind: Pod
metadata:
  name: pod-vol-nfs-demo
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: my-pvc    # name of the PVC defined above
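After applying the PVC and the POD, the binding can be verified; a sketch, output will differ per cluster:

$ kubectl get pvc my-pvc                 # STATUS should be Bound; VOLUME shows which PV was chosen
$ kubectl get pv                         # the chosen PV switches from Available to Bound
$ kubectl describe pod pod-vol-nfs-demo  # the Volumes section shows the claim the pod is using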

10.5.3 StorageClass

Creating a PV by hand for every PVC is tedious, so k8s provides StorageClass: when a PVC requests storage from a StorageClass, a matching PV is provisioned dynamically instead of having to be pre-created by the administrator.

A StorageClass relies on a provisioner that can talk to the backend storage (CephFS, NFS, Ceph RBD, and so on), typically through a RESTful management interface, so that PVs can be created on demand.

10.6 StorageClass with Ceph RBD

10.6.1 Ceph preparation

  • Install the Ceph client tools on every k8s node, then create a pool and a client key for k8s on the Ceph cluster:
yum install -y ceph-common   # install the Ceph client tools (ceph-common) on every k8s node
ceph osd pool create kube 4096          # create a pool named kube
ceph osd pool ls                        # verify the pool exists
ceph auth get-or-create client.kube mon 'allow r' osd 'allow rwx pool=kube' -o /etc/ceph/ceph.client.kube.keyring
ceph auth list                          # verify client.kube has access to the kube pool
scp /etc/ceph/ceph.client.kube.keyring node1:/etc/ceph/   # copy the keyring to every k8s node (repeat for each node)

10.6.2 rbd-provisioner

  • From around 1.12, the containerized kube-controller-manager image no longer ships the rbd tool, so the in-tree RBD provisioner cannot be used by a StorageClass; deploy the external rbd-provisioner instead:
https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd/deploy/rbac   # RBAC and deployment manifests for the rbd-provisioner
$ git clone https://github.com/kubernetes-incubator/external-storage.git        # fetch the rbd-provisioner manifests
$ cat >> external-storage/ceph/rbd/deploy/rbac/clusterrole.yaml <<EOF           # allow rbd-provisioner to read the ceph secrets
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "list", "watch"]
EOF
$ kubectl apply -f external-storage/ceph/rbd/deploy/rbac/                       # deploy the rbd-provisioner
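Before creating the StorageClass it is worth confirming the provisioner actually came up; a sketch, the namespace depends on the manifests used:

$ kubectl get pods --all-namespaces | grep rbd-provisioner   # the provisioner pod should be Running
$ kubectl logs <rbd-provisioner-pod-name>                    # should show it registering as provisioner ceph.com/rbd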

10.6.3 StorageClass

  • Create the CephX key secrets used by the provisioner and the kubelet:
https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd/examples   # example manifests for using ceph rbd with the rbd-provisioner
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-admin-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64   (the keyring value, base64-encoded)
  key: QVFER3U5TmM1NXQ4SlJBQXhHMGltdXZlNFZkUXRvN2tTZ1BENGc9PQ==
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.kube | base64   (the keyring value, base64-encoded)
  key: QVFCcUM5VmNWVDdQRlJBQWR1NUxFNzVKeThiazdUWVhOa3N2UWc9PQ==
  • Define the StorageClass that uses the rbd-provisioner:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 172.16.100.9:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret
  userSecretNamespace: kube-system
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
  • Create a PersistentVolumeClaim that requests storage from the ceph-rbd StorageClass:
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-rbd-pvc
spec:
  storageClassName: ceph-rbd
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
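If dynamic provisioning works, a PV is created automatically for this claim; a sketch:

$ kubectl get pvc ceph-rbd-pvc   # STATUS becomes Bound once the provisioner has created a PV
$ kubectl get pv                 # shows an automatically created PV with STORAGECLASS ceph-rbd
$ rbd ls kube                    # run on a ceph client: the backing RBD image appears in the kube pool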
  • Use the PVC in a POD; the PVC is referenced by name from persistentVolumeClaim:
---
apiVersion: v1
kind: Pod
metadata:
  name: ceph-sc-pvc-demo
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: pvc-volume
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: pvc-volume
    persistentVolumeClaim:
      claimName: ceph-rbd-pvc
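Finally, the POD should start with the RBD-backed volume mounted at the nginx document root; a sketch:

$ kubectl get pod ceph-sc-pvc-demo
$ kubectl exec ceph-sc-pvc-demo -- df -h /usr/share/nginx/html   # shows the rbd device mounted with ext4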

github.com/redhatxl/aw