K8s Storage: Ceph Mount Demo

Mount methods:

1) Ceph RBD
2) CephFS

1) Example: mounting Ceph RBD as the backend storage in K8s:
1. Create a Ceph storage pool:

ceph osd pool create k8s 64
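
You can confirm the pool exists with ceph osd pool ls detail. On Ceph Luminous and later, a new pool usually also has to be tagged with the application that will use it before clients may write to it; a minimal extra step (not part of the original demo):

ceph osd pool ls detail

ceph osd pool application enable k8s rbd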

2. Generate the base64-encoded key:

grep key /etc/ceph/ceph.client.admin.keyring | awk '{printf "%s", $NF}' | base64

QVFDTVUzaGRLZWtCR1JBQTMrSHcvSXNYZDJkZVZGWGtuM051VXc9PQ==
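
An equivalent way to obtain the same value, assuming client.admin exists in the cluster keyring, is to ask Ceph for the bare key directly:

ceph auth get-key client.admin | base64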
3. Create the Ceph secret:

cat ceph-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDTVUzaGRLZWtCR1JBQTMrSHcvSXNYZDJkZVZGWGtuM051VXc9PQ==

kubectl create -f ceph-secret.yaml
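
Before moving on, the secret can be double-checked; note that the key field must contain the base64 of the bare key only, not of the whole keyring file:

kubectl get secret ceph-secret -o yaml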

4. Create a StorageClass (this enables dynamic provisioning: the class acts as a template from which PVs are created automatically):

cat ceph-class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-web
provisioner: kubernetes.io/rbd
parameters:
  monitors: ceph-node01:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s
  userId: admin
  userSecretName: ceph-secret

kubectl create -f ceph-class.yaml

kubectl get pv
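
At this point kubectl get pv normally shows nothing: with dynamic provisioning a PV only appears after a PVC references the class. The class itself can be inspected with:

kubectl get storageclass

kubectl describe storageclass ceph-web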

5. Create the PVC:

cat PersistentVolumeClaim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-web
  resources:
    requests:
      storage: 2G

kubectl create -f PersistentVolumeClaim.yaml

kubectl get pvc

6. Create the Pod:

cat ceph-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
    - name: ceph-nginx
      image: xxx.xxx.xxx/hj-demo/nginx:latest
      command: ["sleep", "60000"]
      volumeMounts:
        - name: ceph-rbd-vol1
          mountPath: /mnt/ceph-rbd-pvc/busybox
          readOnly: false
  volumes:
    - name: ceph-rbd-vol1
      persistentVolumeClaim:
        claimName: ceph

kubectl create -f ceph-pod.yaml

kubectl get pods
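
If the Pod hangs in ContainerCreating, the attach/mount events usually explain why; note that with the in-tree kubernetes.io/rbd plugin the node scheduled to run the Pod needs the rbd client (ceph-common) installed so kubelet can map the image:

kubectl describe pod ceph-pod1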

7. Verify read/write:

kubectl get secret

NAME TYPE DATA AGE
ceph-secret kubernetes.io/rbd 1 26m
default-token-bfqfq kubernetes.io/service-account-token 3 62d
[root@ceph-node01 k8s-ceph]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
ceph Bound pvc-80963a50-daba-11e9-ae24-005056903bd0 1908Mi RWO ceph-web 20m
[root@ceph-node01 k8s-ceph]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ceph-pod1 1/1 Running 0 19m
[root@ceph-node01 k8s-ceph]# kubectl exec -ti ceph-pod1 -c ceph-nginx /bin/bash
root@ceph-pod1:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 112G 30G 82G 27% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/centos-root 112G 30G 82G 27% /etc/hosts
shm 64M 0 64M 0% /dev/shm
/dev/rbd0 1.9G 5.7M 1.8G 1% /mnt/ceph-rbd-pvc/busybox
tmpfs 1.9G 12K 1.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1.9G 0 1.9G 0% /proc/acpi
tmpfs 1.9G 0 1.9G 0% /proc/scsi
tmpfs 1.9G 0 1.9G 0% /sys/firmware
root@ceph-pod1:/# cd /mnt/ceph-rbd-pvc/busybox/
root@ceph-pod1:/mnt/ceph-rbd-pvc/busybox# ls
lost+found test.txt
root@ceph-pod1:/mnt/ceph-rbd-pvc/busybox# cat test.txt
abc
root@ceph-pod1:/mnt/ceph-rbd-pvc/busybox# rm -f test.txt
root@ceph-pod1:/mnt/ceph-rbd-pvc/busybox# ls
lost+found
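
The write half of the test is not shown above; the test.txt seen in the listing could have been created inside the container with a simple write such as (hypothetical, using the same mount path):

echo abc > /mnt/ceph-rbd-pvc/busybox/test.txt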
8. Check Ceph storage usage:

ceph df

GLOBAL:
SIZE AVAIL RAW USED %RAW USED
150 GiB 147 GiB 3.2 GiB 2.12
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
.rgw.root 1 1.1 KiB 0 46 GiB 4
default.rgw.control 2 0 B 0 46 GiB 8
default.rgw.meta 3 1.2 KiB 0 46 GiB 8
default.rgw.log 4 0 B 0 46 GiB 207
rbd_pool 6 19 B 0 46 GiB 2
.rgw 7 0 B 0 46 GiB 0
.rgw.control 8 0 B 0 46 GiB 0
.rgw.gc 9 0 B 0 46 GiB 0
.rgw.buckets 10 0 B 0 46 GiB 0
.rgw.buckets.index 11 0 B 0 46 GiB 0
.rgw.buckets.extra 12 0 B 0 46 GiB 0
.log 13 0 B 0 46 GiB 0
.intent-log 14 0 B 0 46 GiB 0
.usage 15 0 B 0 46 GiB 0
.users 16 0 B 0 46 GiB 0
.users.email 17 0 B 0 46 GiB 0
.users.swift 18 0 B 0 46 GiB 0
.users.uid 19 0 B 0 46 GiB 0
default.rgw.buckets.index 20 0 B 0 46 GiB 1
k8s 23 41 MiB 0.09 46 GiB 20
9. Tear down the storage:

// Delete the PV, PVC, and the corresponding Pod

kubectl delete -f ./

// Then delete the RBD image that was created automatically

rbd rm kubernetes-dynamic-pvc-88e17248-daba-11e9-ada3-005056903bd0 -p k8s
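
The image name is generated by the provisioner; if you are not sure which image belonged to the deleted PVC, list the images in the pool first (the name above is the one from this demo):

rbd ls -p k8s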

2) Example: mounting CephFS as the backend storage in K8s:
1. Create the CephFS pools:

ceph osd pool create fs_data 8

ceph osd pool create fs_metadata 8

ceph osd lspools

// Create a CephFS filesystem

ceph fs new cephfs fs_metadata fs_data

// Check

ceph fs ls
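
CephFS only becomes usable once an MDS daemon is active for the new filesystem; a quick sanity check (assuming an MDS has already been deployed in this cluster):

ceph mds stat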

2. Create the PV, PVC, and Pod:

[root@ceph-node01 ceph-demo]# cat cephfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  cephfs:
    monitors:
      - ceph-node01:6789
    path: /
    user: admin
    readOnly: false
    secretRef:
      name: ceph-secret
  persistentVolumeReclaimPolicy: Recycle
[root@ceph-node01 ceph-demo]# cat cephfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeName: cephfs-pv
  resources:
    requests:
      storage: 1Gi
[root@ceph-node01 ceph-demo]# cat cephfs-pods.yaml
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod2
spec:
  containers:
    - name: ceph-nginx
      image: xxx.xxx.xxx/hj-demo/nginx:latest
      command: ["sleep", "60000"]
      volumeMounts:
        - name: cephfs-1
          mountPath: /mnt/cephfs
          readOnly: false
  volumes:
    - name: cephfs-1
      persistentVolumeClaim:
        claimName: cephfs-pvc

[root@ceph-node01 ceph-demo]# kubectl apply -f ./
[root@ceph-node01 ceph-demo]# kubectl get pods
[root@ceph-node01 ceph-demo]# kubectl exec -ti ceph-pod2 /bin/bash
root@ceph-pod2:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 112G 21G 91G 19% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/centos-root 112G 21G 91G 19% /etc/hosts
10.21.17.204:6789:/ 46G 0 46G 0% /mnt/cephfs
shm 64M 0 64M 0% /dev/shm
tmpfs 1.9G 12K 1.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 1.9G 0 1.9G 0% /proc/acpi
tmpfs 1.9G 0 1.9G 0% /proc/scsi
tmpfs 1.9G 0 1.9G 0% /sys/firmware

root@ceph-pod2:/# cd /mnt/cephfs/
root@ceph-pod2:/mnt/cephfs# cat k8s-cephfs-test.txt
hello,k8s,cephfs
The mount works and reads and writes succeed.
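
As with RBD, data written through the Pod lands in the backing pool and can be confirmed from the Ceph side, e.g. by re-running ceph df and watching the USED column of the fs_data pool grow:

ceph df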

3. Tear down:

// Delete the PV, PVC, and the corresponding Pod

kubectl delete -f ./

// Then clean up the CephFS pools.
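
Removing a CephFS filesystem and its pools needs a few explicit safety flags; a minimal sketch, assuming the MDS daemons have been stopped first and mon_allow_pool_delete is set to true:

ceph fs rm cephfs --yes-i-really-mean-it

ceph osd pool delete fs_metadata fs_metadata --yes-i-really-really-mean-it

ceph osd pool delete fs_data fs_data --yes-i-really-really-mean-it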
