Preface
I got Kubernetes up and running previously, but apart from ingress-nginx and the dashboard I hadn't installed any real applications. After reading up on Helm 3, I decided to use it to install Elasticsearch.
Persistent Storage
Pods can be destroyed and recreated at any time, so data in ephemeral storage is lost. And if you mount data out with hostPath, the data cannot follow a pod when it is rescheduled to another node. This is why Kubernetes has the concepts of PV (PersistentVolume) and PVC (PersistentVolumeClaim), and a PVC can use a StorageClass to provision PVs dynamically.
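To make the relationship concrete, here is a minimal sketch of dynamic provisioning, assuming a StorageClass named nfs already exists (which is exactly what we set up below). The claim only references the class; the provisioner behind it creates and binds a matching PV automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  ## the provisioner behind this class creates the PV for us
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi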
Creating an NFS StorageClass
I started with the nfs-server-provisioner chart from the Helm stable repo.
Using nfs-server-provisioner
GitHub: github.com/helm/charts… I read the chart's README on GitHub and got a rough idea of how to use it. The install command:
helm install storageclass-nfs stable/nfs-server-provisioner -f storageclass-config.yml
storageclass-config.yml:
persistence:
  ## enable persistent storage
  enabled: true
  storageClass: "-"
  ## storage size: 30Gi
  size: 30Gi
storageClass:
  ## make this the default StorageClass
  defaultClass: true
nodeSelector:
  ## which node to install on
  kubernetes.io/hostname: instance-8x864u54
Installing it this way did not actually succeed yet: the chart expects us to provide a PV as its backing volume, and that PV has to bind to the PVC generated automatically during the install. So we have to supply a PV manually (nfs-server-pv.yml):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfs-server-provisioner-0
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    ## location on the node where the data lands
    path: /data/k8s/volumes/data-nfs-server-provisioner-0
  claimRef:
    namespace: default
    ## name of the automatically generated PVC
    name: data-storageclass-nfs-nfs-server-provisioner-0
Apply it:
kubectl apply -f nfs-server-pv.yml
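If everything lines up, the PV binds to the provisioner's generated PVC; a quick way to check:

## STATUS should show Bound for both
kubectl get pv data-nfs-server-provisioner-0
kubectl get pvc data-storageclass-nfs-nfs-server-provisioner-0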
At this point I thought I was done, until I deployed the ES chart I had written and it kept failing, so I inspected the pod:
kubectl describe pod elasticsearch-01 -n elasticsearch
What, the volume can't be mounted? I tried many fixes with no luck and went straight to Baidu; in the end I had no choice but to run the following commands on every node:
yum -y install nfs-utils
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
I didn't want to do it this way, but without it the data simply would not mount (it worked, though at the time I didn't understand why).
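In hindsight, the likely reason is that the kubelet performs NFS mounts directly on the host, so every node needs the NFS client utilities. A quick sanity check on a node (assuming CentOS-style packaging, as in the commands above):

## the NFS mount helper must be present
which mount.nfs
## rpcbind must be running for NFS mounts to succeed
systemctl status rpcbind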
After the fix I reinstalled the chart and ran a test (the approach below is borrowed from an online post).
pvc:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
test deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox-test
  labels:
    app.kubernetes.io/name: busybox-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: busybox-deployment
  template:
    metadata:
      labels:
        app.kubernetes.io/name: busybox-deployment
    spec:
      containers:
        - image: busybox
          command:
            - sh
            - -c
            - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep 5; done'
          imagePullPolicy: IfNotPresent
          name: busybox
          volumeMounts:
            - name: nfs
              mountPath: "/mnt"
      volumes:
        - name: nfs
          persistentVolumeClaim:
            claimName: nfs-pvc
Apply both files with kubectl apply -f, then go looking on the node I picked for the mount: the data is there, so the volume really is being persisted.
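Concretely, something along these lines on the node (the exact directory under the hostPath is generated, so treat the path as illustrative):

## the PVC's data lives under the hostPath-backed PV
## the busybox loop rewrites index.html every 5 seconds
find /data/k8s/volumes -name index.html -exec cat {} \;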
Using nfs-client-provisioner
nfs-server-provisioner deploys an NFS server of its own; you create a PV (in my bare-metal case, a hostPath one) to back it, and every PV dynamically created for PVCs using the nfs StorageClass is carved out under that single PV.
nfs-client-provisioner, by contrast, binds to an existing NFS server and creates the StorageClass on top of it.
I chose to run the NFS server on a cloud machine with plenty of disk:
yum -y install nfs-utils
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
mkdir -p /data/k8s
vim /etc/exports
## add this line
/data/k8s *(rw,no_root_squash,sync)
# apply the exports config
exportfs -r
# verify it took effect
exportfs
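Before wiring it into the cluster, it's worth confirming the export is visible from a client; from any machine with nfs-utils installed:

## should list /data/k8s; replace <nfs-server-ip> with the server's address
showmount -e <nfs-server-ip>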
Run the install:
helm install storageclass-nfs stable/nfs-client-provisioner -f nfs-client.yml
nfs-client.yml
storageClass:
  name: nfs
  defaultClass: true
nfs:
  server: *******  ## your server's IP
  path: /data/k8s
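Once the release is up, confirm the class exists and is marked as the default:

## "nfs" should be listed with "(default)" next to it
kubectl get storageclass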
Done!
Creating the ES chart
I'm on Helm 3 (I never used Helm 2, but 3 is notably lightweight); I installed helm directly on my Mac.
Helm official site
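On macOS, assuming Homebrew is available, the install is one line:

brew install helm
## should report a v3.x client
helm version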
First, scaffold the chart locally:
helm create elasticsearch
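helm create generates roughly this layout (details vary slightly across Helm 3 versions):

elasticsearch/
├── Chart.yaml        # chart metadata
├── values.yaml       # default values (edited below)
├── charts/           # subchart dependencies
└── templates/        # templated manifests
    └── _helpers.tpl  # named templates such as elasticsearch.fullname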
values.yml
replicaCount: 3
image:
  repository: docker.elastic.co/elasticsearch/elasticsearch:7.5.2
  pullPolicy: IfNotPresent
ingress:
  host: es.sudooom.com
  name: es-sudooom
service:
  in:
    clusterIP: None
    port: 9300
    name: elasticsearch-in
  out:
    port: 9200
    name: elasticsearch-out
resources:
  limits:
    cpu: 5
    memory: 5Gi
  requests:
    cpu: 1
    memory: 1Gi
pvc:
  name: es-data
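The StatefulSet below mounts an existing claim named by .Values.pvc.name, so the chart also needs a PVC template, which the original chart listing doesn't show. A minimal sketch of a hypothetical templates/pvc.yaml, assuming the nfs StorageClass from earlier and a size I picked arbitrarily:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.pvc.name }}
spec:
  ## assumed: the default "nfs" class created above
  storageClassName: nfs
  ## shared by all replicas over NFS
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi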
deployment.yml (despite the file name, it defines a StatefulSet):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: {{ include "elasticsearch.fullname" . }}
  name: {{ include "elasticsearch.fullname" . }}
  labels:
    {{- include "elasticsearch.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  serviceName: {{ include "elasticsearch.name" . }}
  selector:
    matchLabels:
      {{- include "elasticsearch.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "elasticsearch.selectorLabels" . | nindent 8 }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          readinessProbe:
            httpGet:
              scheme: HTTP
              path: /_cluster/health?local=true
              port: 9200
            initialDelaySeconds: 5
          ports:
            - containerPort: 9200
              name: es-http
            - containerPort: 9300
              name: es-transport
          volumeMounts:
            - name: es-data
              mountPath: /usr/share/elasticsearch/data
            - name: elasticsearch-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
      volumes:
        - name: elasticsearch-config
          configMap:
            name: {{ include "elasticsearch.name" . }}
            items:
              - key: elasticsearch.yml
                path: elasticsearch.yml
        - name: es-data
          persistentVolumeClaim:
            claimName: {{ .Values.pvc.name }}
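The StatefulSet also mounts elasticsearch.yml from a ConfigMap named after the chart, which the original likewise doesn't show. A minimal sketch of a hypothetical templates/configmap.yaml; every setting in it is my assumption, leaning on the headless elasticsearch-in service for discovery and on ${POD_NAME} substitution (Elasticsearch expands environment variables in its config file):

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "elasticsearch.name" . }}
data:
  elasticsearch.yml: |
    cluster.name: elasticsearch
    node.name: ${POD_NAME}
    network.host: 0.0.0.0
    ## assumed: discovery through the headless service
    discovery.seed_hosts: ["elasticsearch-in"]
    ## assumed pod names from the StatefulSet ordinals
    cluster.initial_master_nodes: ["elasticsearch-0","elasticsearch-1","elasticsearch-2"]

With the templates in place, installing into its own namespace matches the kubectl describe command used earlier:

helm install elasticsearch ./elasticsearch -n elasticsearch --create-namespace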