KubeSphere 0: Installing KubeSphere on Kubernetes (native)


1. Requirements

1.1 Notes before installing

Two places where it is easy to go wrong:
  • devops
  • edge computing (KubeEdge)

1.2 Hardware configuration

My environment is listed below. I am testing in virtual machines (VMs), so things run a little slowly. 🤣
  • Docker01: 4 cores, 4 GB RAM
  • Docker02: 4 cores, 4 GB RAM
  • Docker03: 4 cores, 4 GB RAM
  • Docker04: 4 cores, 4 GB RAM
 
Recommended configuration; three machines are perfectly sufficient for this simulation.
  • Master: 4 cores, 8 GB RAM
  • Node1: 8 cores, 16 GB RAM
  • Node2: 8 cores, 16 GB RAM

2. Prerequisites

Kubernetes was already installed in the earlier posts:
  • Docker installation
  • Kubernetes installation
    • kubelet
    • kubeadm
    • kubectl

2.1 Configure a default StorageClass with dynamic provisioning

  • Previously the PV pool was statically provisioned; here we switch the PV pool to dynamic provisioning.

2.1.1 Create the dynamically provisioned PV pool

  1. Create the file sc.yaml. Note: change the two IP addresses (the NFS_SERVER value and the nfs server field) to your own NFS server.
## Creates a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV's contents when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.92.138  ## your own NFS server address
            - name: NFS_PATH
              value: /nfs/data       ## the directory shared by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.92.138   ## your own NFS server address
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  2. Apply the file
# 1. Create the file
root@docker01:/home# vim sc.yaml
# 2. Apply the YAML to create the dynamic provisioner
root@docker01:/home# kubectl apply -f sc.yaml
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
# Check the default StorageClass; it has been created
root@docker01:/home# kubectl get storageclass
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  100s
root@docker01:/home#
  3. Test with a PersistentVolumeClaim
Create the file pvc.yaml with the following content:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  4. Apply and check
# 1. Apply
root@docker01:/home# kubectl apply -f pvc.yaml
persistentvolumeclaim/nginx-pvc created
# 2. Check the PVC; it is in the Bound state
root@docker01:/home# kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-dec83dc6-d623-4296-a633-05dd61936de3   200Mi      RWX            nfs-storage    77s
# Check the PVs; one was generated automatically
root@docker01:/home# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-dec83dc6-d623-4296-a633-05dd61936de3   200Mi      RWX            Delete           Bound    default/nginx-pvc   nfs-storage             2m31s
root@docker01:/home#
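If you want to double-check that dynamic provisioning works end to end, a minimal sketch is to mount the PVC into a throwaway pod and write a file to it. The pod name pvc-test and the busybox image are my own choices for illustration, not part of the original setup.
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test            # hypothetical name, used only for this check
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/test.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc  # the PVC created above
After applying it with kubectl apply, the file should show up under /nfs/data on the NFS server, inside the directory the provisioner created for this PVC.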
 

2.2 metrics-server

The cluster metrics monitoring component.
Create the file metrics.yaml; you can copy it verbatim.
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
  - apiGroups:
      - metrics.k8s.io
    resources:
      - pods
      - nodes
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
  - kind: ServiceAccount
    name: metrics-server
    namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
        - args:
            - --cert-dir=/tmp
            - --kubelet-insecure-tls
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/metrics-server:v0.4.3
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /livez
              port: https
              scheme: HTTPS
            periodSeconds: 10
          name: metrics-server
          ports:
            - containerPort: 4443
              name: https
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readyz
              port: https
              scheme: HTTPS
            periodSeconds: 10
          securityContext:
            readOnlyRootFilesystem: true
            runAsNonRoot: true
            runAsUser: 1000
          volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
        - emptyDir: {}
          name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
# 1. Create the file
root@docker01:/home# vim metrics.yaml
# 2. Apply it
root@docker01:/home# kubectl apply -f metrics.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
# Check whether it is up
root@docker01:/home# kubectl get pod -A
 
Once the installation succeeds, you can use it:
# Check node CPU and memory usage
root@docker01:/home# kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
docker01   453m         11%    2455Mi          64%
docker02   248m         6%     1350Mi          35%
docker03   196m         4%     1897Mi          49%
docker04   182m         4%     1478Mi          38%
# Check pod resource usage
root@docker01:/home# kubectl top pods -A
NAMESPACE              NAME                                        CPU(cores)   MEMORY(bytes)
default                nfs-client-provisioner-89f7c5494-9l54d      3m           6Mi
ingress-nginx          ingress-nginx-controller-67597d74d7-7r98w   3m           128Mi
kube-system            calico-kube-controllers-5d995d45d6-jkz4g    2m           27Mi
kube-system            calico-node-56j9z                           43m          135Mi
kube-system            calico-node-gpr7g                           50m          142Mi
kube-system            calico-node-r96l8                           64m          141Mi
kube-system            calico-node-vtds6                           48m          144Mi
kube-system            coredns-7d89d9b6b8-hr2jl                    4m           17Mi
kube-system            coredns-7d89d9b6b8-mpdbb                    4m           17Mi
kube-system            etcd-docker01                               35m          67Mi
kube-system            kube-apiserver-docker01                     118m         346Mi
kube-system            kube-controller-manager-docker01            31m          69Mi
kube-system            kube-proxy-29f94                            1m           19Mi
kube-system            kube-proxy-4tp47                            1m           28Mi
kube-system            kube-proxy-86tvb                            1m           16Mi
kube-system            kube-proxy-q6lkg                            1m           15Mi
kube-system            kube-scheduler-docker01                     7m           28Mi
kube-system            metrics-server-567b88cd57-pk6d8             7m           14Mi
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-92gxz   1m           5Mi
kubernetes-dashboard   kubernetes-dashboard-576cb95f94-xgftv       1m           34Mi
root@docker01:/home#
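If kubectl top returns errors instead of tables like the ones above, a quick sanity check (my own habit, not part of the original write-up) is to confirm that the metrics APIService is registered and available, and to look at the metrics-server logs:
# Should show v1beta1.metrics.k8s.io with AVAILABLE=True
kubectl get apiservices v1beta1.metrics.k8s.io
# If it is not available, check the metrics-server pod logs
kubectl logs -n kube-system -l k8s-app=metrics-server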

3. Installation

3.1 Download the files

# 1. Download the installer manifest
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/kubesphere-installer.yaml
# 2. Download the cluster configuration file
wget https://github.com/kubesphere/ks-installer/releases/download/v3.2.0/cluster-configuration.yaml

3.2 Modify the cluster configuration file

Specify the features we want to enable in cluster-configuration.yaml.
Refer to the official documentation section "Enable Pluggable Components".

3.2.1 Parameter notes

Edit cluster-configuration.yaml: for whichever feature you need, change false to true to enable it (see the sketch after this list).
  • Monitoring
  • Redis
  • LDAP (Lightweight Directory Access Protocol)
  • System alerting
  • Auditing
  • DevOps
  • Cluster events
  • Logging
  • Network policy
  • ippool: none —> calico
  • App Store
  • Service mesh (microservice governance)
  • KubeEdge edge computing
    • Both ways I tried to configure it still reported errors, and KubeEdge failed to start.
    • Since I am not dealing with edge computing yet, I kept this feature turned off. 😔
  • As of the time of writing (Nov 24, 2021), installation fails on Kubernetes version >= 1.22. See:
In my test, Kubernetes version 1.21.5 works.
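As a sketch of what "change false to true" means in practice, the relevant parts of cluster-configuration.yaml look roughly like the snippet below. The field names are taken from the KubeSphere 3.2.0 configuration as I remember it; double-check them against the file you actually downloaded.
spec:
  common:
    redis:
      enabled: true        # Redis
    openldap:
      enabled: true        # LDAP
  alerting:
    enabled: true          # system alerting
  auditing:
    enabled: true          # auditing
  devops:
    enabled: true          # DevOps
  events:
    enabled: true          # cluster events
  logging:
    enabled: true          # logging
  network:
    networkpolicy:
      enabled: true        # network policy
    ippool:
      type: calico         # was none
  openpitrix:
    store:
      enabled: true        # App Store
  servicemesh:
    enabled: true          # service mesh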

3.3 Install KubeSphere

3.3.1 Run the installer

# 1. Install the KubeSphere installer
kubectl apply -f kubesphere-installer.yaml
# 2. Install the KubeSphere cluster
kubectl apply -f cluster-configuration.yaml

3.3.2 Check the pods

root@docker01:/home# kubectl get pod -A

3.3.3 Check the installation log

Check which step the installation has reached. It is fairly slow, roughly 15-20 minutes, so be patient. 😅
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
If it succeeds, it returns information like the following.
 

Addendum: troubleshooting

  • DevOps failed to start; it turned out to be a certificate problem.

Fix

See section 4 (Troubleshooting) for details. Tip 📢: after the fix, the DevOps pods must be recreated for it to take effect (a sketch follows below).
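A minimal sketch of "recreate the DevOps pods", assuming the default kubesphere-devops-system namespace; the Deployments recreate the pods automatically after they are deleted:
# Delete the DevOps pods so they restart with the corrected certificate
kubectl delete pod -n kubesphere-devops-system --all
# Watch them come back up
kubectl get pod -n kubesphere-devops-system -w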

4. Troubleshooting

Before logging in, it is best to confirm that all pods are running.

4.1 Check pod status

# 1. Check the pods
kubectl get pod -A
Look for the pods that have not started.

4.2 Inspect a specific pod

View the detailed description of the pod:
# Inspect a specific pod
kubectl describe pod -n <namespace> <pod-name>
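Besides describe, the namespace events often point at the failure reason directly. A couple of commands I find handy here (not from the original write-up; substitute your own namespace and pod name):
# Recent events in a namespace, newest last
kubectl get events -n <namespace> --sort-by=.metadata.creationTimestamp
# Only the Events section of a pod's description
kubectl describe pod -n <namespace> <pod-name> | grep -A 20 Events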

4.2.1 Output case 1

It shows that the remote image is still being pulled, which just means the image download is slow.
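If the pod is only waiting on a slow image pull, you can simply watch it until it reaches Running (my own habit, not part of the original write-up):
# Watch all pods; press Ctrl+C once everything shows Running
kubectl get pod -A -w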

4.2.2 Output case 2

  • One problem found during inspection:
kubectl describe pod -n kubesphere-monitoring-system prometheus-k8s-0
The volume mount fails because a secret cannot be found. This is the missing etcd monitoring certificate problem.
To fix it, run the following command to create the certificate secret.

Create the certificate secret (this step is important)

kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
  --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
  --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
  --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key
Afterwards it is best to reboot the machines. The cluster already works without a reboot; it is just that the "check the installation log" step will not show further updates.
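As a sketch of picking up the new secret without a full reboot (assuming only the Prometheus pods were affected), delete them and let their StatefulSet recreate them, then confirm the secret exists:
# Recreate the Prometheus pod so the new secret gets mounted
kubectl delete pod -n kubesphere-monitoring-system prometheus-k8s-0
# Verify the secret is now present
kubectl get secret -n kubesphere-monitoring-system kube-etcd-client-certs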

Check the installation log

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f

5. Access

Before logging in, it is best to confirm that all pods are running.
Access port 30880 on any node, e.g. 192.168.92.138:30880 (see the lookup sketch at the end of this section if you are unsure of the port).
Account: admin
Password: P@88w0rd
On first login you are prompted to change the password.
  • Root123
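If you are not sure which port the console is exposed on, you can look it up from the ks-console Service (30880 is the default NodePort):
# The console Service is a NodePort service in kubesphere-system
kubectl get svc -n kubesphere-system ks-console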
 
 

6. Official demo environment

  • Account: demo1
  • Password: Demo23