Deploying a K8S Cluster with Kubeadm

Reference source: https://www.kubernetes.org.cn/5462.html

Author: https://www.zhuyongci.com

Email: zhuyongci@hotmail.com

PS: If you find anything I misunderstood or got wrong, please point it out by email or in the comments. Thanks!

This article describes a deployment based on kubernetes-1.14.7, with docker-ce-18.09.9 as the container engine.

Basic Planning

Typical Architecture

kubernetes_three_nodes_explan

IP Address and Hostname Plan

Hostname         IP Address   Notes
MY-K8S-MASTER    10.0.0.70    API-Server and other control-plane components
MY-K8S-NODE1     10.0.0.71    Worker node 1
MY-K8S-NODE2     10.0.0.72    Worker node 2

Host Configuration Requirements

Host             CPU          Memory                Notes
MY-K8S-MASTER    2 or more    2 GB or more          Required
MY-K8S-NODE      2 or more    As large as possible  Optional

Basic System Optimization

Hostname Configuration

Set the hostname on each of the three nodes according to the plan

# Master node 10.0.0.70
hostnamectl set-hostname MY-K8S-MASTER
# Node 1
hostnamectl set-hostname MY-K8S-NODE1
# Node 2
hostnamectl set-hostname MY-K8S-NODE2

Local HOSTS Resolution

Run on all nodes

cat <<EOF >> /etc/hosts
10.0.0.70 MY-K8S-MASTER
10.0.0.71 MY-K8S-NODE1
10.0.0.72 MY-K8S-NODE2
EOF
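
As a quick sanity check (my own addition, not in the original), confirm that the names resolve locally:

# Each planned hostname should map to the expected IP
getent hosts MY-K8S-MASTER MY-K8S-NODE1 MY-K8S-NODE2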

Disable the Firewall

Run on all nodes. If you need to keep the firewall enabled, open the required ports instead.

systemctl stop firewalld && systemctl disable firewalld

Disable SELinux

Run on all nodes

# Disable immediately
setenforce 0
# Disable permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

Disable Swap (for physical machines)

Run on all nodes

# Immediately turn off all swap partitions
swapoff -a
# Permanently disable: edit /etc/fstab and comment out the swap entry
sed -ri 's/^UUID(.*)swap(.*)/\#UUID\1swap\2/g' /etc/fstab
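
To confirm that swap is really off (a quick check of my own, not from the article):

# Should print an empty summary once all swap is disabled
swapon -s
# The Swap line in the memory summary should show 0
free -m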

Configure Kernel Parameters

# Add a kernel parameter configuration file
# Kernel IP forwarding, and pass bridged traffic through iptables
cat > /etc/sysctl.d/k8s_network.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply immediately
sysctl --system
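
On a minimal install the net.bridge.* keys above may be missing until the br_netfilter kernel module is loaded. A small sketch of my own to load it now and on every boot (the persistence file name is my own choice):

# Load the bridge netfilter module so the net.bridge.* sysctl keys exist
modprobe br_netfilter
# Persist the module across reboots
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Re-apply the sysctl settings
sysctl --system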

YUM Repository Configuration

First, back up the two local base repository files

gzip /etc/yum.repos.d/CentOS-Base.repo
gzip /etc/yum.repos.d/epel.repo

Switch the package sources to mirror addresses inside China

# The two base repositories
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Configure the docker repository
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install epel-release first for the EPEL GPG key, so YUM does not complain about a missing key later
yum install epel-release -y
# Install some required tools
yum install yum-utils device-mapper-persistent-data lvm2 -y
# Add the Kubernetes repository hosted in China
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Clean the local YUM cache
yum clean all
# Rebuild the cache (optional)
yum makecache fast
# If prompted to import a key during this process, answer yes
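
Optionally (my own check, not in the original), confirm that the new repositories are active:

# The base, epel, docker-ce and kubernetes repos should all be listed
yum repolist enabled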

Start the Deployment

Install Docker CE

Run on all nodes

# The last release of the docker 18.x series
yum install docker-ce-18.09.9 -y
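
If you want a different build of the 18.09 series, the available versions can be listed first (an optional step of my own):

# List every docker-ce version published in the configured repo
yum list docker-ce --showduplicates | sort -r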

Configure Docker

Run on all nodes

Configure the cgroup driver of the Docker engine; we use systemd to limit and manage the resources of container processes. Open (or create) the daemon.json configuration file.

# Open the file
vim /etc/docker/daemon.json
# This also tunes the registry: a mirror accelerator inside China is added, so later image pulls go through the domestic mirror first

Paste in the configuration below and mind the JSON syntax. Here we use the domestic Docker image mirror provided by DaoCloud; Alibaba Cloud, NetEase and others offer similar services.

{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
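
If you would rather not edit the file interactively, the same configuration can be written with a heredoc in the style used elsewhere in this article (a sketch; the mirror address is the same one shown above):

mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF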

Start Docker

Run on all nodes

systemctl start docker && systemctl enable docker
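
To verify that the systemd cgroup driver from daemon.json took effect (an optional check of my own):

# Should report the cgroup driver as systemd
docker info | grep -i cgroup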

Install the Basic Kubernetes Tools

kubelet: manages the container lifecycle by calling the Docker API (create, modify, delete, etc.); it is controlled by the API-SERVER.

kubeadm: a tool for bootstrapping a containerized K8S cluster, which greatly reduces deployment complexity.

kubectl: the command-line tool for controlling the cluster and managing resources (create, modify, delete, etc.); it mainly interacts with the API-SERVER.

Run the following commands on all nodes

# The last release of the kubernetes 1.14 series
yum install kubelet-1.14.7 kubeadm-1.14.7 kubectl-1.14.7 -y
# Enable kubelet at boot
systemctl enable kubelet
# Note: do not start kubelet yet; it would not start successfully anyway
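
A quick way to confirm that the expected versions were installed (my own optional check):

# All three should report 1.14.7
kubeadm version -o short
kubectl version --client --short
kubelet --version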

Deploy the API-Server Node

Perform the following steps on the MY-K8S-MASTER node

  1. Initialize the K8S cluster

    # Parameter explanations:
    # --kubernetes-version:            declares the Kubernetes version
    # --apiserver-advertise-address:   the IP address advertised by the api-server
    # --image-repository:              pull images from a registry inside China; required, otherwise the images cannot be pulled
    # --service-cidr:                  defines the Service virtual IP range
    # --pod-network-cidr:              defines the Pod network range, which must match the network configured in flannel
    kubeadm init --kubernetes-version=1.14.7 --apiserver-advertise-address=10.0.0.70 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
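    # Optional, my own addition (not part of the original article): the control-plane
    # images can be pre-pulled before running init, using the same repository flag,
    # so that a slow registry does not stall the init step
    kubeadm config images pull --kubernetes-version=1.14.7 --image-repository registry.aliyuncs.com/google_containers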
    
  2. On success, the access token and related information are printed at the end, as shown below

    kubeadm_init1

    The final output tells us explicitly how to manage the cluster and how to join node machines to the K8S cluster.

  3. Record the token information; it will be used whenever nodes are added later (do not run it now; it will be run on the NODE machines later)

    # The token is unique, so yours will be different
    kubeadm join 10.0.0.70:6443 --token uw7o6r.sbzs7hc4hxwhnxmc \
        --discovery-token-ca-cert-hash sha256:cca7fd3b83eb3313ebc493ac1675e8678bf73ff2af503eaee9ca11295773756e 
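    # My own note (not from the original): if the token is lost or has expired
    # (tokens are valid for 24 hours by default), a fresh join command can be
    # regenerated on the master at any time
    kubeadm token create --print-join-command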
    
  4. Configure the kubectl tool for managing the cluster

    # These commands are copied from the final part of the kubeadm output
    # Note: to be added
    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    # Check the status with kubectl
    kubectl get nodes
    # Because the network plugin (flannel) is not configured yet, the master node shows NotReady
    # It becomes Ready after completing step 5 below
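    # Optional extra checks (my own addition): cluster endpoints and system pods;
    # the coredns pods will stay Pending until the flannel step below is completed
    kubectl cluster-info
    kubectl get pods -n kube-system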
    
  5. Deploy the flannel network plugin, which provides cross-host container communication

    Project: https://github.com/coreos/flannel

    # Big pitfall here: the default flannel image registry must be replaced
    # You can skip this step and copy the contents of my kube-flannel.yml file below instead
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    # Replace the registry with the Aliyun-hosted domain
    sed -i 's/quay.io\/coreos/registry.cn-beijing.aliyuncs.com\/imcto/g' kube-flannel.yml
    # Note: you can use my modified configuration file below directly
    

    Modify the flannel configuration. The kube-flannel.yml content below is derived from the official manifest; it contains sections for several system architectures. All of the image registry addresses have been replaced, and because our platform is amd64, a network-interface parameter has also been added to the amd64 DaemonSet. If your node pool includes arm64 or machines of other architectures, you can make the same kind of changes for those target platforms as well. (Note: in a multi-NIC environment you must specify the name of the interface to bridge.)

    #File: kube-flannel.yml
    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
    # Modify the parameters below to change the default network range
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            # If there are multiple NICs, the interface parameter must be added; it is harmless even with a single NIC
            # On systems without interface-name tuning the NIC may be named something like ens33
            - --iface=eth0
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-arm64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-arm64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-arm
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-arm
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-ppc64le
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - ppc64le
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-ppc64le
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-ppc64le
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-s390x
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - s390x
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-s390x
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-s390x
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    

    After saving this file, you can apply it directly (if your network connectivity is good)

    kubectl apply -f kube-flannel.yml
    

    PS: I recommend pulling the image manually first, otherwise you have no idea whether the pull succeeded; I was stuck in this pit for a long time... 😂

    # This approach is for cases where network connectivity is poor
    # Pull the image manually so you can see whether the pull succeeds
    docker pull registry.cn-beijing.aliyuncs.com/imcto/flannel:v0.11.0-amd64
    # Run this only after the pull has succeeded
    kubectl apply -f kube-flannel.yml
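    # Optional (my own addition): watch the flannel pods come up; the label matches
    # the app: flannel label defined in the manifest above
    kubectl -n kube-system get pods -l app=flannel -o wide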
    
  6. Check whether the API-Server on the MY-K8S-MASTER node is ready

    kubectl get nodes
    

    As shown below, the master node has been discovered

    master_ready

Deploy the Node Services

Run the following command on all node machines

kubeadm join 10.0.0.70:6443 --token uw7o6r.sbzs7hc4hxwhnxmc \
    --discovery-token-ca-cert-hash sha256:cca7fd3b83eb3313ebc493ac1675e8678bf73ff2af503eaee9ca11295773756e 

After waiting a few minutes, log in to the MY-K8S-MASTER node and run the following command to verify that the two NODE machines have been discovered

# Run on the master node
kubectl get nodes

If you see output like the following, your nodes have joined the cluster and the cluster is now usable

nodes_ready

Note: kubeadm reset wipes the existing cluster configuration; use it with caution!

Create a Test Project

Using nginx as an example, we create a K8S Deployment resource. Create a file named nginx_dep.yaml and paste in the following content

apiVersion: extensions/v1beta1
# Resource type declaration
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  # Number of replicas
  replicas: 3
  template:
    metadata:
      labels:
      # Label definition; the svc resource is associated through this label
        app: nginx
    spec:
    # Container definition
      containers:
      - name: nginx
        image: nginx:latest
        # Exposed port
        ports:
        - containerPort: 80
        # Resource limit options
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m

Create the Service resource: add a file named nginx_svc.yaml and paste in the following content

apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 80
    # Externally exposed port; the container service can be reached at any node's IP:30080
      nodePort: 30080
      targetPort: 80
  selector:
  # Label selector, associates the Service with the deployment
    app: nginx

Save the files above in a dedicated directory to make clear that they belong to the same project and are related, then run the following commands to create the resources

# Create the nginx Deployment resource
kubectl create -f nginx_dep.yaml
# Create the nginx Service resource
kubectl create -f nginx_svc.yaml
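
Before opening a browser you can confirm from the master that the resources exist and the pods are running (optional checks of my own, not part of the original article):

# The Deployment, its pods and the NodePort Service
kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx -o wide
kubectl get svc myweb
# Probe the NodePort from the command line (any node IP works)
curl -I http://10.0.0.71:30080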

After a short wait, open a browser and visit http://10.0.0.71:30080 to see the nginx test page. The address of any node in the cluster will work.

nginx_deployment_test

Deploy the Dashboard

Project: https://github.com/kubernetes/dashboard

Reference: https://blog.csdn.net/Excairun/article/details/88989706

This is an official Kubernetes project; similar tools include VMware's Octant.

It mainly provides a friendly GUI for creating all kinds of Kubernetes resources. Of course, if you are comfortable with development, you can also call the Kubernetes API to build an in-house automation platform.

To be continued......
