Deploying a Kubernetes cluster

What is Kubernetes?

Kubernetes is an open-source platform for managing containerized workloads and services. It supports declarative configuration as well as automation, and has a large, rapidly growing ecosystem.


This procedure will allow you to quickly and easily deploy a three-node Kubernetes (k8s) cluster from three CentOS 7 instances deployed on the same network, in an advanced zone.


One of these three instances will be our master node and the other two will be our worker nodes. Put simply, the master node is the one from which we manage the Kubernetes cluster (the container orchestrator) through its API, while the worker nodes are the ones that will run the pods, i.e. the containers (Docker in our case).


We will assume that your three CentOS 7 instances are already deployed and that you have SSH access to them to run the commands below.


Here is the configuration of our example, which will be used as a reference throughout this procedure:


Master node: "k8s-master" / 10.1.1.16
First worker node: "k8s-worker01" / 10.1.1.169
Second worker node: "k8s-worker02" / 10.1.1.87


System preparation and Kubernetes installation

The following operations must be performed as root (or with the necessary sudo privileges) on all instances (master and workers).


First, populate the /etc/hosts file on each of your instances so that they can resolve each other's hostnames (this is normally already the case in an advanced zone network, where the virtual router acts as the DNS resolver).


In our example, this gives the following /etc/hosts file on each of our three instances (adapt it with the names and IPs of your own instances):

cat /etc/hosts
127.0.0.1   localhost
::1         localhost

10.1.1.16 k8s-master
10.1.1.169 k8s-worker01
10.1.1.87 k8s-worker02
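

To confirm that name resolution works, you can then ping each node by its hostname from every instance (a quick check using our example hostnames; adjust to yours):

ping -c 1 k8s-master
ping -c 1 k8s-worker01
ping -c 1 k8s-worker02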


Enable the bridge module and the iptables rules dedicated to it with the following three commands:

modprobe bridge
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
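
You can verify that the parameter has been taken into account (a quick check; it should return "= 1"):

sysctl net.bridge.bridge-nf-call-iptables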


Add the Docker YUM repository:

cat <<EOF > /etc/yum.repos.d/docker.repo
[docker-ce-stable]
name=Docker CE Stable - \$basearch
baseurl=https://download.docker.com/linux/centos/7/\$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF


Add the Kubernetes YUM repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


Install Docker:

yum install -y docker-ce


Then install the required Kubernetes packages:

yum install -y kubeadm kubelet kubectl
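
Note: the walkthrough below was captured with Kubernetes v1.12.2. If you want to reproduce it exactly, you can pin the package versions instead (a sketch, assuming these versions are still published in the repository):

yum install -y kubeadm-1.12.2 kubelet-1.12.2 kubectl-1.12.2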


Edit the kubelet systemd drop-in file (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) and add the following line in the "[Service]" section:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"


Which gives:

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS


Reload the systemd configuration, enable, then start the docker and kubelet services with the following three commands:


systemctl daemon-reload
systemctl enable docker kubelet
systemctl start docker kubelet
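
You can check that both services are indeed running (a quick check; each should report "active"):

systemctl is-active docker kubelet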


Disable system swap (kubelet does not support swap memory; if you do not disable it, you will get an error in the pre-flight checks when initializing the cluster with kubeadm):

swapoff -a


Also remember to comment out/remove the swap line in the /etc/fstab file of each of your instances, e.g.:

#/dev/mapper/vg01-swap  swap            swap    defaults                0       0
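
If you prefer, here is a one-liner sketch that comments out any active swap entry in /etc/fstab (review the file afterwards, before any reboot):

sed -ri '/\sswap\s/s/^([^#])/#\1/' /etc/fstab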

Initializing the Kubernetes cluster

The following operations are to be performed on the master node instance only.


Start the initialization of the Kubernetes cluster with the following command, taking care to replace the value of the "--apiserver-advertise-address=" parameter with the IP address of your master instance:

kubeadm init --apiserver-advertise-address=<IP of your master instance> --pod-network-cidr=10.244.0.0/16

Note: do not change the network IP "10.244.0.0/16" given in the "--pod-network-cidr=" parameter, as this value indicates that we will use the Flannel CNI plugin to manage the network part of our pods.


Here is what the output of this command should look like after a successful cluster initialization:

[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=10.1.1.16 --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master.cs437cloud.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.16]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master.cs437cloud.internal localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master.cs437cloud.internal localhost] and IPs [10.1.1.16 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 32.502898 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master.cs437cloud.internal as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master.cs437cloud.internal as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master.cs437cloud.internal" as an annotation
[bootstraptoken] using token: e83pes.u3igpccj2metetu8
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c


We now carry out the requested actions to finalize the initialization of our cluster.


We create the directory and the configuration file in the home directory of our user (root in our case):

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
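

If you operate the cluster as a regular (non-root) user, also adjust the ownership of the file, as suggested in the initialization output:

sudo chown $(id -u):$(id -g) $HOME/.kube/config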


We deploy the Flannel pod network to our cluster:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Note: keep the last command provided at the end of the initialization output ("kubeadm join ..."); we will run it later on our worker instances to join them to the cluster.
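
If you ever lose it (the bootstrap token expires after 24 hours by default), you can generate a fresh join command from the master at any time:

kubeadm token create --print-join-command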


We can now perform a first check of our cluster from the master instance.

输入命令 "kubectl get nodes "来检查当前在你的集群中存在的节点。

[root@k8s-master ~]# kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
k8s-master.cs437cloud.internal   Ready    master   41m   v1.12.2

Note: only your master node is present for now, which is normal since we have not yet added the other nodes to the cluster.


输入命令 "kubectl get pods --all-namespaces "来检查当前存在于你的集群中的pods/containers。

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-fwxj9                                 1/1     Running   0          41m
kube-system   coredns-576cbf47c7-t86s9                                 1/1     Running   0          41m
kube-system   etcd-k8s-master.cs437cloud.internal                      1/1     Running   0          41m
kube-system   kube-apiserver-k8s-master.cs437cloud.internal            1/1     Running   0          41m
kube-system   kube-controller-manager-k8s-master.cs437cloud.internal   1/1     Running   0          41m
kube-system   kube-flannel-ds-amd64-wcm7v                              1/1     Running   0          84s
kube-system   kube-proxy-h94bs                                         1/1     Running   0          41m
kube-system   kube-scheduler-k8s-master.cs437cloud.internal            1/1     Running   0          40m

Note: only the pods corresponding to the Kubernetes components required by our master node (kube-apiserver, etcd, kube-scheduler, etc.) are present.


We can check the health of these components with the following command:

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

Adding the worker nodes to the cluster

Actions to be performed on the worker instances/nodes only.

On each of your worker instances (do not do this on your master instance), run the "kubeadm join ..." command provided at the end of the cluster initialization above:

[root@k8s-worker01 ~]# kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "10.1.1.16:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.16:6443"
[discovery] Requesting info from "https://10.1.1.16:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.1.1.16:6443"
[discovery] Successfully established connection with API Server "10.1.1.16:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-worker01.cs437cloud.internal" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@k8s-worker02 ~]# kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
2. Provide the missing builtin kernel ipvs support

[discovery] Trying to connect to API Server "10.1.1.16:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.16:6443"
[discovery] Requesting info from "https://10.1.1.16:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.1.1.16:6443"
[discovery] Successfully established connection with API Server "10.1.1.16:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-worker02.cs437cloud.internal" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Checking the cluster status

Actions to be performed from the master instance/node.


Check that your worker nodes have been added to the cluster by running the "kubectl get nodes" command again:

[root@k8s-master ~]# kubectl get nodes
NAME                               STATUS   ROLES    AGE    VERSION
k8s-master.cs437cloud.internal     Ready    master   46m    v1.12.2
k8s-worker01.cs437cloud.internal   Ready    <none>   103s   v1.12.2
k8s-worker02.cs437cloud.internal   Ready    <none>   48s    v1.12.2

Note: our two worker nodes (k8s-worker01 and k8s-worker02) are listed, so they have indeed been added to our cluster.


Now let's run the "kubectl get pods --all-namespaces" command again:

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-fwxj9                                 1/1     Running   0          46m
kube-system   coredns-576cbf47c7-t86s9                                 1/1     Running   0          46m
kube-system   etcd-k8s-master.cs437cloud.internal                      1/1     Running   0          46m
kube-system   kube-apiserver-k8s-master.cs437cloud.internal            1/1     Running   0          46m
kube-system   kube-controller-manager-k8s-master.cs437cloud.internal   1/1     Running   0          46m
kube-system   kube-flannel-ds-amd64-724nl                              1/1     Running   0          2m6s
kube-system   kube-flannel-ds-amd64-wcm7v                              1/1     Running   0          6m31s
kube-system   kube-flannel-ds-amd64-z7mwg                              1/1     Running   3          70s
kube-system   kube-proxy-8r7wg                                         1/1     Running   0          2m6s
kube-system   kube-proxy-h94bs                                         1/1     Running   0          46m
kube-system   kube-proxy-m2f5r                                         1/1     Running   0          70s
kube-system   kube-scheduler-k8s-master.cs437cloud.internal            1/1     Running   0          46m

Note: there are now as many "kube-flannel" and "kube-proxy" pods/containers as there are nodes in our cluster.

Deploying a first pod

We are now going to deploy our first pod in our Kubernetes cluster.


To keep things simple, we choose to deploy a pod named "nginx" (without replication), using the "nginx" image:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created


If we check, it does appear in the output of the command listing the pods of our cluster:

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
default       nginx-55bd7c9fd-5bghl                                    1/1     Running   0          104s
kube-system   coredns-576cbf47c7-fwxj9                                 1/1     Running   0          57m
kube-system   coredns-576cbf47c7-t86s9                                 1/1     Running   0          57m
kube-system   etcd-k8s-master.cs437cloud.internal                      1/1     Running   0          57m
kube-system   kube-apiserver-k8s-master.cs437cloud.internal            1/1     Running   0          57m
kube-system   kube-controller-manager-k8s-master.cs437cloud.internal   1/1     Running   0          57m
kube-system   kube-flannel-ds-amd64-724nl                              1/1     Running   0          13m
kube-system   kube-flannel-ds-amd64-wcm7v                              1/1     Running   0          17m
kube-system   kube-flannel-ds-amd64-z7mwg                              1/1     Running   3          12m
kube-system   kube-proxy-8r7wg                                         1/1     Running   0          13m
kube-system   kube-proxy-h94bs                                         1/1     Running   0          57m
kube-system   kube-proxy-m2f5r                                         1/1     Running   0          12m
kube-system   kube-scheduler-k8s-master.cs437cloud.internal            1/1     Running   0          57m

It appears at the top of the list, in a namespace different from "kube-system", because it is not a component specific to the operation of Kubernetes itself.


You can also avoid displaying the pods specific to the kube-system namespace by running the same command without the "--all-namespaces" parameter:

[root@k8s-master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx-55bd7c9fd-vs4fq     1/1     Running   0          3d2h


To display the labels:

[root@k8s-master ~]# kubectl get pods --show-labels
NAME                      READY   STATUS    RESTARTS   AGE    LABELS
nginx-55bd7c9fd-ckltn     1/1     Running   0          8m2s   app=nginx,pod-template-hash=55bd7c9fd


We can also check our deployment with the following command:

[root@k8s-master ~]# kubectl get deployments
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            1           93m


We have now deployed and started an nginx pod, but it is not yet reachable from the outside. To make it externally accessible, we need to expose our pod's port by creating a service (of type NodePort) with the following command:

[root@k8s-master ~]# kubectl create service nodeport nginx --tcp=80:80
service/nginx created
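
Note: instead of "kubectl create service", you could equivalently have exposed the existing deployment directly; kubectl then creates the same kind of NodePort service (do one or the other, not both, as both create a service named "nginx"):

kubectl expose deployment nginx --type=NodePort --port=80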


Our service has been created:

[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        147m
nginx        NodePort    10.108.251.178   <none>        80:30566/TCP   20s

Note: it listens on port 80/tcp and is accessible/exposed to the outside on port 30566/tcp.
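
You can already test it from inside the network by querying the NodePort on the IP of any node (shown here with our example worker IP and port; yours will differ):

curl -I http://10.1.1.169:30566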


We can obtain the flannel IP of our pod and the name of the node it is currently running on with the following command:

[root@k8s-master ~]# kubectl get pods --selector="app=nginx" --output=wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE                               NOMINATED NODE
nginx-55bd7c9fd-vs4fq   1/1     Running   0          174m   10.244.2.2   k8s-worker02.cs437cloud.internal   <none>

Here our nginx pod has the IP 10.244.2.2 and runs on our node k8s-worker02.


You can also simply run a command in our nginx pod or open a shell into it (very similar to the docker command) with:

[root@k8s-master ~]# kubectl exec -it nginx-55bd7c9fd-vs4fq -- /bin/bash
root@nginx-55bd7c9fd-vs4fq:/#
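
For a one-off command instead of an interactive shell, simply drop the -it flags (shown here with our example pod name):

kubectl exec nginx-55bd7c9fd-vs4fq -- nginx -v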


All that remains is to create your load-balancing rule on your Ikoula One Cloud network to access/expose your web server (the nginx pod):

- Log in to Cloud Ikoula One.

- Go to "Network" in the left-hand vertical menu.

- Click on the network where you deployed your Kubernetes instances, then click "View IP addresses", click on your source NAT IP and go to the "Configuration" tab.

- 点击 "负载平衡 "并创建你的规则,指定一个名称,公共端口 "80 "在我们的例子中,私人端口 "30566 "在我们的例子中(见上文),通过选择一个LB算法(例如轮流),如.NET。


[Image: your Kubernetes instances]


- Tick all your worker instances.


[Image: checking your Kubernetes worker instances]


Test access to your web server/nginx pod from a browser (via the public IP of the network on which you created the LB rule).


[Image: accessing your web server]


Your nginx pod is in fact reachable through any of your nodes; this is made possible by the "kube-proxy" component, which takes care of directing the connection to the node (or nodes, in case of replication) where the pod is running.


You have thus just deployed a basic three-node Kubernetes cluster, with one master and two workers.

Going further

You can go further by deploying the Kubernetes Dashboard, creating persistent volumes for your pods, increasing the number of worker nodes, making the master role redundant for high availability, or even dedicating nodes to certain components such as etcd.
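
For example, scaling our nginx deployment from one to three replicas is a single command; the service then balances traffic across the replicas:

kubectl scale deployment nginx --replicas=3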


Here are some useful links:


https://kubernetes.io/docs/reference/kubectl/cheatsheet/

https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/

https://kubernetes.io/docs/concepts/storage/volumes/

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/

https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/