{{#seo:
|title=Installing Kubernetes
|titlemode=replace
|keywords=ikoula wiki, ikoula knowledge base, what is kubernetes, kubernetes installation, kubernetes tutorial
|description=Kubernetes can be thought of as a container platform, a microservices platform, a portable cloud platform and much more. Learn how to install it.
|og:type=article
|og:image=https://fr-wiki.ikoula.com/resources/assets/logo_ikwiki.png
}}

==What is Kubernetes?==

'''Kubernetes''' is an open-source platform for managing containerized workloads and services. It favours declarative configuration as well as automation. ''Kubernetes'' is a large and rapidly growing ecosystem.

This procedure will let you quickly and easily deploy a three-node [https://www.ikoula.com/fr/cloud-public/oneclick Kubernetes (k8s)] cluster from three CentOS 7 instances deployed on the same network, in an Advanced zone. One of these three instances will be our master node, the other two will be our worker nodes. Put simply, the master node is the node from which we manage the Kubernetes cluster (the container orchestrator) through its API, while the worker nodes are the nodes that will run the pods, i.e. the containers (Docker in our case).

We assume that your three CentOS 7 instances are already deployed and that you have SSH access to run the commands below.

Here is the configuration used as an example throughout this procedure:

Master node: "k8s-master" / 10.1.1.16<br>
First worker node: "k8s-worker01" / 10.1.1.169<br>
Second worker node: "k8s-worker02" / 10.1.1.87<br>

==System preparation and Kubernetes installation==

The following operations must be performed as root (or with the necessary sudo privileges) on all instances (master and workers).

First, populate the /etc/hosts file on each of your instances so that they can resolve each other's hostnames (this is normally already the case on an Advanced zone network where the virtual router acts as the DNS resolver).

In our example this gives the following /etc/hosts file on our three instances (adapt it with the names and IPs of your own instances):

<syntaxhighlight lang="bash">
cat /etc/hosts
127.0.0.1   localhost
::1         localhost
10.1.1.16   k8s-master
10.1.1.169  k8s-worker01
10.1.1.87   k8s-worker02
</syntaxhighlight>

Enable the bridge module and the iptables rules intended for it with the following three commands:

<syntaxhighlight lang="bash">
modprobe bridge
echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf
</syntaxhighlight>
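Note that modprobe only loads the module for the current boot. A minimal sketch of how to load it automatically at startup, assuming a systemd-based CentOS 7 where systemd-modules-load reads the files in /etc/modules-load.d (the file name "bridge.conf" below is arbitrary):

<syntaxhighlight lang="bash">
# Illustrative sketch: have the bridge module loaded automatically at boot
# (systemd-modules-load picks up any *.conf file placed in this directory)
echo "bridge" > /etc/modules-load.d/bridge.conf
</syntaxhighlight>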
Add the Docker YUM repository:

<syntaxhighlight lang="bash">
cat <<EOF > /etc/yum.repos.d/docker.repo
[docker-ce-stable]
name=Docker CE Stable - \$basearch
baseurl=https://download.docker.com/linux/centos/7/\$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF
</syntaxhighlight>

Add the Kubernetes YUM repository:

<syntaxhighlight lang="bash">
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
</syntaxhighlight>

Install Docker:

<syntaxhighlight lang="bash">
yum install -y docker-ce
</syntaxhighlight>

Then install the required Kubernetes packages:

<syntaxhighlight lang="bash">
yum install -y kubeadm kubelet kubectl
</syntaxhighlight>

Edit the kubelet systemd drop-in file (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf) and add the following line in the "[Service]" section:

<pre>
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
</pre>

So that the file looks like this:

<pre>
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
</pre>
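If you prefer not to edit the file by hand on each node, a small sketch of the same change with sed could look like this (illustrative only; it assumes the line is not already present in the drop-in file):

<syntaxhighlight lang="bash">
# Illustrative sketch: append the cgroup-driver line right after the [Service] header
# of the kubeadm drop-in (run on every node, then check the result with cat)
sed -i '/^\[Service\]/a Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"' \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</syntaxhighlight>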
Reload the systemd configuration, then enable and start the docker and kubelet services with the following three commands:

<syntaxhighlight lang="bash">
systemctl daemon-reload
systemctl enable docker kubelet
systemctl start docker kubelet
</syntaxhighlight>

Disable swap (the kubelet does not support swap memory; if you do not disable it, you will get an error during the pre-flight checks when initializing the cluster with kubeadm):

<syntaxhighlight lang="bash">
swapoff -a
</syntaxhighlight>

Remember to also comment out or remove the swap line in the /etc/fstab file of each of your instances, for example:

<pre>
#/dev/mapper/vg01-swap swap    swap    defaults        0 0
</pre>
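As a convenience, a one-liner along the following lines should comment out any active swap entry in /etc/fstab. This is only a sketch: back the file up first and adapt the pattern if your fstab layout differs.

<syntaxhighlight lang="bash">
# Illustrative sketch: comment out every non-commented fstab line whose mount type is "swap"
cp /etc/fstab /etc/fstab.bak
sed -i '/^[^#].*[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab
</syntaxhighlight>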
==Initializing the Kubernetes cluster==

The following operations are to be performed on the master instance only.

Start the initialization of the Kubernetes cluster with the following command, taking care to replace the value of the "--apiserver-advertise-address=" parameter with the IP address of your master instance:

<syntaxhighlight lang="bash">
kubeadm init --apiserver-advertise-address=<ip of your master instance> --pod-network-cidr=10.244.0.0/16
</syntaxhighlight>

Note: do not change the network "10.244.0.0/16" given in the "--pod-network-cidr=" parameter, as this value indicates that we will use the Flannel CNI plugin to manage the network part of our pods.

Here is what the output of this command should look like once the cluster has been initialized successfully:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=10.1.1.16 --pod-network-cidr=10.244.0.0/16
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [k8s-master.cs437cloud.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.1.1.16]
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [k8s-master.cs437cloud.internal localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s-master.cs437cloud.internal localhost] and IPs [10.1.1.16 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 32.502898 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node k8s-master.cs437cloud.internal as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node k8s-master.cs437cloud.internal as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master.cs437cloud.internal" as an annotation
[bootstraptoken] using token: e83pes.u3igpccj2metetu8
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
</syntaxhighlight>

We then carry out the requested operations to finalize the initialization of our cluster.

We create the directory and the configuration file in the home directory of our user (root in our case):

<syntaxhighlight lang="bash">
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
</syntaxhighlight>

We deploy the Flannel pod network for our cluster:

<syntaxhighlight lang="bash">
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
</syntaxhighlight>

Note: keep the last command provided in the output of the initialization command ("kubeadm join ...") so that you can run it later on your worker instances to join them to the cluster.
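If you lose this command or the bootstrap token expires before the workers have been joined (kubeadm tokens are time-limited, 24 hours by default), a fresh join command can normally be generated from the master. A minimal sketch:

<syntaxhighlight lang="bash">
# Illustrative sketch: print a new "kubeadm join ..." command (run on the master)
kubeadm token create --print-join-command
</syntaxhighlight>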
We can now perform a first check of our cluster from the master instance.

Enter the command "kubectl get nodes" to check the nodes currently present in your cluster:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get nodes
NAME                             STATUS   ROLES    AGE   VERSION
k8s-master.cs437cloud.internal   Ready    master   41m   v1.12.2
</syntaxhighlight>

Note: only your master node is listed for now, which is normal since we have not yet added the other nodes to the cluster.

Enter the command "kubectl get pods --all-namespaces" to check the pods/containers currently present in your cluster:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-fwxj9                                  1/1     Running   0          41m
kube-system   coredns-576cbf47c7-t86s9                                  1/1     Running   0          41m
kube-system   etcd-k8s-master.cs437cloud.internal                       1/1     Running   0          41m
kube-system   kube-apiserver-k8s-master.cs437cloud.internal             1/1     Running   0          41m
kube-system   kube-controller-manager-k8s-master.cs437cloud.internal    1/1     Running   0          41m
kube-system   kube-flannel-ds-amd64-wcm7v                               1/1     Running   0          84s
kube-system   kube-proxy-h94bs                                          1/1     Running   0          41m
kube-system   kube-scheduler-k8s-master.cs437cloud.internal             1/1     Running   0          40m
</syntaxhighlight>

Note: only the pods corresponding to the Kubernetes components required by our master node (kube-apiserver, etcd, kube-scheduler, etc.) are present.

We can check the status of these components with the following command:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
</syntaxhighlight>

==Adding the worker nodes to the cluster==

The following actions are to be performed on the worker instances/nodes only.

On each of your worker instances (do not do this on your master instance), run the "kubeadm join ..." command provided at the end of the cluster initialization above:

<syntaxhighlight lang="bash">
[root@k8s-worker01 ~]# kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_sh ip_vs ip_vs_rr ip_vs_wrr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "10.1.1.16:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.16:6443"
[discovery] Requesting info from "https://10.1.1.16:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.1.1.16:6443"
[discovery] Successfully established connection with API Server "10.1.1.16:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-worker01.cs437cloud.internal" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
</syntaxhighlight>

<syntaxhighlight lang="bash">
[root@k8s-worker02 ~]# kubeadm join 10.1.1.16:6443 --token e83pes.u3igpccj2metetu8 --discovery-token-ca-cert-hash sha256:7ea9169bc5ac77b3a2ec37e5129006d9a895ce040e306f3093ce77e7422f7f1c
[preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
you can solve this problem with following methods:
 1. Run 'modprobe -- ' to load missing kernel modules;
 2. Provide the missing builtin kernel ipvs support
[discovery] Trying to connect to API Server "10.1.1.16:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.1.1.16:6443"
[discovery] Requesting info from "https://10.1.1.16:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.1.1.16:6443"
[discovery] Successfully established connection with API Server "10.1.1.16:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-worker02.cs437cloud.internal" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
</syntaxhighlight>
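If a join does not complete, it can help to check on the worker itself that the kubelet service came up correctly. The two standard systemd commands below are shown purely as an illustration and are not specific to this procedure:

<syntaxhighlight lang="bash">
# Illustrative sketch: check and follow the kubelet service on a worker node
systemctl status kubelet
journalctl -u kubelet -f
</syntaxhighlight>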
==Checking the state of the cluster==

Actions to be performed from the master instance/node.

Check that your worker nodes have indeed been added to the cluster by running the "kubectl get nodes" command again:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get nodes
NAME                               STATUS   ROLES    AGE    VERSION
k8s-master.cs437cloud.internal     Ready    master   46m    v1.12.2
k8s-worker01.cs437cloud.internal   Ready    <none>   103s   v1.12.2
k8s-worker02.cs437cloud.internal   Ready    <none>   48s    v1.12.2
</syntaxhighlight>

Note: our two worker nodes (k8s-worker01 and k8s-worker02) are now listed, so they have been added to our cluster.

Now let's run the "kubectl get pods --all-namespaces" command again:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-fwxj9                                  1/1     Running   0          46m
kube-system   coredns-576cbf47c7-t86s9                                  1/1     Running   0          46m
kube-system   etcd-k8s-master.cs437cloud.internal                       1/1     Running   0          46m
kube-system   kube-apiserver-k8s-master.cs437cloud.internal             1/1     Running   0          46m
kube-system   kube-controller-manager-k8s-master.cs437cloud.internal    1/1     Running   0          46m
kube-system   kube-flannel-ds-amd64-724nl                               1/1     Running   0          2m6s
kube-system   kube-flannel-ds-amd64-wcm7v                               1/1     Running   0          6m31s
kube-system   kube-flannel-ds-amd64-z7mwg                               1/1     Running   3          70s
kube-system   kube-proxy-8r7wg                                          1/1     Running   0          2m6s
kube-system   kube-proxy-h94bs                                          1/1     Running   0          46m
kube-system   kube-proxy-m2f5r                                          1/1     Running   0          70s
kube-system   kube-scheduler-k8s-master.cs437cloud.internal             1/1     Running   0          46m
</syntaxhighlight>

Note: there are now as many "kube-flannel" and "kube-proxy" pods/containers as there are nodes in our cluster.

==Deploying a first pod==

We are going to deploy our first [https://kubernetes.io/docs/concepts/workloads/pods/pod/ pod] in our Kubernetes cluster.

To keep things simple, we choose to deploy a pod named "nginx" (without replication), using the "nginx" image:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
</syntaxhighlight>

If we check, it does appear in the output of the command listing the pods of our cluster:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                      READY   STATUS    RESTARTS   AGE
default       nginx-55bd7c9fd-5bghl                                     1/1     Running   0          104s
kube-system   coredns-576cbf47c7-fwxj9                                  1/1     Running   0          57m
kube-system   coredns-576cbf47c7-t86s9                                  1/1     Running   0          57m
kube-system   etcd-k8s-master.cs437cloud.internal                       1/1     Running   0          57m
kube-system   kube-apiserver-k8s-master.cs437cloud.internal             1/1     Running   0          57m
kube-system   kube-controller-manager-k8s-master.cs437cloud.internal    1/1     Running   0          57m
kube-system   kube-flannel-ds-amd64-724nl                               1/1     Running   0          13m
kube-system   kube-flannel-ds-amd64-wcm7v                               1/1     Running   0          17m
kube-system   kube-flannel-ds-amd64-z7mwg                               1/1     Running   3          12m
kube-system   kube-proxy-8r7wg                                          1/1     Running   0          13m
kube-system   kube-proxy-h94bs                                          1/1     Running   0          57m
kube-system   kube-proxy-m2f5r                                          1/1     Running   0          12m
kube-system   kube-scheduler-k8s-master.cs437cloud.internal             1/1     Running   0          57m
</syntaxhighlight>

It appears at the top of the list, in a namespace ("default") different from "kube-system", because it is not one of the components Kubernetes itself needs to run.

You can also avoid displaying the pods specific to the kube-system namespace by running the same command without the "--all-namespaces" parameter:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-55bd7c9fd-vs4fq   1/1     Running   0          3d2h
</syntaxhighlight>

To display the labels:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get pods --show-labels
NAME                    READY   STATUS    RESTARTS   AGE    LABELS
nginx-55bd7c9fd-ckltn   1/1     Running   0          8m2s   app=nginx,pod-template-hash=55bd7c9fd
</syntaxhighlight>

We can also check our deployment with the following command:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get deployments
NAME    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx   1         1         1            1           93m
</syntaxhighlight>
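Our deployment was created without replication (a single replica). Purely as an illustration of the deployment object, the number of replicas can usually be changed afterwards with kubectl scale:

<syntaxhighlight lang="bash">
# Illustrative sketch: scale the nginx deployment up to 3 pods, check, then scale back down
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
kubectl scale deployment nginx --replicas=1
</syntaxhighlight>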
So we have deployed and started an nginx pod, but it is not yet reachable from outside. To make it reachable from outside, we need to expose the port of our pod by creating a service (of type NodePort) with the following command:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl create service nodeport nginx --tcp=80:80
service/nginx created
</syntaxhighlight>

Our service is now created:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        147m
nginx        NodePort    10.108.251.178   <none>        80:30566/TCP   20s
</syntaxhighlight>

Note: it listens on port 80/tcp and is reachable/exposed from outside on port 30566/tcp.
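A NodePort service answers on that port on every node of the cluster, so a quick check before setting up load balancing could be a simple curl from the master towards one of the workers (IP and NodePort taken from our example, adjust them to yours):

<syntaxhighlight lang="bash">
# Illustrative sketch: the nginx welcome page should be served on the NodePort of any node
curl -I http://10.1.1.169:30566
</syntaxhighlight>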
We can obtain the Flannel IP of our pod and the name of the node it is currently running on with the following command:

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl get pods --selector="app=nginx" --output=wide
NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE                               NOMINATED NODE
nginx-55bd7c9fd-vs4fq   1/1     Running   0          174m   10.244.2.2   k8s-worker02.cs437cloud.internal   <none>
</syntaxhighlight>

Here our nginx pod has the IP 10.244.2.2 and is running on our node k8s-worker02.

You can also simply run a command in our nginx pod, or open a shell in it, with the following command (very similar to the docker command):

<syntaxhighlight lang="bash">
[root@k8s-master ~]# kubectl exec -it nginx-55bd7c9fd-vs4fq -- /bin/bash
root@nginx-55bd7c9fd-vs4fq:/#
</syntaxhighlight>

All that remains is to create your load balancing rule on your Ikoula One Cloud network in order to reach/expose your web server (the nginx pod):

- Log in to [https://cloudstack.ikoula.com/client Cloud Ikoula One].

- Go to "Network" in the left-hand vertical menu.

- Click on the network in which you deployed your Kubernetes instances, then click on "View IP addresses", click on your source NAT IP and go to the "Configuration" tab.

- Click on "Load Balancing" and create your rule by specifying a name, the public port ("80" in our example), the private port ("30566" in our example, see above) and an LB algorithm of your choice (round-robin for example), etc.

[[File:faq_k8s_regle_lb-01.png|Kubernetes instances]]

- Tick all of your worker instances.

[[File:faq_k8s_regle_lb-02.png|Tick your kubernetes worker instances]]

Test access to your web server/nginx pod from a browser (via the public IP of the network on which you created the LB rule):

[[File:faq_k8s_browser_nginx.png|Access to your web server]]

Your nginx pod is in fact reachable through any of your nodes; this is made possible by the "kube-proxy" component, which takes care of directing the connection to the node where the pod is running (or to one of them in case of replication).

You have therefore just deployed a basic Kubernetes cluster of three nodes, with one master and two workers.

==Going further==

You can go further by deploying the Kubernetes dashboard, by creating persistent volumes for your pods, by increasing the number of worker nodes, or even by making the master role redundant for high availability or by dedicating nodes to certain components such as etcd.

Here are some useful links:

https://kubernetes.io/docs/reference/kubectl/cheatsheet/

https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/

https://kubernetes.io/docs/concepts/storage/volumes/

https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/

https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/

[[Category:Cloudstack]]
[[Category:Public Cloud]]
[[Category:Private Cloud]]
[[Category:Docker]]
[[Category:CoreOS]]
[[Category:Cloud]]