Kubic: Upgrading kubeadm clusters


This page explains how to upgrade a Kubic Kubernetes cluster created with kubeadm from version 1.21.x to version 1.22.x.

This procedure is based on the upstream documentation, but has been modified to account for features specific to openSUSE Tumbleweed and Kubic.

Before you begin

  • You need a kubeadm Kubernetes cluster running version 1.21.0 or later.
  • Swap must be disabled.
  • The cluster should use static control plane and etcd Pods, or external etcd.
  • Make sure you read the release notes carefully.
  • Make sure to back up any important components, such as app-level state stored in a database. kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice. A hedged etcd backup example is sketched after this list.
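
For example, a minimal sketch of taking a local etcd snapshot on a control plane node, assuming a stacked etcd and the default kubeadm certificate paths (the snapshot path is only a placeholder):

# adjust the snapshot path and certificate paths to your setup
sudo ETCDCTL_API=3 etcdctl snapshot save /root/etcd-backup-pre-upgrade.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key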

Additional information

  • All containers are restarted after the upgrade, because the container spec hash value changes.
  • You can only upgrade from one minor version to the next minor version, or between patch releases of the same minor version. That is, you cannot skip minor versions when upgrading. For example, you can upgrade from 1.y to 1.y+1, but not from 1.y to 1.y+2.

Upgrading control plane (aka master) nodes

Upgrade the first control plane node

Any Kubic control plane node running 1.21.x with a Kubic snapshot version of 20210901 or later already has kubeadm 1.22.x installed.

1. Verify that kubeadm is 1.22.1 or later

kubeadm version

2. Drain the control plane node

# replace <cp-node-name> with the name of your control plane node
kubectl drain <cp-node-name> --ignore-daemonsets

3. On the control plane node, run

sudo kubeadm upgrade plan

You should see output similar to the following

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.4
[upgrade/versions] kubeadm version: v1.22.1
[upgrade/versions] Latest stable version: v1.22.1
[upgrade/versions] Latest version in the v1.21 series: v1.21.4

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.21.4   v1.22.1

Upgrade to the latest version in the v1.22 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.21.4   v1.22.1
Controller Manager   v1.21.4   v1.22.1
Scheduler            v1.21.4   v1.22.1
Kube Proxy           v1.21.4   v1.22.1
CoreDNS              1.7.0     1.8.4
Etcd                 3.4.13-0  3.5.0

You can now apply the upgrade by executing the following command:

    kubeadm upgrade apply v1.22.1

_____________________________________________________________________

This command checks whether your cluster can be upgraded, and fetches the versions you can upgrade to.

4. Choose a version to upgrade to, and run the appropriate command. For example

# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.22.x

You should see output similar to the following

[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.22.1"
[upgrade/versions] Cluster version: v1.21.4
[upgrade/versions] kubeadm version: v1.22.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.22.1"...
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests308527012"
W0308 18:48:14.535122    3082 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-myhost hash: 2cc222e1a577b40a8c2832320db54b46
Static pod: kube-apiserver-myhost hash: 609429acb0d71dce6725836dd97d8bf4
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-myhost hash: f7ce4bc35cb6e646161578ac69910f18
Static pod: kube-controller-manager-myhost hash: c7a1232ba2c5dc15641c392662fe5156
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-03-08-18-48-14/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-myhost hash: e3025acd90e7465e66fa19c71b916366
Static pod: kube-scheduler-myhost hash: b1b721486ae0ac504c160dcdc457ab0d
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.22" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

5. Manually upgrade your CNI provider plugin. (This step is not required on the other control plane nodes.)
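
How to do this depends on your CNI provider. For a manifest-deployed provider, a sketch would be to apply the manifest of the plugin release that matches the new Kubernetes version; the URL below is only a placeholder, so follow your CNI provider's own upgrade instructions:

# placeholder manifest URL; consult your CNI provider's documentation
kubectl apply -f https://example.com/path/to/<cni-plugin>-manifest.yaml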

6. Uncordon the control plane node

# replace <cp-node-name> with the name of your control plane node
kubectl uncordon <cp-node-name>

Upgrade the other control plane nodes

If you have additional master nodes, follow the same steps as above, but there is no need to run kubeadm upgrade plan, and run kubeadm upgrade node instead of kubeadm upgrade apply.
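
For example, on each additional control plane node:

# run this instead of 'kubeadm upgrade plan' and 'kubeadm upgrade apply'
sudo kubeadm upgrade node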

Upgrade the kubelets

Now, on each control plane node, configure the kubelet to use the new version and then restart the kubelet

sed -i 's/KUBELET_VER=1.21/KUBELET_VER=1.22/' /etc/sysconfig/kubelet
systemctl restart kubelet

Upgrade worker nodes

The upgrade procedure on worker nodes should be executed one node at a time, or a few nodes at a time, without compromising the minimum capacity required to run your workloads.

kubeadm

Any Kubic node running Kubic snapshot version 20210901 or later already has kubeadm 1.22.x installed; verify with

kubeadm version

Drain the node

# replace <node-name> with the name of your worker node
kubectl drain <node-name> --ignore-daemonsets

Upgrade the kubelet configuration

kubeadm upgrade node

Run the new kubelet

sed -i 's/KUBELET_VER=1.21/KUBELET_VER=1.22/' /etc/sysconfig/kubelet
systemctl restart kubelet

Uncordon the node

# replace <node-name> with the name of your node
kubectl uncordon <node-name>

Verify the status of the cluster

After the kubelet has been upgraded on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster

kubectl get nodes

The STATUS column should show Ready for all of your nodes, and the version numbers should be updated.
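
For illustration only, with hypothetical node names, the output might look like this:

NAME       STATUS   ROLES                  AGE   VERSION
cp-1       Ready    control-plane,master   90d   v1.22.1
worker-1   Ready    <none>                 90d   v1.22.1
worker-2   Ready    <none>                 90d   v1.22.1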

Recovering from a failure state

If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare.

To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster runs.
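
For example, re-applying the version you already chose:

# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.22.x --force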

During the upgrade, kubeadm writes the following backup folders under /etc/kubernetes/tmp

  • kubeadm-backup-etcd-<date>-<time>
  • kubeadm-backup-manifests-<date>-<time>

kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane node. In case of an etcd upgrade failure, and if the automatic rollback does not work, the contents of this folder can be manually restored to /var/lib/etcd. If external etcd is used, this backup folder will be empty.
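
A minimal sketch of such a manual restore, assuming the backup folder mirrors the etcd data directory and the local etcd member is not running while you copy (the timestamped folder name is a placeholder):

# <date>-<time> stands for the actual timestamped folder on your node
sudo cp -a /etc/kubernetes/tmp/kubeadm-backup-etcd-<date>-<time>/. /var/lib/etcd/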

kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane node. In case of an upgrade failure, and if the automatic rollback does not work, the contents of this folder can be manually restored to /etc/kubernetes/manifests. If for some reason there is no difference between the pre-upgrade and post-upgrade manifest files for a given component, no backup file is written for it.
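
Similarly, a sketch of manually restoring the static Pod manifests; the kubelet picks up changes in this directory automatically (the timestamped folder name is again a placeholder):

# <date>-<time> stands for the actual timestamped folder on your node
sudo cp /etc/kubernetes/tmp/kubeadm-backup-manifests-<date>-<time>/*.yaml /etc/kubernetes/manifests/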