
Deploying Kubernetes 1.24 (k8s) with kubeadm

无味wy · 2023-10-19

1. Preparation
1.1 System configuration
Before installing, complete the following preparation on three CentOS 7.x hosts, listed below.
node1 needs at least 2 CPUs and 2 GB of RAM, otherwise the kubeadm init pre-flight checks will not pass.

cat /etc/hosts
10.0.0.22    node1
10.0.0.23    node2
10.0.0.24    node3

Complete the following system configuration on every host. If a firewall policy is enabled on the hosts, either open the ports required by each Kubernetes component (see Ports and Protocols in the Kubernetes documentation) or disable the host firewall, for example as sketched below.
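
If you go the disable route, a minimal sketch for a lab environment like this one is to stop and disable firewalld instead of opening the individual ports:

# lab-only shortcut: turn off the host firewall instead of opening the required ports
systemctl stop firewalld
systemctl disable firewalld
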
Disable SELinux:

setenforce 0

vi /etc/selinux/config
SELINUX=disabled
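
Instead of editing the file with vi, the same change can be made non-interactively; this assumes the file currently contains SELINUX=enforcing:

# set SELINUX=disabled in /etc/selinux/config
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config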

Create the /etc/modules-load.d/containerd.conf configuration file:

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Run the following commands to load the modules now:

modprobe overlay
modprobe br_netfilter

Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:

cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

Run the following command to apply the settings:

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
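
Optionally verify that the settings took effect; each of the three keys below should print 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward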

1.2 Prerequisites for enabling IPVS
IPVS has long been part of the mainline kernel, so enabling IPVS mode for kube-proxy only requires that the following kernel modules be loaded:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Run the following script on every node:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Run lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm that the kernel modules are loaded correctly. Next, make sure the ipset package is installed on every node, and also install the ipvsadm management tool so the IPVS proxy rules are easier to inspect:

yum install -y ipset ipvsadm

If these prerequisites are not met, kube-proxy falls back to iptables mode even when its configuration requests IPVS mode.
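
Once the cluster is running (section 2.2), the mode kube-proxy actually chose can be checked from any node by querying its metrics endpoint (127.0.0.1:10249 is the default --metrics-bind-address); the expected output here is ipvs:

curl 127.0.0.1:10249/proxyMode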

1.3 Install the container runtime containerd
Install the containerd container runtime on every node. Download the containerd binary release:

wget https://github.com/containerd/containerd/releases/download/v1.6.4/cri-containerd-cni-1.6.4-linux-amd64.tar.gz

The cri-containerd-cni-1.6.4-linux-amd64.tar.gz archive is already laid out in the directory structure recommended by the official binary deployment docs; it contains the systemd unit file as well as the containerd and CNI files. Extract it into the root directory /:

tar -zxvf cri-containerd-cni-1.6.4-linux-amd64.tar.gz -C /
 
etc/
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
etc/crictl.yaml
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
usr/
usr/local/
usr/local/sbin/
usr/local/sbin/runc
usr/local/bin/
usr/local/bin/critest
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/ctd-decoder
usr/local/bin/containerd
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/containerd-stress
usr/local/bin/ctr
usr/local/bin/crictl
......
opt/cni/
opt/cni/bin/
opt/cni/bin/bridge
......

Note: in testing, the runc bundled in cri-containerd-cni-1.6.4-linux-amd64.tar.gz has dynamic-linking problems on CentOS 7, so download runc separately from its GitHub releases and replace the runc installed from the archive above:

wget https://github.com/opencontainers/runc/releases/download/v1.1.2/runc.amd64

Replace the runc binary in the following directory:

/usr/local/sbin/
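
A minimal sketch of the replacement, assuming the file downloaded above is named runc.amd64 in the current directory:

# overwrite the bundled runc with the statically linked release binary
install -m 755 runc.amd64 /usr/local/sbin/runc
runc --version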

Next, generate the containerd configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

According to the Container runtimes documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver makes nodes more stable under resource pressure, so configure containerd on every node to use the systemd cgroup driver. Edit the /etc/containerd/config.toml generated above:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Also change the sandbox image in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "k8s.gcr.io/pause:3.6"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"

Enable containerd to start on boot and start it now:

systemctl enable containerd --now

Test with crictl and make sure version information is printed without any error output:

crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.6.4
RuntimeApiVersion:  v1alpha2

2. Deploying Kubernetes with kubeadm
2.1 Install kubeadm and kubelet
Install kubeadm and kubelet on every node:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum makecache fast
yum install kubelet kubeadm kubectl
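
Since this guide targets Kubernetes 1.24.0, you may prefer to pin the package versions rather than install whatever is newest in the repository; a sketch (the exact release suffix in the mirror may differ):

yum install -y kubelet-1.24.0 kubeadm-1.24.0 kubectl-1.24.0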

Disable swap:

swapoff -a

# Edit /etc/fstab and comment out the swap mount entry, then run free -m to confirm that swap is off.
To adjust the swappiness parameter, add the following line to /etc/sysctl.d/99-kubernetes-cri.conf:
vm.swappiness=0

Run sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf to apply the change.
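
The /etc/fstab edit can also be done with a one-liner; this sketch comments out any line whose fields contain the word swap, so review the file afterwards:

sed -ri '/\sswap\s/s/^/#/' /etc/fstab
# confirm swap shows 0
free -m
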
2.2 Initialize the cluster with kubeadm init
Enable the kubelet service on every node so it starts on boot:

systemctl enable kubelet.service

Running kubeadm config print init-defaults --component-configs KubeletConfiguration prints the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.24.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

The defaults show that imageRepository controls where the images needed for cluster initialization are pulled from. Based on the defaults, create the kubeadm.yaml configuration file used to initialize this cluster:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.22
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.24.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is set to the Aliyun registry to avoid pulling images directly from gcr.io, which is blocked; criSocket points at the containerd runtime; the kubelet cgroupDriver is set to systemd; and the kube-proxy mode is set to ipvs. Note that the ClusterConfiguration section still uses the deprecated kubeadm.k8s.io/v1beta2 API version, which is why kubeadm init prints a deprecation warning below. Before initializing the cluster, you can pre-pull the container images Kubernetes needs on every node with kubeadm config images pull --config kubeadm.yaml.
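
If you want to silence that warning, kubeadm can rewrite the file to the newer v1beta3 API for you, as hinted in the init output (the new filename below is arbitrary):

kubeadm config migrate --old-config kubeadm.yaml --new-config kubeadm-v1beta3.yaml
mv kubeadm-v1beta3.yaml kubeadm.yaml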

kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.24.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.7
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.3-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6

Next, initialize the cluster with kubeadm. node1 is used as the master node; run the following command on node1:

kubeadm init --config=kubeadm.yaml
W0309 18:27:43.224566    6714 common.go:83] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta2". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 10.0.0.22]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [10.0.0.22 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [10.0.0.22 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 8.008173 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: w1hc2o.zhyrqjkvcxdrl79s
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.22:6443 --token w1hc2o.zhyrqjkvcxdrl79s \
    --discovery-token-ca-cert-hash sha256:2a05dccae72b7cda8846cb049405f4a0b52397a887a9733ac6d11cb9c638a313 

The complete initialization output is recorded above; from it you can see the key steps required to set up a Kubernetes cluster by hand. The key parts are:

[certs] generates the various certificates.
[kubeconfig] generates the kubeconfig files.
[kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml".
[control-plane] creates the static Pods for the apiserver, controller-manager and scheduler from the YAML files in the /etc/kubernetes/manifests directory.
[bootstrap-token] generates the token; record it, as it is needed later when joining nodes to the cluster with kubeadm join.
The following commands show a regular user how to configure kubectl access to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Finally, the command for joining nodes to the cluster is given:
kubeadm join 10.0.0.22:6443 --token w1hc2o.zhyrqjkvcxdrl79s \
    --discovery-token-ca-cert-hash sha256:2a05dccae72b7cda8846cb049405f4a0b52397a887a9733ac6d11cb9c638a313

Configure the environment variables:

# regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# root user
# temporary (the variable is lost when the current shell session is closed)
export KUBECONFIG=/etc/kubernetes/admin.conf
# permanent (recommended)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile

Check the nodes:

kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   12m   v1.24.0

Check the pod status:

kubectl get pods -A
NAMESPACE     NAME                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-74586cf9b6-ffm8l        1/1     Running   0          23m
kube-system   coredns-74586cf9b6-j49nz        1/1     Running   0          23m
kube-system   etcd-node1                      1/1     Running   0          23m
kube-system   kube-apiserver-node1            1/1     Running   0          23m
kube-system   kube-controller-manager-node1   1/1     Running   0          23m
kube-system   kube-proxy-mnmnt                1/1     Running   0          23m
kube-system   kube-scheduler-node1            1/1     Running   0          23m

Join the worker nodes to the cluster:

# run on node2 and node3
kubeadm join 10.0.0.22:6443 --token w1hc2o.zhyrqjkvcxdrl79s \
--discovery-token-ca-cert-hash sha256:2a05dccae72b7cda8846cb049405f4a0b52397a887a9733ac6d11cb9c638a313

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...


This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster status:

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}   

kubectl get nodes
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   36m   v1.24.0
node2   Ready    <none>          18s   v1.24.2
node3   Ready    <none>          25s   v1.24.2
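
The ROLES column for the workers shows <none>. That is purely cosmetic, but if you want it to read worker, a label such as the following can be added (the role name here is an arbitrary choice):

kubectl label node node2 node-role.kubernetes.io/worker=worker
kubectl label node node3 node-role.kubernetes.io/worker=worker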

Install the package manager Helm 3
Helm is the package manager for Kubernetes, and the remaining steps use Helm to install common Kubernetes components. First install Helm on the master node node1:

wget https://get.helm.sh/helm-v3.9.0-linux-amd64.tar.gz
tar -zxvf helm-v3.9.0-linux-amd64.tar.gz
mv linux-amd64/helm  /usr/local/bin/
# run helm list and confirm there is no error output
[root@node1 1.8+]# helm list
NAME    NAMESPACE   REVISION    UPDATED STATUS  CHART   APP VERSION

Deploy the Pod network add-on Calico
Calico is used as the Pod network add-on for the cluster. Install calico into the cluster with Helm. Download the tigera-operator Helm chart:

wget https://github.com/projectcalico/calico/releases/download/v3.23.1/tigera-operator-v3.23.1.tgz

View the configurable values of this chart:

helm show values tigera-operator-v3.23.1.tgz
 
imagePullSecrets: {}
 
installation:
  enabled: true
  kubernetesProvider: ""
 
apiServer:
  enabled: true
 
certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:
 
resources: {}
 
# Configuration for the tigera operator
tigeraOperator:
  image: tigera/operator
  version: v1.27.1
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.23.1
# The values above can be customized, e.g. to pull the calico images from a private registry.
# This is just a local test of the new k8s version, so keeping values.yaml empty is enough.
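
Since no values are being overridden, an empty values.yaml is all the install command below needs (create it first, or simply drop the -f flag):

touch values.yaml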

Install calico with Helm:

helm install calico tigera-operator-v3.23.1.tgz -n kube-system  --create-namespace -f values.yaml

Wait until all pods are in the Running state:

[root@node1 1.8+]# kubectl get pod -n kube-system | grep tigera-operator
tigera-operator-5fb55776df-zlpsj   1/1     Running   6 (73m ago)   4d13h
[root@node1 1.8+]# 
[root@node1 1.8+]# kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-68884f975d-vw9qp   1/1     Running   2 (72m ago)   4d13h
calico-node-2gls2                          1/1     Running   2 (73m ago)   4d13h
calico-node-6smc9                          1/1     Running   1 (73m ago)   4d13h
calico-node-v5xtt                          1/1     Running   1 (73m ago)   4d13h
calico-typha-6df8cd9bc9-t5qjl              1/1     Running   1 (73m ago)   4d13h
calico-typha-6df8cd9bc9-vxg46              1/1     Running   1 (73m ago)   4d13h

Take a look at the API resources that calico adds to the cluster:

[root@node1 1.8+]# kubectl api-resources | grep calico
bgpconfigurations                                                                 crd.projectcalico.org/v1               false        BGPConfiguration
bgppeers                                                                          crd.projectcalico.org/v1               false        BGPPeer
blockaffinities                                                                   crd.projectcalico.org/v1               false        BlockAffinity
caliconodestatuses                                                                crd.projectcalico.org/v1               false        CalicoNodeStatus
clusterinformations                                                               crd.projectcalico.org/v1               false        ClusterInformation
felixconfigurations                                                               crd.projectcalico.org/v1               false        FelixConfiguration
globalnetworkpolicies                                                             crd.projectcalico.org/v1               false        GlobalNetworkPolicy
globalnetworksets                                                                 crd.projectcalico.org/v1               false        GlobalNetworkSet
hostendpoints                                                                     crd.projectcalico.org/v1               false        HostEndpoint
ipamblocks                                                                        crd.projectcalico.org/v1               false        IPAMBlock
ipamconfigs                                                                       crd.projectcalico.org/v1               false        IPAMConfig
ipamhandles                                                                       crd.projectcalico.org/v1               false        IPAMHandle
ippools                                                                           crd.projectcalico.org/v1               false        IPPool
ipreservations                                                                    crd.projectcalico.org/v1               false        IPReservation
kubecontrollersconfigurations                                                     crd.projectcalico.org/v1               false        KubeControllersConfiguration
networkpolicies                                                                   crd.projectcalico.org/v1               true         NetworkPolicy
networksets                                                                       crd.projectcalico.org/v1               true         NetworkSet
bgpconfigurations                 bgpconfig,bgpconfigs                            projectcalico.org/v3                   false        BGPConfiguration
bgppeers                                                                          projectcalico.org/v3                   false        BGPPeer
caliconodestatuses                caliconodestatus                                projectcalico.org/v3                   false        CalicoNodeStatus
clusterinformations               clusterinfo                                     projectcalico.org/v3                   false        ClusterInformation
felixconfigurations               felixconfig,felixconfigs                        projectcalico.org/v3                   false        FelixConfiguration
globalnetworkpolicies             gnp,cgnp,calicoglobalnetworkpolicies            projectcalico.org/v3                   false        GlobalNetworkPolicy
globalnetworksets                                                                 projectcalico.org/v3                   false        GlobalNetworkSet
hostendpoints                     hep,heps                                        projectcalico.org/v3                   false        HostEndpoint
ippools                                                                           projectcalico.org/v3                   false        IPPool
ipreservations                                                                    projectcalico.org/v3                   false        IPReservation
kubecontrollersconfigurations                                                     projectcalico.org/v3                   false        KubeControllersConfiguration
networkpolicies                   cnp,caliconetworkpolicy,caliconetworkpolicies   projectcalico.org/v3                   true         NetworkPolicy
networksets                       netsets                                         projectcalico.org/v3                   true         NetworkSet
profiles                                                                          projectcalico.org/v3                   false        Profile

These API resources belong to calico, so managing them with kubectl is not recommended; use calicoctl instead. Install calicoctl as a kubectl plugin:

cd /usr/local/bin
curl -L -o kubectl-calico "https://github.com/projectcalico/calicoctl/releases/download/v3.21.5/calicoctl-linux-amd64"
chmod +x kubectl-calico

Verify that the plugin works:

kubectl calico -h

Verify that cluster DNS works

kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$

Inside the container, run nslookup kubernetes.default and confirm that name resolution works:

[ root@curl:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
 
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
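
When you are done, the test pod can be removed (it keeps running in the background after you detach from the interactive session):

kubectl delete pod curl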
