草庐IT

Setting Up and Trying Out a Docker Swarm Cluster and Kubernetes

罗比不纠结 · 2023-05-20

Part 1: Setting Up and Trying Out a Docker Swarm Cluster

Docker Swarm Setup

1. OS configuration

Step 1

Disable SELinux and firewalld

Step 2

Configure the network

Step 3 

[root@vm1 ~]# ip -br a | grep 0s8 | awk '{print $3}'

192.168.50.100/24

Step 4 

[root@vm2 ~]# ip -br a | grep 0s8 | awk '{print $3}'

192.168.50.120/24
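The `ip -br a | awk '{print $3}'` pipeline above extracts the third whitespace-separated field, which in the brief listing is the first (IPv4) address. A quick way to see what awk picks out, using a simulated `ip -br a` line (the interface state and IPv6 value here are made up for illustration):

```shell
# One line of `ip -br a` output, simulated (IPv6 address is hypothetical):
line='enp0s8           UP             192.168.50.100/24 fe80::a00:27ff:fe4d:77d1/64'

# Same extraction as above: field 3 is the IPv4 address in CIDR form.
addr=$(printf '%s\n' "$line" | awk '{print $3}')
echo "$addr"   # → 192.168.50.100/24
```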

2. Install Docker

Step 1

[root@vm1 ~]# cat install-docker.sh

(Steps 2-9 below are the contents of this script.)

Step 2

yum remove docker* -y

Step 3 

rm -rf /var/lib/docker

Step 4 

yum -y install wget

Step 5

wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo

Step 6

yum install docker-ce docker-ce-cli containerd.io -y

Step 7

docker --version

Step 8

systemctl enable docker --now

Step 9

docker run hello-world

Step 10

[root@vm1 ~]# bash install-docker.sh

Step 11

[root@vm1 ~]# curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Step 12

[root@vm1 ~]# chmod +x /usr/local/bin/docker-compose

Step 13

[root@vm1 ~]# docker -v

Docker version 20.10.12, build e91ed57

Step 14

[root@vm1 ~]# docker-compose -v

docker-compose version 1.29.2, build 5becea4c

Step 15

[root@vm2 ~]# docker -v

Docker version 20.10.12, build e91ed57

Step 16

[root@vm2 ~]# docker-compose -v

docker-compose version 1.29.2, build 5becea4c
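The commands in Steps 2-9 can be collected into a single `install-docker.sh` like the one cat'ed in Step 1. A sketch of its probable contents (CentOS yum commands; written to a temporary path here so it can be syntax-checked without actually installing anything or needing root):

```shell
cat > /tmp/install-docker.sh <<'EOF'
#!/bin/bash
set -e
# Remove any old Docker packages and data
yum remove -y 'docker*'
rm -rf /var/lib/docker
# Add the upstream docker-ce repo and install the engine
yum install -y wget
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
docker --version
# Start now and on every boot, then smoke-test
systemctl enable docker --now
docker run hello-world
EOF

# Syntax check only; running it for real requires root and network access.
bash -n /tmp/install-docker.sh && echo "syntax OK"
```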

3. Set up the docker0 network

(The default bridge subnet is set to 192.168.80.0/24 on vm1 and 192.168.90.0/24 on vm2 so that the two hosts' container networks do not overlap.)

Step 1

[root@vm1 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"

Step 2

[{192.168.80.0/24  192.168.80.1 map[]}]

Step 3 

[root@vm2 ~]# docker network inspect bridge -f "{{.IPAM.Config}}"

Step 4 

[{192.168.90.0/24  192.168.90.1 map[]}]

Building the Swarm Cluster

1. Initialize the cluster

Step 1

[root@vm1 ~]# docker swarm init --advertise-addr 192.168.50.100

Step 2

Swarm initialized: current node (kdcrkd6sqteevq9jgy70fd0h0) is now a manager.

Step 3 

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377

Step 4 

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Step 5

[root@vm1 ~]# docker swarm join-token worker

Step 6

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377

Step 7

[root@vm1 ~]# docker node ls

 ID                       HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0*   vm1            Ready      Active             Leader                  20.10.12
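For scripted provisioning, the worker join command can be assembled from the token shown above. A sketch using this run's token as a literal value (on a live manager you would fetch it with `docker swarm join-token -q worker` instead):

```shell
manager_ip=192.168.50.100
# Token as printed by `docker swarm init` above; fetch it live with:
#   token=$(docker swarm join-token -q worker)
token=SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj

# Build the command a worker would run to join the swarm.
join_cmd="docker swarm join --token ${token} ${manager_ip}:2377"
echo "$join_cmd"
```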

2. Add a worker to the Swarm cluster

Step 1

[root@vm2 ~]# docker swarm join --token SWMTKN-1-0h1lwim8eh8ygwmh9mg3chuep8lf3dh0z8iv79v3km7itl28ww-4yt8thj4v7a2sj45iab6v8qqj 192.168.50.100:2377

This node joined a swarm as a worker.

View cluster nodes

Step 1

[root@vm2 ~]# docker node ls

Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.

Step 2 (run from the manager, vm1, instead)

ID                                HOSTNAME    STATUS     AVAILABILITY    MANAGER STATUS    ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *   vm1              Ready        Active                Leader                     20.10.12

4hh92oj2meotbi0etnje15bzq     vm2              Ready        Active                                        20.10.12

3. Add labels

1) Add name labels

Step 1

[root@vm1 ~]# docker node update --label-add name=swarm-master-1 vm1

vm1

Step 2

[root@vm1 ~]# docker node update --label-add name=swarm-master-2 vm2

vm2

Step 3 

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *      vm1          Ready      Active           Leader                20.10.12

4hh92oj2meotbi0etnje15bzq       vm2          Ready      Active                                 20.10.12

2) View labels

Step 1

[root@vm1 ~]# docker node inspect vm1 -f "{{.Spec.Labels}}"

map[name:swarm-master-1]

Step 2

[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"

map[HOSTNAME:master-2 name:master-2]

3) Update labels

Step 1

[root@vm1 ~]# docker node update --help

Usage:  docker node update [OPTIONS] NODE

Step 2

Update a node

Options:

      --availability string   Availability of the node ("active"|"pause"|"drain")

      --label-add list        Add or update a node label (key=value)

      --label-rm list         Remove a node label if exists

      --role string           Role of the node ("worker"|"manager")

Step 3 

[root@vm1 ~]# docker node update --label-add name=master-2 vm2

vm2

Step 4 

[root@vm1 ~]# echo $?

0

Step 5

[root@vm1 ~]#

Step 6

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *      vm1          Ready      Active           Leader                20.10.12

4hh92oj2meotbi0etnje15bzq       vm2          Ready      Active                                 20.10.12

Step 7

[root@vm1 ~]# docker node promote master-2

Error: No such node: master-2

Step 8

[root@vm1 ~]# docker node update --label-add HOSTNAME=master-2 vm2

vm2

Step 9

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *      vm1          Ready      Active           Leader               20.10.12

4hh92oj2meotbi0etnje15bzq       vm2          Ready      Active                                20.10.12

Step 10

[root@vm1 ~]# docker node promote master-2

Error: No such node: master-2

(`docker node promote` expects a node ID or hostname, not a label value, so promoting by the label `master-2` fails; `docker node promote vm2` is what succeeds, as the Reachable manager status in the next section shows.)

4. Promote the worker to a manager

Step 1

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *     vm1           Ready      Active           Leader                20.10.12

4hh92oj2meotbi0etnje15bzq      vm2           Ready      Active           Reachable             20.10.12

Step 2

[root@vm2 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0       vm1          Ready       Active            Leader               20.10.12

4hh92oj2meotbi0etnje15bzq *     vm2          Ready       Active            Reachable             20.10.12

5. View node information

Step 1

[root@vm1 ~]# docker node inspect vm2 -f "{{.Spec.Labels}}"

map[HOSTNAME:master-2 name:master-2]

Step 2

[root@vm1 ~]# docker node inspect vm2

6. Create an overlay network

Step 1

[root@vm1 ~]# docker network create -d overlay --subnet=192.168.82.0/24 --gateway=192.168.82.1 --attachable swarm-net

xywzrf7ftwenaxbu0zmewh183

Step 2

[root@vm1 ~]# docker network inspect swarm-net -f "{{.IPAM}}"

{default map[] [{192.168.82.0/24  192.168.82.1 map[]}]}

7. Create a service and verify it

1) Create

Step 1

[root@vm1 ~]# docker service create --replicas 3 -p 10080:80 --network swarm-net --name nginx-cluster nginx

r4v6w094yxl370bynyzghh37a

overall progress: 3 out of 3 tasks

1/3: running   [==================================================>]

2/3: running   [==================================================>]

3/3: running   [==================================================>]

verify: Service converged

Step 2

[root@vm1 ~]#

2) View

Step 1

[root@vm1 ~]# docker service ls

ID             NAME            MODE         REPLICAS   IMAGE          PORTS

r4v6w094yxl3    nginx-cluster      replicated       3/3          nginx:latest       *:10080->80/tcp

Step 2

[root@vm1 ~]# ss -ntl | grep 10080

LISTEN 0      128                *:10080            *:*

Step 3 

[root@vm1 ~]# docker ps

CONTAINER ID  IMAGE      COMMAND              CREATED         STATUS         PORTS     NAMES

3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"      7 minutes ago        Up 7 minutes     80/tcp   

nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

Step 4

[root@vm1 ~]# docker port nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

(No output: the port is published at the service level through the ingress routing mesh, not on the individual container, so `docker port` prints nothing and exits 0.)

Step 5

[root@vm1 ~]# echo $?

0

Step 6

[root@vm1 ~]#

3) Access

Step 1

[root@vm1 ~]# curl 192.168.50.100:10080

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

...

8. Check how load balancing distributes requests on a single host

1) Modify the default web pages

Step 1

[root@vm1 ~]# docker exec -it nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l bash

root@3ddd0e479de6:/# echo '#1 in master 1' > /usr/share/nginx/html/index.html

Step 2

[root@vm2 ~]# docker exec -it nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga bash

root@0d6709372322:/# echo '#2 in master 2' > /usr/share/nginx/html/index.html

root@0d6709372322:/#

Step 3 

[root@vm2 ~]# docker exec -it nginx-cluster.3.yofvioldzci3k4geve7lykyrs bash

root@6b1e246bdc34:/#  echo '#3 in master 2' > /usr/share/nginx/html/index.html

2) Access test

Step 1

[root@vm2 ~]# curl 192.168.50.120:10080

#3 in master 2

Step 2

[root@vm2 ~]# curl 192.168.50.120:10080

#1 in master 1

Step 3 

[root@vm2 ~]# curl 192.168.50.120:10080

#2 in master 2

Step 4 

[root@vm2 ~]# curl 192.168.50.120:10080

#3 in master 2
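The four curls above rotate through the replicas because the published port goes through Swarm's ingress routing mesh, which balances requests round-robin across tasks. A small helper to tally many responses makes the spread visible; it is shown here on the simulated response bodies from this run (against the live service you would replace the `printf` with a `curl` loop):

```shell
# Count identical response bodies to see the load-balancing spread.
tally() { sort | uniq -c | sort -rn; }

# Simulated responses (the four curl results above):
printf '%s\n' '#3 in master 2' '#1 in master 1' '#2 in master 2' '#3 in master 2' | tally
```

Against the running service: `for i in $(seq 12); do curl -s 192.168.50.120:10080; done | tally`.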

9. Verify HA (high availability)

Step 1

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES

6b1e246bdc34   nginx:latest   "/docker-entrypoint.…"   18 minutes ago   Up 18 minutes   80/tcp    nginx-cluster.3.yofvioldzci3k4geve7lykyrs

0d6709372322   nginx:latest   "/docker-entrypoint.…"   18 minutes ago   Up 18 minutes   80/tcp    nginx-cluster.2.gp7szi6348r0gfakr6v1i42ga

Step 2

[root@vm2 ~]#

Step 3 

[root@vm2 ~]# systemctl stop docker

Warning: Stopping docker.service, but it can still be activated by:

  docker.socket

Step 4 

[root@vm2 ~]# systemctl status docker

● docker.service - Docker Application Container Engine

   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)

Step 5

[root@vm1 ~]# docker node ls

Error response from daemon: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Step 6

[root@vm2 ~]# systemctl status docker

● docker.service - Docker Application Container Engine

   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)

   Active: active (running) since Sun 2022-01-23 13:22:38 JST; 28s ago

Step 7

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *      vm1          Ready      Active           Leader               20.10.12

4hh92oj2meotbi0etnje15bzq       vm2          Ready      Active           Reachable            20.10.12

Step 8

[root@vm2 ~]# shutdown -h now

Step 9

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *     vm1           Ready       Active           Leader                20.10.12

4hh92oj2meotbi0etnje15bzq      vm2           Ready       Active           Unreachable            20.10.12

Step 10

[root@vm1 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES

acc40b31df3f   nginx:latest   "/docker-entrypoint.…"   2 minutes ago    Up 2 minutes    80/tcp    nginx-cluster.3.ei48wkjvtmn53mfsjthlb52ef

d4635d8f2322   nginx:latest   "/docker-entrypoint.…"   2 minutes ago    Up 2 minutes    80/tcp    nginx-cluster.2.yyc2fmh73p23adbu44auuzf7r

3ddd0e479de6   nginx:latest   "/docker-entrypoint.…"   31 minutes ago   Up 31 minutes   80/tcp    nginx-cluster.1.6i1ncpnncxzxmsckwyahj9f7l

Step 11

(vm2 has been brought back online, and a third node, vm3, has joined the cluster as a worker.)

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *      vm1          Ready      Active            Leader               20.10.12

4hh92oj2meotbi0etnje15bzq       vm2          Ready       Active           Reachable            20.10.12

lp4i21pj0sij9yz81f7u8dzy7        vm3          Ready       Active                               20.10.12

Step 12

[root@vm1 ~]# ip -br a | grep enp0s8 | awk '{print $3}'

192.168.50.100/24


Step 13

[root@vm2 ~]# ip -br a | grep enp0s8 | awk '{print $3}'

192.168.50.120/24

Step 14

[root@vm3 ~]# ip -br a | grep enp0s8 | awk '{print $3}'

192.168.50.130/24

Step 15

[root@vm1 ~]# docker node ls

ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION

kdcrkd6sqteevq9jgy70fd0h0 *     vm1           Ready      Active           Leader               20.10.12

4hh92oj2meotbi0etnje15bzq      vm2           Ready      Active           Reachable            20.10.12

lp4i21pj0sij9yz81f7u8dzy7       vm3           Ready      Active                               20.10.12

Step 16

[root@vm1 ~]# docker service rm r4v6w094yxl3

r4v6w094yxl3

Step 17

[root@vm1 ~]# docker service create --replicas 6 -p 10080:80 --network swarm-net --name nginx-cluster nginx

kzdm5zhgt1eo9goxy0rjwmklm

overall progress: 6 out of 6 tasks

1/6: running   [==================================================>]

2/6: running   [==================================================>]

3/6: running   [==================================================>]

4/6: running   [==================================================>]

5/6: running   [==================================================>]

6/6: running   [==================================================>]

verify: Service converged

Step 18

[root@vm1 ~]#

Step 19

[root@vm1 ~]# docker service ls

ID             NAME            MODE         REPLICAS   IMAGE          PORTS

kzdm5zhgt1eo   nginx-cluster       replicated        6/6         nginx:latest       *:10080->80/tcp

Step 20

[root@vm3 ~]# docker service ls

Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager.

Step 21

[root@vm1 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES

31177737f781   nginx:latest   "/docker-entrypoint.…"   28 seconds ago   Up 25 seconds   80/tcp    nginx-cluster.5.9a7wd7yqo4lweayw3352kssru

791b93b799d8   nginx:latest   "/docker-entrypoint.…"   28 seconds ago   Up 25 seconds   80/tcp    nginx-cluster.2.bcrt9chkfrmbjyy30n4g3il71

Step 22

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES

4a59afd4d2d2   nginx:latest   "/docker-entrypoint.…"   3 seconds ago   Up 2 seconds   80/tcp    nginx-cluster.1.c46oo5nzbchhbn6v07rft9d5b

bac746d67e4f   nginx:latest   "/docker-entrypoint.…"   4 seconds ago   Up 2 seconds   80/tcp    nginx-cluster.4.71wn1fz6yojbzunktws2qqslg

Step 23

[root@vm3 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES

f0985bc21d07   nginx:latest   "/docker-entrypoint.…"   7 seconds ago   Up 5 seconds   80/tcp    nginx-cluster.6.vvykeb5g04gihc5jg4la26903

d14c20577a85   nginx:latest   "/docker-entrypoint.…"   7 seconds ago   Up 5 seconds   80/tcp    nginx-cluster.3.qwccv5u0jlwfp5txp5593d8cc

10. Delete a container

(A second service, nginx-cluster-2, published on port 10081, is in use from here on; Swarm recreates any task container that is removed.)

Step 1

[root@vm1 ~]# docker rm $(docker stop nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48)

nginx-cluster-2.2.qs4tlxp1ilbtyd95xhkusjg48

Step 2

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES

719e0941efa4   nginx:latest   "/docker-entrypoint.…"   20 seconds ago   Up 15 seconds   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

11. Delete a service

Step 1

[root@vm1 ~]# docker service --help

Usage:  docker service COMMAND

Manage services

Commands:

  create      Create a new service

  inspect     Display detailed information on one or more services

  logs        Fetch the logs of a service or task

  ls          List services

  ps          List the tasks of one or more services

  rm          Remove one or more services

  rollback    Revert changes to a service's configuration

  scale       Scale one or multiple replicated services

  update      Update a service

Step 2

[root@vm1 ~]# docker service rm nginx-cluster

nginx-cluster

Step 3 

[root@vm1 ~]# docker service ls

ID             NAME              MODE         REPLICAS   IMAGE          PORTS

sfcq2vc5orxs    nginx-cluster-2        replicated       2/2          nginx:latest       *:10081->80/tcp

Step 4 

[root@vm1 ~]# docker ps

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Step 5

[root@vm1 ~]# docker ps -a

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Step 6

[root@vm1 ~]#

Step 7

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS              PORTS     NAMES

719e0941efa4   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up About a minute   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Step 8

[root@vm2 ~]# docker ps -a

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS              PORTS     NAMES

719e0941efa4   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up About a minute   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

12. Manually stop a container

Step 1

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES

719e0941efa4   nginx:latest   "/docker-entrypoint.…"   4 minutes ago   Up 4 minutes   80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Step 2

[root@vm2 ~]# docker stop nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Step 3 

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Step 4 

[root@vm2 ~]#

Step 5

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES

598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   19 seconds ago   Up 13 seconds   80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l

Step 6

[root@vm2 ~]# docker ps -a

CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                      PORTS     NAMES

598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   24 seconds ago   Up 18 seconds               80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l

719e0941efa4   nginx:latest   "/docker-entrypoint.…"   5 minutes ago    Exited (0) 24 seconds ago             nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Step 7

[root@vm2 ~]# docker service ls

ID             NAME              MODE         REPLICAS   IMAGE          PORTS

sfcq2vc5orxs    nginx-cluster-2       replicated        2/2          nginx:latest       *:10081->80/tcp

Step 8

[root@vm2 ~]# docker start nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Step 9

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS     NAMES

598c665b7a8e   nginx:latest   "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes   80/tcp    nginx-cluster-2.2.qmgbury0aixey2sozive6377l

719e0941efa4   nginx:latest   "/docker-entrypoint.…"   7 minutes ago   Up 1 second    80/tcp    nginx-cluster-2.2.qc5tkdzdm0rcs0v57ue52n688

Step 10

[root@vm2 ~]# docker service ls

ID             NAME              MODE         REPLICAS   IMAGE          PORTS

sfcq2vc5orxs     nginx-cluster-2      replicated        2/2          nginx:latest       *:10081->80/tcp

Step 11

[root@vm2 ~]# docker service rm $(docker service ls -q)

sfcq2vc5orxs

Step 12

[root@vm2 ~]# docker service ls

ID        NAME      MODE      REPLICAS   IMAGE     PORTS

Step 13

[root@vm2 ~]# docker ps

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Step 14

[root@vm2 ~]# docker ps -a

CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Step 15

[root@vm2 ~]#

13. Leave the cluster

Step 1

[root@vm3 ~]# docker swarm leave

Error response from daemon: You are attempting to leave the swarm on a node that is participating as a manager. The only way to restore a swarm that has lost consensus is to reinitialize it with `--force-new-cluster`. Use `--force` to suppress this message.

Step 2

[root@vm3 ~]# docker swarm leave --force

Step 3 

[root@vm3 ~]# docker node ls

Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

14. Delete the cluster

Step 1

[root@vm1 ~]# docker swarm leave --force

Node left the swarm.

Step 2

[root@vm1 ~]# docker node ls

Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

Part 2: Setting Up and Trying Out a Kubernetes Cluster

1. Install Docker

Step 1

wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Step 2

yum install -y docker-ce-18.06.0.ce-3.el7.x86_64

Step 3

systemctl start docker.service

Step 4

systemctl enable docker.service

2. Install Kubernetes

Step 1

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

Step 2

yum install -y kubelet-1.12.3

yum install -y kubeadm-1.12.3

yum install -y kubectl-1.12.3

3. Obtain the images

Step 1

(On a machine that can already pull from k8s.gcr.io, export all required images into a tarball, then copy it to each node:)

docker save -o k8s-1.12.3.tar k8s.gcr.io/kube-proxy:v1.12.3 k8s.gcr.io/kube-apiserver:v1.12.3 k8s.gcr.io/kube-controller-manager:v1.12.3 k8s.gcr.io/kube-scheduler:v1.12.3 k8s.gcr.io/etcd:3.2.24 k8s.gcr.io/coredns:1.2.2 quay.io/coreos/flannel:v0.10.0-amd64 k8s.gcr.io/pause:3.1

Step 2

docker load -i k8s-1.12.3.tar

4. Disable swap on the nodes

Step 1

swapoff -a

Step 2

sysctl -p

Step 3

vim /etc/fstab   # comment out the swap entry
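`swapoff -a` only disables swap until the next reboot; editing fstab makes it permanent by commenting out the swap entry. The same edit can be scripted with sed, demonstrated here on a copy containing a hypothetical swap line so nothing real is modified:

```shell
# A hypothetical fstab with one swap entry:
cat > /tmp/fstab.test <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF

# Prepend '#' to any uncommented line that mounts a swap filesystem:
sed -i 's|^\([^#].*[[:space:]]swap[[:space:]].*\)|#\1|' /tmp/fstab.test
grep '^#' /tmp/fstab.test
```

On a real node the target would be /etc/fstab (back it up first).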

5. Enable IP forwarding and iptables filtering of bridged traffic

Step 1

vim /etc/sysctl.d/k8s.conf

Step 2

net.bridge.bridge-nf-call-ip6tables = 1

Step 3

net.bridge.bridge-nf-call-iptables = 1

Step 4

net.ipv4.ip_forward = 1

Step 5

modprobe br_netfilter

Step 6

sysctl -p /etc/sysctl.d/k8s.conf
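The steps above can be sketched as one script: write the three settings to a sysctl drop-in, load the br_netfilter module, and apply. A temporary path is used here so the fragment runs without root; on a node the file would be /etc/sysctl.d/k8s.conf as in the steps:

```shell
conf=/tmp/k8s.conf    # real path on the node: /etc/sysctl.d/k8s.conf
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# On the node (root required):
#   modprobe br_netfilter
#   sysctl -p /etc/sysctl.d/k8s.conf
grep -c '= 1' "$conf"   # → 3
```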

6. Initialize the master node

Step 1

kubeadm init  --kubernetes-version=v1.12.3  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.6.6.110

7. Join worker nodes

Step 1

kubeadm join 10.6.6.192:6443 --token afbkdo.6335xh1w0lv7odbh --discovery-token-ca-cert-hash sha256:b9abe5a668609f0225c8bb3ecba3a70a0be370f90905fcce79a6d783bbd0aeef

8. Configure whether the master participates in scheduling

Step 1

(Allow pods to be scheduled on the master by removing its taint:)

kubectl taint nodes master.k8s node-role.kubernetes.io/master-

Step 2

(Re-add the taint to keep ordinary pods off the master:)

kubectl taint nodes master.k8s node-role.kubernetes.io/master=:NoSchedule

9. Enable insecure port access

Step 1

(These flags are edited in the kube-apiserver static pod manifest, /etc/kubernetes/manifests/kube-apiserver.yaml:)

- --secure-port=6443

Step 2

- --insecure-bind-address=0.0.0.0

Step 3

- --insecure-port=8080

10. Configure certificate renewal

Step 1

(These flags are edited in the kube-controller-manager static pod manifest, /etc/kubernetes/manifests/kube-controller-manager.yaml:)

- --kubeconfig=/etc/kubernetes/controller-manager.conf

Step 2

- --experimental-cluster-signing-duration=87600h0m0s

Step 3

- --feature-gates=RotateKubeletServerCertificate=true
