The default update strategy in k8s is the rolling update: a new ReplicaSet (RS) is created, which in turn creates new Pods; once a new Pod is scheduled and shows Running, a Pod under the old RS is terminated. This cycle repeats until every Pod has been replaced with the new version.
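This behavior is governed by the Deployment's update strategy stanza; a minimal sketch, with the Kubernetes default values:

```yaml
spec:
  strategy:
    type: RollingUpdate        # default strategy type for Deployments
    rollingUpdate:
      maxSurge: 25%            # how many extra Pods may exist above the desired count
      maxUnavailable: 25%      # how many Pods may be unavailable during the update
```

These two fields are what the `kubectl patch` command later in this section overrides.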
[root@master1 ~]# kubectl rollout history deployment myapp-v1
deployment.apps/myapp-v1
REVISION CHANGE-CAUSE
1 <none>
2 <none>
[root@master1 ~]#
[root@master1 ~]# kubectl rollout undo deployment myapp-v1 --to-revision=1
deployment.apps/myapp-v1 rolled back
[root@master1 ~]# kubectl rollout history deployment myapp-v1
deployment.apps/myapp-v1
REVISION CHANGE-CAUSE
2 <none>
3 <none>
[root@master1 ~]# kubectl describe deployment myapp-v1
Name: myapp-v1
Namespace: default
CreationTimestamp: Tue, 06 Sep 2022 11:00:16 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 3
Selector: app=myapp,version=v1
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=myapp
version=v1
Containers:
myapp:
Image: janakiramm/myapp:v1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: myapp-v1-8448d48797 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 22m deployment-controller Scaled up replica set myapp-v1-8448d48797 to 2
Normal ScalingReplicaSet 17m deployment-controller Scaled up replica set myapp-v1-8448d48797 to 4
Normal ScalingReplicaSet 15m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 3
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set myapp-v1-69d5787956 to 1
Normal ScalingReplicaSet 11m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 2
Normal ScalingReplicaSet 11m deployment-controller Scaled up replica set myapp-v1-69d5787956 to 2
Normal ScalingReplicaSet 10m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 1
Normal ScalingReplicaSet 10m deployment-controller Scaled up replica set myapp-v1-69d5787956 to 3
Normal ScalingReplicaSet 10m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 0
Normal ScalingReplicaSet 2m16s deployment-controller Scaled up replica set myapp-v1-8448d48797 to 1
Normal ScalingReplicaSet 95s (x4 over 2m2s) deployment-controller (combined from similar events): Scaled up replica set myapp-v1-8448d48797 to 3
Normal ScalingReplicaSet 9s deployment-controller Scaled down replica set myapp-v1-69d5787956 to 0
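The `25% max unavailable, 25% max surge` defaults shown in RollingUpdateStrategy above are resolved against the desired replica count, with maxUnavailable rounded down and maxSurge rounded up; for 3 replicas that gives 0 and 1, which matches the one-at-a-time scaling visible in the events. A quick sketch of the rounding:

```shell
# Integer arithmetic mirroring how the 25% percentages resolve for replicas=3.
replicas=3
pct=25
max_unavailable=$(( replicas * pct / 100 ))      # rounds down -> 0
max_surge=$(( (replicas * pct + 99) / 100 ))     # rounds up   -> 1
echo "maxUnavailable=$max_unavailable maxSurge=$max_surge"
```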
[root@master1 ~]# kubectl describe deployment myapp-v1
Name: myapp-v1
Namespace: default
CreationTimestamp: Tue, 06 Sep 2022 11:00:16 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 3
Selector: app=myapp,version=v1
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=myapp
version=v1
Containers:
myapp:
Image: janakiramm/myapp:v1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: myapp-v1-8448d48797 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 35m deployment-controller Scaled up replica set myapp-v1-8448d48797 to 2
Normal ScalingReplicaSet 30m deployment-controller Scaled up replica set myapp-v1-8448d48797 to 4
Normal ScalingReplicaSet 28m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 3
Normal ScalingReplicaSet 24m deployment-controller Scaled up replica set myapp-v1-69d5787956 to 1
Normal ScalingReplicaSet 24m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 2
Normal ScalingReplicaSet 24m deployment-controller Scaled up replica set myapp-v1-69d5787956 to 2
Normal ScalingReplicaSet 23m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 1
Normal ScalingReplicaSet 23m deployment-controller Scaled up replica set myapp-v1-69d5787956 to 3
Normal ScalingReplicaSet 23m deployment-controller Scaled down replica set myapp-v1-8448d48797 to 0
Normal ScalingReplicaSet 15m deployment-controller Scaled up replica set myapp-v1-8448d48797 to 1
Normal ScalingReplicaSet 14m (x4 over 15m) deployment-controller (combined from similar events): Scaled up replica set myapp-v1-8448d48797 to 3
Normal ScalingReplicaSet 13m deployment-controller Scaled down replica set myapp-v1-69d5787956 to 0
[root@master1 ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
myapp-v1-8448d48797-7cn4p 1/1 Running 0 15m app=myapp,pod-template-hash=8448d48797,version=v1
myapp-v1-8448d48797-7mhxk 1/1 Running 0 15m app=myapp,pod-template-hash=8448d48797,version=v1
myapp-v1-8448d48797-fkb46 1/1 Running 0 15m app=myapp,pod-template-hash=8448d48797,version=v1
[root@master1 ~]# kubectl get pods -l app=myapp -w
NAME READY STATUS RESTARTS AGE
myapp-v1-8448d48797-phjwf 1/1 Running 0 4m25s
myapp-v1-8448d48797-r5sn8 1/1 Running 0 9m40s
myapp-v1-8448d48797-vz4jj 1/1 Running 0 9m37s
# The image was updated; watch the new Pods being created
myapp-v1-69d5787956-x2vtr 0/1 Pending 0 0s
myapp-v1-69d5787956-x2vtr 0/1 Pending 0 0s
myapp-v1-69d5787956-x2vtr 0/1 ContainerCreating 0 4s
myapp-v1-69d5787956-x2vtr 0/1 ContainerCreating 0 13s
myapp-v1-69d5787956-x2vtr 1/1 Running 0 24s
myapp-v1-8448d48797-phjwf 1/1 Terminating 0 6m2s
myapp-v1-69d5787956-vcjsb 0/1 Pending 0 0s
myapp-v1-69d5787956-vcjsb 0/1 Pending 0 0s
myapp-v1-69d5787956-vcjsb 0/1 ContainerCreating 0 0s
myapp-v1-69d5787956-vcjsb 0/1 ContainerCreating 0 13s
myapp-v1-8448d48797-phjwf 0/1 Terminating 0 6m16s
myapp-v1-8448d48797-phjwf 0/1 Terminating 0 6m19s
myapp-v1-8448d48797-phjwf 0/1 Terminating 0 6m20s
myapp-v1-69d5787956-vcjsb 1/1 Running 0 24s
myapp-v1-8448d48797-vz4jj 1/1 Terminating 0 11m
myapp-v1-69d5787956-qq58n 0/1 Pending 0 5s
myapp-v1-69d5787956-qq58n 0/1 Pending 0 5s
myapp-v1-69d5787956-qq58n 0/1 ContainerCreating 0 11s
myapp-v1-69d5787956-qq58n 0/1 ContainerCreating 0 25s
myapp-v1-8448d48797-vz4jj 0/1 Terminating 0 12m
myapp-v1-8448d48797-vz4jj 0/1 Terminating 0 12m
myapp-v1-8448d48797-vz4jj 0/1 Terminating 0 12m
myapp-v1-69d5787956-qq58n 1/1 Running 0 31s
myapp-v1-8448d48797-r5sn8 1/1 Terminating 0 12m
myapp-v1-8448d48797-r5sn8 0/1 Terminating 0 12m
myapp-v1-8448d48797-r5sn8 0/1 Terminating 0 12m
myapp-v1-8448d48797-r5sn8 0/1 Terminating 0 12m
## Rolled back to revision v1; watch the new Pods being created
myapp-v1-8448d48797-7cn4p 0/1 Pending 0 0s
myapp-v1-8448d48797-7cn4p 0/1 Pending 0 0s
myapp-v1-8448d48797-7cn4p 0/1 ContainerCreating 0 0s
myapp-v1-8448d48797-7cn4p 0/1 ContainerCreating 0 8s
myapp-v1-8448d48797-7cn4p 1/1 Running 0 12s
myapp-v1-69d5787956-qq58n 1/1 Terminating 0 8m53s
myapp-v1-8448d48797-7mhxk 0/1 Pending 0 0s
myapp-v1-8448d48797-7mhxk 0/1 Pending 0 2s
myapp-v1-8448d48797-7mhxk 0/1 ContainerCreating 0 3s
myapp-v1-8448d48797-7mhxk 0/1 ContainerCreating 0 15s
myapp-v1-69d5787956-qq58n 0/1 Terminating 0 9m9s
myapp-v1-69d5787956-qq58n 0/1 Terminating 0 9m10s
myapp-v1-69d5787956-qq58n 0/1 Terminating 0 9m10s
myapp-v1-8448d48797-7mhxk 1/1 Running 0 22s
myapp-v1-69d5787956-vcjsb 1/1 Terminating 0 9m43s
myapp-v1-8448d48797-fkb46 0/1 Pending 0 0s
myapp-v1-8448d48797-fkb46 0/1 Pending 0 1s
myapp-v1-8448d48797-fkb46 0/1 ContainerCreating 0 12s
myapp-v1-69d5787956-vcjsb 0/1 Terminating 0 10m
myapp-v1-69d5787956-vcjsb 0/1 Terminating 0 10m
myapp-v1-8448d48797-fkb46 0/1 ContainerCreating 0 45s
myapp-v1-69d5787956-vcjsb 0/1 Terminating 0 10m
myapp-v1-8448d48797-fkb46 1/1 Running 0 59s
myapp-v1-69d5787956-x2vtr 1/1 Terminating 0 11m
myapp-v1-69d5787956-x2vtr 0/1 Terminating 0 11m
myapp-v1-69d5787956-x2vtr 0/1 Terminating 0 11m
myapp-v1-69d5787956-x2vtr 0/1 Terminating 0 11m
[root@master1 deployment]# kubectl describe deployment myapp-v1 -n default
Name: myapp-v1
Namespace: default
CreationTimestamp: Tue, 06 Sep 2022 11:00:16 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 3
Selector: app=myapp,version=v1
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=myapp
version=v1
Containers:
myapp:
Image: janakiramm/myapp:v1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: myapp-v1-8448d48797 (3/3 replicas created)
Events: <none>
[root@master1 deployment]# kubectl patch deployment myapp-v1 -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":1}}}}' -n default
deployment.apps/myapp-v1 patched
[root@master1 deployment]# kubectl describe deployment myapp-v1 -n default
Name: myapp-v1
Namespace: default
CreationTimestamp: Tue, 06 Sep 2022 11:00:16 +0800
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 3
Selector: app=myapp,version=v1
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=myapp
version=v1
Containers:
myapp:
Image: janakiramm/myapp:v1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: myapp-v1-8448d48797 (3/3 replicas created)
Events: <none>
As shown above, RollingUpdateStrategy is now 1 max unavailable, 1 max surge — exactly the rollingUpdate strategy we just patched in. Since the desired replica count is 3, these values mean that during an update there will never be fewer than 2 available Pods and never more than 4 Pods in total.
This is how the rolling update is tuned: by setting the fields behind RollingUpdateStrategy in the Deployment spec.
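Under the patched values (replicas=3, maxUnavailable=1, maxSurge=1), the Pod-count bounds work out as:

```shell
# Pod-count bounds during a rolling update for the patched strategy.
replicas=3
max_unavailable=1
max_surge=1
min_available=$(( replicas - max_unavailable ))   # never fewer than 2 Pods available
max_total=$(( replicas + max_surge ))             # never more than 4 Pods in total
echo "min=$min_available max=$max_total"
```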
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portal
  namespace: ms
spec:
  replicas: 1
  selector:
    matchLabels:
      project: ms
      app: portal
  template:
    metadata:
      labels:
        project: ms
        app: portal
    spec:
      containers:
      - name: portal
        image: xianchao/portal:v1
        imagePullPolicy: Always
        ports:
        - protocol: TCP
          containerPort: 8080
        resources:            # resource quota
          limits:             # upper bound on CPU and memory the container may use
            cpu: 1
            memory: 1Gi
          requests:           # minimum resources required before the Pod can be scheduled
            cpu: 0.5
            memory: 1Gi
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
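Assuming the manifest above is saved as portal.yaml (the filename is an assumption), it can be applied and observed with the usual commands:

```shell
kubectl apply -f portal.yaml
kubectl rollout status deployment/portal -n ms
kubectl get pods -n ms -l app=portal -w
```

Note that with initialDelaySeconds: 60, the rollout will not report the Pod as ready until at least a minute after it starts.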
livenessProbe: liveness probe
Used to determine whether the container is still alive, i.e. whether the Pod is in the Running state. If the liveness probe detects that the container is unhealthy, the kubelet kills the container and then acts according to the container's restart policy. If a container does not define a livenessProbe, the kubelet treats the probe as always succeeding.
tcpSocket:
  port: 8080             # check whether port 8080 is open
initialDelaySeconds: 60  # run the first check 60s after the Pod starts
periodSeconds: 10        # after the first check, probe every 10s
readinessProbe: readiness probe
Sometimes an application is temporarily unable to accept requests — for example, the Pod is already Running but the application inside the container has not finished starting. Without a readinessProbe, Kubernetes would assume the Pod can already handle requests, even though we know an application that has not finished starting cannot serve users. To keep Kubernetes from routing traffic to such a Pod, use a readinessProbe.
A readinessProbe and a livenessProbe can use the same probing methods; they differ in how the Pod is handled on failure. A failing readinessProbe removes the Pod's IP:Port from the corresponding Endpoints list, whereas a failing livenessProbe kills the container and acts according to the Pod's restart policy. The readinessProbe checks whether the container is ready to serve; if not, Kubernetes does not forward traffic to the Pod. While the Pod is running, Kubernetes keeps probing port 8080 every 10s.
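Besides tcpSocket, probes can also use httpGet or exec handlers. A hedged sketch of an equivalent pair — the /healthz path and the nc command are assumptions for illustration, not taken from the manifest above:

```yaml
readinessProbe:
  httpGet:
    path: /healthz       # assumed health endpoint exposed by the application
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
livenessProbe:
  exec:
    command: ["sh", "-c", "nc -z 127.0.0.1 8080"]   # assumed shell-level port check
  initialDelaySeconds: 60
  periodSeconds: 10
```

An httpGet probe succeeds on any 2xx/3xx response; an exec probe succeeds when the command exits 0.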