The goal of this article is to stand up a three-node ZooKeeper cluster on Kubernetes. Because ZooKeeper needs persistent storage, we first prepare three PersistentVolumes (PVs), one for each pod's data directory. (Note that PVs are cluster-scoped objects, so the namespace field in the manifests below is ignored by Kubernetes.) Create the backing directories on the node first, then write the following zk-pv.yaml.
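The hostPath directories referenced by these PVs must already exist on the node. A minimal sketch of creating them (assuming the /data/share/pv base path used throughout this article):

```shell
# Create the backing directories for the three ZooKeeper PVs.
# Run on the node that will host the hostPath volumes (needs write access to /data).
for i in 01 02 03; do
  mkdir -p "/data/share/pv/zk${i}"
done
```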
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk01
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk01
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk02
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk02
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: k8s-pv-zk03
  namespace: tools
  labels:
    app: zk
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/share/pv/zk03
  persistentVolumeReclaimPolicy: Recycle
---
Create the PVs with the following command:
kubectl create -f zk-pv.yaml   (or: kubectl apply -f zk-pv.yaml)
Example:
[root@master ~]# mkdir chaitc-zookeeper
[root@master ~]# cd chaitc-zookeeper/
[root@master chaitc-zookeeper]# vim zk-pv.yaml
[root@master chaitc-zookeeper]# kubectl create -f zk-pv.yaml
persistentvolume/k8s-pv-zk01 created
persistentvolume/k8s-pv-zk02 created
persistentvolume/k8s-pv-zk03 created
Verify:
kubectl get pv -o wide
(PersistentVolumes are cluster-scoped, so no -n flag is needed.)

Next, deploy the three ZooKeeper nodes with a StatefulSet, using the PVs created above as storage.
zk.yaml
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
    - name: server
      port: 2888
    - name: leader-election
      port: 3888
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: tools
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 31811
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk # has to match .spec.template.metadata.labels
  serviceName: "zk-hs"
  replicas: 3 # by default is 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk # has to match .spec.selector.matchLabels
    spec:
      containers:
        - name: zk
          imagePullPolicy: Always
          image: chaotingge/zookeeper:kubernetes-zookeeper1.0-3.4.10
          resources:
            requests:
              memory: "500Mi"
              cpu: "0.5"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: datadir
        annotations:
          volume.beta.kubernetes.io/storage-class: "anything"
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
Deploy with kubectl apply -f zk.yaml.
Note: if the following warning appears, the manifest needs to be updated:
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
[root@master zk01]# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.0", GitCommit:"c2b5237ccd9c0f1d600d3072634ca66cefdf272f", GitTreeState:"clean", BuildDate:"2021-08-04T17:57:25Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
On this cluster version, the PodDisruptionBudget must use apiVersion: policy/v1.
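Only the apiVersion line needs to change; the spec fields used here (selector, maxUnavailable) are identical in policy/v1. A sketch of the updated PDB block in zk.yaml:

```yaml
apiVersion: policy/v1   # was policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
```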
Example (the first apply below also fails because nodePort was initially typed as 21811, outside the default NodePort range of 30000-32767; after correcting it to 31811, as in the manifest above, the second apply succeeds):
[root@master chaitc-zookeeper]# vim zk.yaml
[root@master chaitc-zookeeper]# kubectl apply -f zk.yaml
service/zk-hs created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
The Service "zk-cs" is invalid: spec.ports[0].nodePort: Invalid value: 21811: provided port is not in the valid range. The range of valid ports is 30000-32767
[root@master chaitc-zookeeper]# vim zk.yaml
[root@master chaitc-zookeeper]# kubectl apply -f zk.yaml
service/zk-hs unchanged
service/zk-cs created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/zk-pdb configured
statefulset.apps/zk configured
[root@master chaitc-zookeeper]# kubectl get pods -n tools
NAME   READY   STATUS    RESTARTS   AGE
zk-0   0/1     Pending   0          82s
zk-1   0/1     Pending   0          82s
zk-2   0/1     Pending   0          82s
[root@master chaitc-zookeeper]# kubectl get pods -n tools
NAME   READY   STATUS    RESTARTS   AGE
zk-0   0/1     Pending   0          86s
zk-1   0/1     Pending   0          86s
zk-2   0/1     Pending   0          86s
[root@master chaitc-zookeeper]# kubectl describe pod zk-0 -n tools
Name:           zk-0
Namespace:      tools
Priority:       0
Node:           <none>
Labels:         app=zk
                controller-revision-hash=zk-78bbbb488c
                statefulset.kubernetes.io/pod-name=zk-0
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/zk
Containers:
  zk:
    Image:       leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
    Ports:       2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      sh
      -c
      start-zookeeper --servers=3 --data_dir=/var/lib/zookeeper/data --data_log_dir=/var/lib/zookeeper/data/log --conf_dir=/opt/zookeeper/conf --client_port=2181 --election_port=3888 --server_port=2888 --tick_time=2000 --init_limit=10 --sync_limit=5 --heap=512M --max_client_cnxns=60 --snap_retain_count=3 --purge_interval=12 --max_session_timeout=40000 --min_session_timeout=4000 --log_level=INFO
    Requests:
      cpu:        500m
      memory:     500Mi
    Liveness:     exec [sh -c zookeeper-ready 2181] delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:    exec [sh -c zookeeper-ready 2181] delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/lib/zookeeper from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tzp6v (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-zk-0
    ReadOnly:   false
  kube-api-access-tzp6v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  108s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  107s  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  30s   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
Because this is a single-node deployment, the master node carries a taint:
[root@master chaitc-zookeeper]# kubectl get no -o yaml | grep taint -A 5
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  status:
    addresses:
    - address: 172.24.40.43
[root@master chaitc-zookeeper]# kubectl get nodes
NAME                STATUS   ROLES                  AGE    VERSION
master.chaitc.xyz   Ready    control-plane,master   132d   v1.22.4
Run kubectl describe node <node>; the output includes the line:
Taints:             node-role.kubernetes.io/master:NoSchedule
Remove the taint so pods can schedule onto the master:
[root@master chaitc-zookeeper]# kubectl taint node master.chaitc.xyz node-role.kubernetes.io/master-
node/master.chaitc.xyz untainted
[root@master chaitc-zookeeper]# kubectl describe node master.chaitc.xyz
The output now shows:
Taints: <none>
Checking the pods again:
[root@master chaitc-zookeeper]# kubectl get po -n tools
NAME   READY   STATUS    RESTARTS   AGE
zk-0   1/1     Running   0          15m
zk-1   1/1     Running   0          15m
zk-2   0/1     Pending   0          15m
Inspect the Pending pod:
[root@master chaitc-zookeeper]# kubectl describe po zk-2 -n tools
Name:           zk-2
Namespace:      tools
Priority:       0
Node:           <none>
Labels:         app=zk
                controller-revision-hash=zk-78bbbb488c
                statefulset.kubernetes.io/pod-name=zk-2
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  StatefulSet/zk
Containers:
  zk:
    Image:       leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
    Ports:       2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP
    Command:
      sh
      -c
      start-zookeeper --servers=3 --data_dir=/var/lib/zookeeper/data --data_log_dir=/var/lib/zookeeper/data/log --conf_dir=/opt/zookeeper/conf --client_port=2181 --election_port=3888 --server_port=2888 --tick_time=2000 --init_limit=10 --sync_limit=5 --heap=512M --max_client_cnxns=60 --snap_retain_count=3 --purge_interval=12 --max_session_timeout=40000 --min_session_timeout=4000 --log_level=INFO
    Requests:
      cpu:        500m
      memory:     500Mi
    Liveness:     exec [sh -c zookeeper-ready 2181] delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:    exec [sh -c zookeeper-ready 2181] delay=10s timeout=5s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/lib/zookeeper from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l9bbx (ro)
Conditions:
  Type          Status
  PodScheduled  False
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-zk-2
    ReadOnly:   false
  kube-api-access-l9bbx:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  16m   default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
  Warning  FailedScheduling  16m   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  15m   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.
  Warning  FailedScheduling  38s   default-scheduler  0/1 nodes are available: 1 Insufficient cpu.
Cause: the node is out of resources:
1 Insufficient cpu, 1 node(s) had taints that the pod didn't tolerate.
The single node does not have enough CPU left for the third replica; adding a worker node resolves this.
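If adding a node is not an option, an alternative (not what the author did here, just a sketch) is to shrink the per-pod CPU request in zk.yaml so all three replicas fit on one small node:

```yaml
          resources:
            requests:
              memory: "500Mi"
              cpu: "0.25"   # halved from 0.5 so three replicas fit on a single node
```

Lower requests only affect scheduling guarantees, not the actual CPU the pods can use, so this is reasonable for a test environment.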
Verification:
[root@master chaitc-zookeeper]# kubectl get svc -n tools
NAME    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
zk-cs   NodePort    10.108.177.133   <none>        2181:31811/TCP      18m
zk-hs   ClusterIP   None             <none>        2888/TCP,3888/TCP   18m
[root@master chaitc-zookeeper]# ss -tan | grep 31811
LISTEN     0      128          *:31811                    *:*
[root@master chaitc-zookeeper]# kubectl exec -it zk-1 -n tools -- /bin/sh
# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 14:50 ? 00:00:00 sh -c start-zookeeper --servers=3 --data_dir=/var/lib/zookeeper/data --data_log_dir=/var/lib/zookeeper/data/l
root 6 1 0 14:50 ? 00:00:00 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INF
root 572 0 0 14:54 pts/0 00:00:00 /bin/sh
root 597 572 0 14:54 pts/0 00:00:00 ps -ef
# env
ZK_CS_PORT_2181_TCP_PORT=2181
ZK_CS_SERVICE_HOST=10.108.177.133
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
ZK_CS_PORT_2181_TCP_PROTO=tcp
HOSTNAME=zk-1
ZK_CS_SERVICE_PORT_CLIENT=2181
HOME=/root
ZK_CS_SERVICE_PORT=2181
ZK_CS_PORT=tcp://10.108.177.133:2181
ZK_DATA_LOG_DIR=/var/lib/zookeeper/log
ZK_CS_PORT_2181_TCP=tcp://10.108.177.133:2181
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ZK_LOG_DIR=/var/log/zookeeper
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
ZK_USER=zookeeper
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_HOST=10.96.0.1
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
PWD=/
ZK_CS_PORT_2181_TCP_ADDR=10.108.177.133
ZK_DATA_DIR=/var/lib/zookeeper/data
# cd /usr/bin
Check the ZooKeeper status:
# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/bin/../etc/zookeeper/zoo.cfg
Mode: leader
# exit
[root@master chaitc-zookeeper]#
Check the status of all three ZooKeeper nodes:
for i in 0 1 2; do kubectl exec zk-$i -n tools -- /usr/bin/zkServer.sh status; done
(The -- separator is the current kubectl exec syntax; omitting it is deprecated.)
PodDisruptionBudget
Kubernetes lets you create a PodDisruptionBudget (PDB) object per application. A PDB limits how many pods of a replicated application may be down at the same time due to voluntary disruptions.
A PodDisruptionBudget is configured with one of two parameters (minAvailable and maxUnavailable are mutually exclusive; only one may be set):
minAvailable: the minimum number of pods that must remain running, as an absolute count or a percentage of the total.
maxUnavailable: the maximum number of pods that may be unavailable, as an absolute count or a percentage of the total.
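For comparison with the maxUnavailable: 1 budget used above, an equivalent budget expressed with minAvailable might look like this (a sketch, using the policy/v1 API):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: tools
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2   # with 3 replicas this is equivalent to maxUnavailable: 1
```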
Tear down ZooKeeper:
kubectl delete StatefulSet zk -n tools
kubectl delete PodDisruptionBudget zk-pdb -n tools
kubectl delete svc zk-cs -n tools
kubectl delete svc zk-hs -n tools
kubectl delete pvc datadir-zk-0 -n tools
kubectl delete pvc datadir-zk-1 -n tools
kubectl delete pvc datadir-zk-2 -n tools
kubectl delete pv k8s-pv-zk01
kubectl delete pv k8s-pv-zk02
kubectl delete pv k8s-pv-zk03
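Deleting the PVs does not necessarily remove the data on disk: with hostPath volumes the files stay on the node unless the Recycle reclaim scrubber ran. For a clean slate, remove the directories manually on the node (paths from zk-pv.yaml):

```shell
# Remove the hostPath data left behind by the ZooKeeper PVs.
rm -rf /data/share/pv/zk01 /data/share/pv/zk02 /data/share/pv/zk03
```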
Tear down Kafka:
kubectl delete StatefulSet kafka -n tools
kubectl delete PodDisruptionBudget kafka-pdb -n tools
kubectl delete Service kafka-cs -n tools
kubectl delete Service kafka-hs -n tools
kubectl delete pvc datadir-kafka-0 -n tools
kubectl delete pv k8s-pv-kafka01
kubectl delete pv k8s-pv-kafka02
kubectl delete pv k8s-pv-kafka03