

The fluxctl CLI can be downloaded as a prebuilt binary from its GitHub releases page. Move it to /usr/bin and make it executable:
[root@m1 /usr/local/src]# mv fluxctl_linux_amd64 /usr/bin/fluxctl
[root@m1 ~]# chmod a+x /usr/bin/fluxctl
[root@m1 ~]# fluxctl version
1.21.0
[root@m1 ~]#
Create a namespace for Flux, then deploy the Flux operator into the Kubernetes cluster:
[root@m1 ~]# kubectl create ns flux
namespace/flux created
[root@m1 ~]# git clone https://github.com/fluxcd/flux.git
[root@m1 ~]# cd flux/
Before deploying Flux, update a few Git-related settings so they point at your own repository (user name, email, URL, and so on):
[root@m1 ~/flux]# vim deploy/flux-deployment.yaml # edit the following options
...
# Replace the following URL to change the Git repository used by Flux.
# HTTP basic auth credentials can be supplied using environment variables:
# https://$(GIT_AUTHUSER):$(GIT_AUTHKEY)@github.com/user/repository.git
- --git-url=git@github.com:fluxcd/flux-get-started
- --git-branch=master
# Include this if you want to restrict the manifests considered by flux
# to those under the following relative paths in the git repository
# - --git-path=subdir1,subdir2
- --git-label=flux-sync
- --git-user=Flux automation
- --git-email=flux@example.com
After making these changes, deploy:
[root@m1 ~/flux]# kubectl apply -f deploy
[root@m1 ~/flux]# kubectl get all -n flux
NAME READY STATUS RESTARTS AGE
pod/flux-65479fb87-k5zxb 1/1 Running 0 7m20s
pod/memcached-c86cd995d-5gl5p 1/1 Running 0 44m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/memcached ClusterIP 10.106.229.44 <none> 11211/TCP 44m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/flux 1/1 1 1 44m
deployment.apps/memcached 1/1 1 1 44m
NAME DESIRED CURRENT READY AGE
replicaset.apps/flux-65479fb87 1 1 1 7m20s
replicaset.apps/memcached-c86cd995d 1 1 1 44m
[root@m1 ~]#
Alternatively, Flux can also be deployed from the command line:
fluxctl install \
--git-user=xxx \
--git-email=xxx@xxx \
--git-url=git@github.com:xxx/smdemo \
--namespace=flux | kubectl apply -f -
Since we are using a private repository, an extra step is needed: add the Git host's key to the ~/.ssh/known_hosts file inside the Flux daemon container. The steps are as follows:
[root@m1 ~]# kubectl exec -n flux flux-65479fb87-k5zxb -ti -- \
env GITHOST="gitee.com" GITREPO="git@gitee.com:demo_focus/service-mesh-demo.git" PS1="container$ " /bin/sh
container$ ssh-keyscan $GITHOST >> ~/.ssh/known_hosts # add the host key
container$ git clone $GITREPO # verify that the repository can be cloned
Cloning into 'service-mesh-demo'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 10 (delta 2), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (10/10), done.
Resolving deltas: 100% (2/2), done.
container$
After Flux is deployed, add the deploy key it generated to the Git repository (with read/write permission). Get the deploy key with:
[root@m1 ~]# fluxctl identity --k8s-fwd-ns flux
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDsyfN+x4jen+Ikpff8LszXLFTwXSQviFxCrIx7uMy7LJM5uUEsDdFs/DZL1g9h/YnkfLJlFrxOCJ+tuqPrXuj3ceEFfal4T3YWiDwf1RsGJvJd6ED5APjsxyu5gkj9LvkOB8OlYwPlS8Pygv997n93gtH7rFbocK5EQpbhhBlue3Or2ufI/KBxDCx6xLaH9U/16EEi+BDVSsCetGIQI+TSRqqpN30+Y8paS6iCYajKTubKv7x44WaVFgSDT9Y/OycUq1LupJoVoD8/5Y2leUMaF9dhMbQgoc8zjh8q2HF2n97mAvgYWJosjeIcAKS82C0zPlPupPevNedAhhEb82svPWh7BI4N4XziA06ypAEmfEz3JuUTTeABpF2hEoV4UEagkSyS8T3xhfdjigVcKiBW5AqRsRyx+ffW4WREHjARSC8CKl0Oj00a9FOGoNsDKkFuTbJePMcGdgvjs61UlgUUjdQFfHoZz2UVo2OEynnCpY7hj5SrEudkujRon4HEhJE= root@flux-7f5f7776df-l65lx
[root@m1 ~]#
Copy the key content and add it to the Git repository as a deploy key. Next, create a demo namespace and give it the istio-injection=enabled label so that Istio can inject sidecar proxies:
[root@m1 ~]# kubectl create ns demo
namespace/demo created
[root@m1 ~]# kubectl label namespace demo istio-injection=enabled
namespace/demo labeled
[root@m1 ~]#
Clone the Git repository locally and create a config directory inside it:
[root@m1 ~]# git clone git@gitee.com:demo_focus/service-mesh-demo.git
[root@m1 ~]# cd service-mesh-demo/
[root@m1 ~/service-mesh-demo]# mkdir config
Create the service manifests in that directory:
[root@m1 ~/service-mesh-demo]# vim config/httpbin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
  namespace: demo
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: demo
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
[root@m1 ~/service-mesh-demo]# vim config/sleep.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep
  namespace: demo
---
apiVersion: v1
kind: Service
metadata:
  name: sleep
  namespace: demo
  labels:
    app: sleep
spec:
  ports:
  - port: 80
    name: http
  selector:
    app: sleep
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
  namespace: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      serviceAccountName: sleep
      containers:
      - name: sleep
        image: governmentpaas/curl-ssl
        command: ["/bin/sleep", "3650d"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /etc/sleep/tls
          name: secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: sleep-secret
          optional: true
Commit the manifests and push them to the remote repository to update the git repo:
[root@m1 ~/service-mesh-demo]# git add .
[root@m1 ~/service-mesh-demo]# git commit -m "commit yaml"
[root@m1 ~/service-mesh-demo]# git push origin master
Run the following command to make Flux sync the repository changes and deploy them automatically:
[root@m1 ~]# fluxctl sync --k8s-fwd-ns flux
Synchronizing with ssh://git@gitee.com/demo_focus/service-mesh-demo
Revision of master to apply is 49bc37e
Waiting for 49bc37e to be applied ...
Done.
[root@m1 ~]#
[root@m1 ~]# kubectl get all -n demo
NAME READY STATUS RESTARTS AGE
pod/httpbin-74fb669cc6-v9lc5 2/2 Running 0 36s
pod/sleep-854565cb79-mcmnb 2/2 Running 0 40s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/httpbin ClusterIP 10.105.17.57 <none> 8000/TCP 36s
service/sleep ClusterIP 10.103.14.114 <none> 80/TCP 40s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/httpbin 1/1 1 1 36s
deployment.apps/sleep 1/1 1 1 40s
NAME DESIRED CURRENT READY AGE
replicaset.apps/httpbin-74fb669cc6 1 1 1 36s
replicaset.apps/sleep-854565cb79 1 1 1 40s
[root@m1 ~]#
Verify that the services can reach each other:
[root@m1 ~]# kubectl exec -it -n demo sleep-854565cb79-mcmnb -c sleep -- curl http://httpbin.demo:8000/ip
{
"origin": "127.0.0.1"
}
[root@m1 ~]#
Next, install Flagger. First add its Helm repository:
[root@m1 ~]# helm repo add flagger https://flagger.app
"flagger" has been added to your repositories
[root@m1 ~]#
Create Flagger's CRDs:
[root@m1 ~]# kubectl apply -f https://raw.githubusercontent.com/fluxcd/flagger/main/artifacts/flagger/crd.yaml
[root@m1 ~]# kubectl get crd |grep flagger
alertproviders.flagger.app 2020-12-23T14:40:00Z
canaries.flagger.app 2020-12-23T14:40:00Z
metrictemplates.flagger.app 2020-12-23T14:40:00Z
[root@m1 ~]#
Deploy Flagger into the istio-system namespace with Helm:
[root@m1 ~]# helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set meshProvider=istio \
--set metricsServer=http://prometheus.istio-system:9090
Optionally, you can add a Slack webhook to Flagger so that it sends notifications to a Slack channel:
[root@m1 ~]# helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set crd.create=false \
--set slack.url=https://hooks.slack.com/services/xxxxxx \
--set slack.channel=general \
--set slack.user=flagger
Flagger also provides a Grafana dashboard, which can be installed the same way:
[root@m1 ~]# helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=admin
Once this is done, confirm that Flagger is deployed:
[root@m1 ~]# kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
flagger-b68b578b-5f8bh 1/1 Running 0 7m50s
flagger-grafana-77b8c8df65-7vv89 1/1 Running 0 71s
...
Create an ingress gateway for the mesh:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
EOF
In addition, we can deploy a load-testing tool; this step is also optional:
[root@m1 ~]# kubectl create ns test
namespace/test created
[root@m1 ~]# kubectl apply -k https://github.com/fluxcd/flagger/tree/main/kustomize/tester
[root@m1 ~]# kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
flagger-loadtester-64695f854f-5hsmg 1/1 Running 0 114s
[root@m1 ~]#
If the approach above is slow, you can also clone the repository and deploy the tester from the local copy:
[root@m1 ~]# cd /usr/local/src
[root@m1 /usr/local/src]# git clone https://github.com/fluxcd/flagger.git
[root@m1 /usr/local/src]# kubectl apply -k flagger/kustomize/tester/
Create a HorizontalPodAutoscaler for httpbin:
[root@m1 ~]# kubectl apply -n demo -f - <<EOF
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: httpbin
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: httpbin
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      # scale up if usage is above
      # 99% of the requested CPU (100m)
      targetAverageUtilization: 99
EOF
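The HPA above keeps httpbin between 2 and 4 replicas based on CPU usage. As a rough illustration of how the controller sizes the deployment, here is the standard HPA formula in Python (an illustrative sketch, not Kubernetes source; the function name is my own):

```python
import math

def desired_replicas(current_replicas, current_utilization,
                     target_utilization, min_replicas, max_replicas):
    """Standard HPA formula: ceil(current * currentUtil / targetUtil),
    clamped to [minReplicas, maxReplicas]."""
    desired = math.ceil(current_replicas * current_utilization / target_utilization)
    return max(min_replicas, min(max_replicas, desired))

# With the manifest above (target 99%, min 2, max 4):
print(desired_replicas(2, 150, 99, 2, 4))  # heavy load -> 4
print(desired_replicas(4, 20, 99, 2, 4))   # idle -> back down to 2
```

This is why the canary analysis later runs with at least 2 httpbin pods even when traffic is low.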
Create the metric used to validate the canary release; Flagger shifts traffic gradually based on this metric:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: latency
  namespace: istio-system
spec:
  provider:
    type: prometheus
    address: http://prometheus.istio-system:9090
  query: |
    histogram_quantile(
      0.99,
      sum(
        rate(
          istio_request_duration_milliseconds_bucket{
            reporter="destination",
            destination_workload_namespace="{{ namespace }}",
            destination_workload=~"{{ target }}"
          }[{{ interval }}]
        )
      ) by (le)
    )
EOF
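The query above computes the P99 request latency from Istio's histogram buckets. To show what histogram_quantile is doing, here is a simplified Python re-implementation of Prometheus's bucket interpolation (illustrative only; the function name is my own):

```python
def histogram_quantile(q, buckets):
    """buckets: sorted list of (upper_bound_le, cumulative_count).
    Finds the bucket containing the q-th rank and linearly
    interpolates inside it, mirroring Prometheus's histogram_quantile."""
    total = buckets[-1][1]
    rank = q * total
    prev_le, prev_count = 0.0, 0.0
    for le, count in buckets:
        if count >= rank:
            if le == float("inf"):
                return prev_le  # fall back to the highest finite bound
            return prev_le + (le - prev_le) * (rank - prev_count) / (count - prev_count)
        prev_le, prev_count = le, count
    return prev_le

# 100 requests: 90 under 100ms, 99 under 250ms, all under 500ms
buckets = [(100.0, 90), (250.0, 99), (500.0, 100), (float("inf"), 100)]
print(histogram_quantile(0.99, buckets))  # -> 250.0
```

So a P99 of 250ms here means 99% of requests completed within 250ms; the canary analysis later compares this value against a 500ms threshold.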
Create Flagger's Canary resource; all of the canary-release settings are defined here:
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: httpbin
  namespace: demo
spec:
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: httpbin
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: httpbin
  service:
    # service port number
    port: 8000
    # container port number or name (optional)
    targetPort: 80
    # Istio gateways (optional)
    gateways:
    - public-gateway.istio-system.svc.cluster.local
  analysis:
    # schedule interval (default 60s)
    interval: 30s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 100
    # canary increment step
    # percentage (0-100)
    stepWeight: 20
    metrics:
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      thresholdRange:
        min: 99
      interval: 1m
    - name: latency
      templateRef:
        name: latency
        namespace: istio-system
      # maximum req duration P99
      # milliseconds
      thresholdRange:
        max: 500
      interval: 30s
    # testing (optional)
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://httpbin-canary.demo:8000/headers"
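With interval: 30s, stepWeight: 20, and maxWeight: 100, Flagger raises the canary's traffic share every 30 seconds until it reaches 100%. The resulting schedule can be sketched like this (illustrative only, not Flagger source; the function name is my own):

```python
def canary_weights(step_weight, max_weight):
    """Traffic percentages routed to the canary, one step per interval."""
    return list(range(step_weight, max_weight + 1, step_weight))

schedule = canary_weights(20, 100)
print(schedule)  # -> [20, 40, 60, 80, 100]
# Each step only advances if the metric checks pass (success rate >= 99%,
# P99 latency <= 500ms); 5 failed checks (threshold: 5) trigger a rollback.
```

These are exactly the 20/40/60/80/100 weights that show up in the canary's event log below.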
After the Canary is created, you will find that it automatically creates primary-suffixed resources for httpbin in the cluster, along with a VirtualService whose routing rules point at the httpbin-primary and httpbin-canary services:
[root@m1 ~]# kubectl get pods -n demo
NAME READY STATUS RESTARTS AGE
httpbin-74fb669cc6-6ztkg 2/2 Running 0 50s
httpbin-74fb669cc6-vfs4h 2/2 Running 0 38s
httpbin-primary-9cb49747-94s4z 2/2 Running 0 3m3s
httpbin-primary-9cb49747-xhpcg 2/2 Running 0 3m3s
sleep-854565cb79-mcmnb 2/2 Running 0 94m
[root@m1 ~]# kubectl get svc -n demo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpbin ClusterIP 10.105.17.57 <none> 8000/TCP 86m
httpbin-canary ClusterIP 10.99.206.196 <none> 8000/TCP 3m14s
httpbin-primary ClusterIP 10.98.196.235 <none> 8000/TCP 3m14s
sleep ClusterIP 10.103.14.114 <none> 80/TCP 95m
[root@m1 ~]# kubectl get vs -n demo
NAME GATEWAYS HOSTS AGE
httpbin ["public-gateway.istio-system.svc.cluster.local"] ["httpbin"] 3m29s
[root@m1 ~]#
Then trigger a canary release with the following command:
[root@m1 ~]# kubectl -n demo set image deployment/httpbin httpbin=httpbin-v2
deployment.apps/httpbin image updated
[root@m1 ~]#
[root@m1 ~]# kubectl describe canary httpbin -n demo
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
...
Normal Synced 2m57s flagger New revision detected! Scaling up httpbin.demo
Warning Synced 27s (x5 over 2m27s) flagger canary deployment httpbin.demo not ready: waiting for rollout to finish: 1 out of 2 new replicas have been updated
Looking at the httpbin VirtualService now, you can see that 20% of the traffic has been shifted to the canary version:
[root@m1 ~]# kubectl describe vs httpbin -n demo
...
Spec:
Gateways:
public-gateway.istio-system.svc.cluster.local
Hosts:
httpbin
Http:
Route:
Destination:
Host: httpbin-primary
Weight: 80
Destination:
Host: httpbin-canary
Weight: 20
Events: <none>
Next, exec into the sleep pod and run a loop that continually requests the httpbin service:
[root@m1 ~]# kubectl exec -it -n demo sleep-854565cb79-mcmnb -c sleep -- sh
/ # while [ 1 ]; do curl http://httpbin.demo:8000/headers;sleep 2s; done
Check the httpbin VirtualService again: 60% of the traffic has now been shifted to the canary version:
[root@m1 ~]# kubectl describe vs httpbin -n demo
...
Spec:
Gateways:
public-gateway.istio-system.svc.cluster.local
Hosts:
httpbin
Http:
Route:
Destination:
Host: httpbin-primary
Weight: 40
Destination:
Host: httpbin-canary
Weight: 60
Events: <none>
We can open Flagger's Grafana UI:
[root@m1 ~]# kubectl -n istio-system port-forward svc/flagger-grafana 3000:80 --address 192.168.243.138
Forwarding from 192.168.243.138:3000 -> 3000
It ships with the following dashboards:
The release progress can be watched on the Istio Canary Dashboard:
The release is complete once 100% of the traffic has been shifted to the canary version:
[root@m1 ~]# kubectl describe vs httpbin -n demo
...
Spec:
Gateways:
public-gateway.istio-system.svc.cluster.local
Hosts:
httpbin
Http:
Route:
Destination:
Host: httpbin-primary
Weight: 0
Destination:
Host: httpbin-canary
Weight: 100
Events: <none>
The traffic migration can also be traced in the canary httpbin event log:
[root@m1 ~]# kubectl describe canary httpbin -n demo
...
Normal Synced 3m44s (x2 over 18m) flagger New revision detected! Restarting analysis for httpbin.demo
Normal Synced 3m14s (x2 over 18m) flagger Starting canary analysis for httpbin.demo
Normal Synced 3m14s (x2 over 18m) flagger Advance httpbin.demo canary weight 20
Warning Synced 2m44s (x2 over 17m) flagger Halt advancement no values found for istio metric request-success-rate probably httpbin.demo is not receiving traffic: running query failed: no values found
Normal Synced 2m14s flagger Advance httpbin.demo canary weight 40
Normal Synced 104s flagger Advance httpbin.demo canary weight 60
Normal Synced 74s flagger Advance httpbin.demo canary weight 80
Normal Synced 44s flagger Advance httpbin.demo canary weight 100
Once the release completes, the canary httpbin status changes to Succeeded:
[root@m1 ~]# kubectl get canary -n demo
NAME STATUS WEIGHT LASTTRANSITIONTIME
httpbin Succeeded 0 2020-12-23T16:03:04Z
[root@m1 ~]#
Common availability levels are as follows:

Next, we add resilience capabilities to the httpbin service. First, create a baseline VirtualService that routes gateway traffic to it:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: demo
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
EOF
Add the first resilience capability: a timeout. The configuration is as follows:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: demo
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
    timeout: 1s  # timeout setting
EOF
Timeout rule: when retries are configured, the effective overall timeout is

min(timeout, retries.perTryTimeout * retries.attempts)

Next, add a retry policy (note that the VirtualService field is retries):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
  namespace: demo
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - route:
    - destination:
        host: httpbin
        port:
          number: 8000
    retries:  # retry policy
      attempts: 3
      perTryTimeout: 1s
    timeout: 8s
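Per the rule above, the effective upper bound on a request is the smaller of the overall timeout and perTryTimeout × attempts. A quick illustration (the helper name is my own):

```python
def effective_timeout(timeout_s, per_try_timeout_s, attempts):
    """min(timeout, retries.perTryTimeout * retries.attempts)"""
    return min(timeout_s, per_try_timeout_s * attempts)

# With the VirtualService above (timeout 8s, 3 attempts of 1s each):
print(effective_timeout(8, 1, 3))  # -> 3
```

So even though timeout is set to 8s, three 1-second tries cap the request at 3 seconds.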
Retry configuration options:

- x-envoy-retry-on: 5xx, gateway-error, reset, connect-failure…
- x-envoy-retry-grpc-on: cancelled, deadline-exceeded, internal, unavailable…

Circuit-breaking configuration:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
  namespace: demo
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
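The outlierDetection block above ejects a host from the load-balancing pool after a single consecutive error (consecutiveErrors: 1), for the duration of baseEjectionTime (3m). A minimal sketch of that ejection logic (illustrative only; the class and its fields are my own names, not Envoy source):

```python
class OutlierDetector:
    """Simplified consecutive-error ejection, mirroring the
    DestinationRule above (consecutiveErrors=1, baseEjectionTime=180s)."""
    def __init__(self, consecutive_errors=1, base_ejection_time_s=180):
        self.limit = consecutive_errors
        self.ejection_time = base_ejection_time_s
        self.errors = 0
        self.ejected_until = 0.0

    def record(self, ok, now):
        # reset the streak on success, count it on failure
        self.errors = 0 if ok else self.errors + 1
        if self.errors >= self.limit:
            self.ejected_until = now + self.ejection_time
            self.errors = 0

    def available(self, now):
        return now >= self.ejected_until

d = OutlierDetector()
d.record(ok=False, now=0)   # one error is enough to eject
print(d.available(now=60))  # -> False (still ejected)
print(d.available(now=200)) # -> True (3 minutes elapsed)
```

This is deliberately aggressive (one error ejects the host); in production you would raise consecutiveErrors and lower maxEjectionPercent.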
Next, let's look at access control. Apply the following AuthorizationPolicy, which denies all access to the httpbin service:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: demo
spec:
  selector:
    matchLabels:
      app: httpbin
EOF
The configuration above makes the service completely inaccessible, which we can verify:
# request is denied
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl "http://httpbin.demo:8000/get"
RBAC: access denied # response
# other versions are still reachable
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl "http://httpbin-v2.demo:8000/get"
We can restrict the request source with the following configuration, for example requiring that requests come from the demo namespace:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: demo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/demo/sa/sleep"]
    - source:
        namespaces: ["demo"]
EOF
Test:
# request allowed
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl "http://httpbin.demo:8000/get"
# request denied
$ kubectl exec -it -n ${other_namespace} ${sleep_pod_name} -c sleep -- curl "http://httpbin.demo:8000/get"
# after changing the service account to ${other_namespace}, the request is allowed
Besides restricting the request source, you can also allow only specific endpoints to be accessed:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: demo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/demo/sa/sleep"]
    - source:
        namespaces: ["demo"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/get"]
EOF
Test:
# request allowed
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl "http://httpbin.demo:8000/get"
# request denied
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl "http://httpbin.demo:8000/ip"
Other conditions can be configured as well, such as requiring a particular request header; this is common when clients must carry a specific header to access an API:
[root@m1 ~]# kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: demo
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/demo/sa/sleep"]
    - source:
        namespaces: ["demo"]
    to:
    - operation:
        methods: ["GET"]
        paths: ["/get"]
    when:
    - key: request.headers[x-rfma-token]
      values: ["test*"]
EOF
Test:
# request denied
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl "http://httpbin.demo:8000/get"
# allowed after adding the token
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl "http://httpbin.demo:8000/get" -H x-rfma-token:test1
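The values: ["test*"] condition above is a prefix wildcard: a trailing * matches any suffix, which is why x-rfma-token: test1 passes. A quick sketch of that matching behavior (the function name is my own):

```python
def match_value(pattern, value):
    """Istio-style string match: a trailing '*' makes it a prefix
    match; otherwise the comparison is exact."""
    if pattern.endswith("*"):
        return value.startswith(pattern[:-1])
    return value == pattern

print(match_value("test*", "test1"))   # -> True  (request allowed)
print(match_value("test*", "abc123"))  # -> False (request denied)
```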