Finology 大数据金融

The k8s cluster has three nodes: one master node and two worker nodes.

We can see that Prometheus exposes its service as a NodePort.

kubectl get svc -n monitoring
NAME             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
prometheus-k8s   NodePort   10.105.212.163   <none>        9090:30090/TCP   45d

Accessing the page via <worker-node-ip>:30090 works, but via <master-node-ip>:30090 it does not.

So I checked the state of the Calico network plugin.

kubectl get po -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
calico-node-jt4fz   0/1     Running   4          45d
calico-node-mv4ht   1/1     Running   5          45d
calico-node-nqbkl   1/1     Running   5          45d

One of the pods is failing its readiness probe.

Let's look at the details of that pod.

kubectl describe po calico-node-jt4fz -n kube-system
Name:         calico-node-jt4fz
Namespace:    kube-system
...
Events:
  Type     Reason     Age                    From                 Message
  ----     ------     ----                   ----                 -------
  Warning  Unhealthy  6m22s (x499 over 89m)  kubelet, k8s-master  (combined from similar events): Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.16.64.232,172.16.64.235
2020-02-12 12:36:36.591 [INFO][12943] health.go 156: Number of node(s) with BGP peering established = 0

The error message is:

Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 172.16.64.232,172.16.64.235
2020-02-12 12:36:36.591 [INFO][12943] health.go 156: Number of node(s) with BGP peering established = 0

Solution:

Adjust Calico's NIC discovery mechanism by changing the value of IP_AUTODETECTION_METHOD.

In the official YAML manifest, the IP auto-detection policy (IP_AUTODETECTION_METHOD) is not configured, so it defaults to first-found. That can cause an IP on a broken network to be registered as the node IP, which breaks the node-to-node mesh.

We can change it to the can-reach or interface strategy, which picks the correct IP by, for example, trying to reach the IP of a node that is Ready.
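For reference, a can-reach variant might look like the sketch below. The destination address is just an example reusing one of this cluster's node IPs from the error message; any stable, reachable address works:

```yaml
# Sketch of the can-reach strategy: use the interface that can reach this address.
# 172.16.64.232 is one of this cluster's node IPs; substitute a reachable IP of your own.
- name: IP_AUTODETECTION_METHOD
  value: "can-reach=172.16.64.232"
```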

Add the following two lines to calico.yaml:

- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"   # match your nodes' actual NIC name prefix

The surrounding configuration looks like this:

# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# Specify interface
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"
# Auto-detect the BGP IP address.
- name: IP
  value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
  value: "Always"
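Before applying, it can help to confirm that the "interface=ens.*" pattern actually matches a NIC on each node. A quick sketch, assuming a Linux host where /sys/class/net lists the interfaces:

```shell
# List all network interfaces on this node.
ls /sys/class/net

# Show only the interfaces whose names start with "ens" (the pattern above).
# If nothing prints, adjust the interface= regex to your NIC naming scheme.
ls /sys/class/net | grep -E '^ens' || true
```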

After the change, re-apply the calico.yaml file.

kubectl apply -f calico.yaml

Now all the Calico pods have started successfully.

kubectl get po -n kube-system
NAME                READY   STATUS    RESTARTS   AGE
calico-node-jtvh8   1/1     Running   0          15s
calico-node-k6m8t   1/1     Running   0          45s
calico-node-rb9qx   1/1     Running   0          29s

At this point, <master-node-ip>:30090 serves the page as well.

The following experiment must be done on Linux; Docker for Mac and Docker for Windows will not work.

Docker network types

Docker creates three networks for us by default: bridge, host, and none.

docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
a1681b4a3bc9   bridge   bridge   local
d724eb42948a   host     host     local
c9381cce7bbb   none     null     local

When no network is specified, containers use the bridge network by default.

Verifying the network

Start two busybox containers.

docker run -dit --name busybox1 busybox
4f3c61775b5e8bcd38a0c97ff97bcd16ed717ab31bea417b198192f83b493846

docker run -dit --name busybox2 busybox
c523392d8949b53abfbe736c43d2d47ea60a3420e8931bebc69d05baff93889b

Inspect the bridge network:

docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "a1681b4a3bc9bf973e2ff712677e373b663cc65b3a9dd6e868f5635fff295a6a",
        "Created": "2020-02-05T16:24:09.290786154+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "4f3c61775b5e8bcd38a0c97ff97bcd16ed717ab31bea417b198192f83b493846": {
                "Name": "busybox1",
                "EndpointID": "bb138601e4d4018e8f36b4679cc22823fca3020d53fb7c372e47d1c73345b374",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            },
            "c523392d8949b53abfbe736c43d2d47ea60a3420e8931bebc69d05baff93889b": {
                "Name": "busybox2",
                "EndpointID": "8f3d2661b8bd9fdf4f9bf2d763e36ab87a7e1fa8dca6e1e97b727c7a8ab2d0fe",
                "MacAddress": "02:42:ac:11:00:03",
                "IPv4Address": "172.17.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

We can see that both busybox1 and busybox2 are connected to the bridge network, and their IP addresses are shown (172.17.0.2 and 172.17.0.3).

Enter the busybox1 container:

docker exec -it busybox1 sh

Ping the other container by IP address and by container name:

/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.285 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.130 ms

/ # ping busybox2
^C

So pinging by IP address works, but pinging by container name does not. (Docker's default bridge network does not provide DNS-based name resolution between containers; that is only available on user-defined networks.)

Checking /etc/hosts, there is no entry for busybox2 either:

cat /etc/hosts
127.0.0.1       localhost
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters
172.17.0.2      4f3c61775b5e

Creating a new bridge network

docker network create --driver bridge busybox_bridge
bebfcba064b2afdfb265d4df19eaccd950a1d957dcb3e2256d05dc544b64369e

List the networks:

docker network ls
NETWORK ID     NAME             DRIVER   SCOPE
a1681b4a3bc9   bridge           bridge   local
bebfcba064b2   busybox_bridge   bridge   local
d724eb42948a   host             host     local
c9381cce7bbb   none             null     local

Create busybox3 and busybox4 and attach them to the busybox_bridge network:

docker run -dit --network busybox_bridge --name busybox3 busybox
8e39a57b7543288fb716f66eaa4a54609a571c7d30194ca456d4a1dc443f19e6

docker run -dit --network busybox_bridge --name busybox4 busybox
ed1715cfaf168f64351d46aaa7f393cae00962c7ef850d8824abe06a4906843c

Enter busybox3.

I expected busybox3 to be able to ping busybox4 by name, but it didn't seem to work, so this attempt failed; I'll dig into it later. (On a user-defined bridge, Docker's embedded DNS should resolve container names, so this result is unexpected.)

Host mode

Start an nginx container in host mode:

docker run --rm -d --net host nginx
956b96f5d0c25835a1d4470c12ed5e4abda341e44f9344e72cf42864976d96fa

In host mode, the container shares the host's network stack.

curl localhost
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<p><em>Thank you for using nginx.</em></p>
</body>
</html>

An earlier post on building the Harbor Docker image registry covered installation; this article focuses on how to use Harbor.

Creating a user

Enter the user information to create the account that will push images.

Creating a project

Enter the project information.

Inside the project, under the Members tab, add the user we just created to the project with the Developer role.

Pushing an image

First, pull an nginx image:

docker pull nginx:1.16.1

List the images:

docker images
REPOSITORY   TAG      IMAGE ID       CREATED      SIZE
nginx        1.16.1   55c440ba1ecb   3 days ago   127MB

Retag the image:

docker tag 55c440ba1ecb 172.16.64.233/my-project/nginx:1.16.1

Push the image:

docker push 172.16.64.233/my-project/nginx:1.16.1
The push refers to repository [172.16.64.233/my-project/nginx]
Get https://172.16.64.233/v2/: dial tcp 172.16.64.233:443: connect: connection refused

By default Docker talks to registries over HTTPS, hence the connection-refused error on port 443. We need to add an insecure-registries entry:

vi /etc/docker/daemon.json

Add the following:

{
  "insecure-registries": ["172.16.64.233"]
}
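Since a malformed daemon.json can keep the Docker daemon from starting, it is worth validating the file before restarting. A small sketch that writes a temporary copy so it is self-contained; point it at /etc/docker/daemon.json in practice:

```shell
# Write a sample daemon.json to a temp path (use /etc/docker/daemon.json for real).
cat > /tmp/daemon.json <<'EOF'
{
  "insecure-registries": ["172.16.64.233"]
}
EOF

# json.tool exits non-zero and prints an error if the JSON is invalid.
python3 -m json.tool /tmp/daemon.json
```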

Restart Docker:

systemctl restart docker

Try the push again:

docker push 172.16.64.233/my-project/nginx:1.16.1
The push refers to repository [172.16.64.233/my-project/nginx]
37ec257a56ed: Preparing
567538016328: Preparing
488dfecc21b1: Preparing
denied: requested access to the resource is denied

Now we get a permission-denied error.

So we need to log in first. Earlier we registered the user account:

docker login 172.16.64.233
Username: user
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Push once more:

docker push 172.16.64.233/my-project/nginx:1.16.1
The push refers to repository [172.16.64.233/my-project/nginx]
37ec257a56ed: Pushed
567538016328: Pushed
488dfecc21b1: Pushed
1.16.1: digest: sha256:5f281748501a5ad9f5d657fc6067ac6187d62be2a811c460deee1504cabddc51 size: 948

Success.

The image is now visible in the Harbor web UI as well.

Pulling an image

We can switch to another machine and pull the image.

docker pull 172.16.64.233/my-project/nginx:1.16.1
Error response from daemon: Get https://172.16.64.233/v2/: dial tcp 172.16.64.233:443: connect: connection refused

Handle this the same way as before: add the following to /etc/docker/daemon.json.

{
  "insecure-registries": ["172.16.64.233"]
}

Then restart Docker:

systemctl restart docker

Pull again:

docker pull 172.16.64.233/my-project/nginx:1.16.1
1.16.1: Pulling from my-project/nginx
bc51dd8edc1b: Pull complete
60041be5685b: Pull complete
5ad6baa9b36b: Pull complete
Digest: sha256:5f281748501a5ad9f5d657fc6067ac6187d62be2a811c460deee1504cabddc51
Status: Downloaded newer image for 172.16.64.233/my-project/nginx:1.16.1

Pulled successfully.

High-availability setup

If you need to run several Harbor nodes, you can set up replication rules under the Registries and Replications tabs.

Then put a load balancer, e.g. nginx, in front of the Harbor nodes.
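A minimal nginx load-balancing sketch for that front end. This is hedged: 172.16.64.233 is the Harbor host used throughout this post, 172.16.64.234 is a hypothetical second node, and TLS termination is omitted:

```nginx
upstream harbor {
    server 172.16.64.233:80;   # Harbor node 1 (the host used in this post)
    server 172.16.64.234:80;   # Harbor node 2 (hypothetical)
}

server {
    listen 80;
    # Image layers can be large; don't cap the request body size.
    client_max_body_size 0;

    location / {
        proxy_pass http://harbor;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```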
