
If you want your blog to be indexed by the Baidu search engine as quickly as possible, and to rank a little higher in search results, you need to submit your blog's URLs promptly.

There are three main submission methods. It is best to use all three, as officially recommended; they do not conflict with one another.

Active Push

Active push is real-time: it lets Baidu learn about your original content the moment it is published.

First, install the hexo-baidu-url-submit plugin.

Run the following command in the blog's root directory:

sudo npm install hexo-baidu-url-submit --save
+ hexo-baidu-url-submit@0.0.6
added 85 packages from 61 contributors, removed 71 packages and updated 11 packages in 14.783s


╭────────────────────────────────────────────────────────────────╮
│ │
│ New minor version of npm available! 6.10.0 → 6.13.7 │
│ Changelog: https://github.com/npm/cli/releases/tag/v6.13.7 │
│ Run npm install -g npm to update! │
│ │
╰────────────────────────────────────────────────────────────────╯


Then add the following at the end of the _config.yml file in the root directory:

count is the number of URLs to submit.

host is the address of your blog.

token can be found on the Baidu site.

# Baidu url submit
baidu_url_submit:
  count: 500
  host: https://finolo.gy
  token: ## fill in the value of the token parameter from the Baidu site
  path: baidu_urls.txt

Finally, add baidu_url_submitter under the deploy section:

deploy:
  - type: baidu_url_submitter

When you run hexo generate, a .deploy_git/baidu_urls.txt file is produced.

When you run hexo deploy, the URLs are read from that file and submitted to Baidu.
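Under the hood, the plugin POSTs the URL list to Baidu's push API at data.zz.baidu.com. You can do the same by hand with curl; a minimal sketch, assuming the site and token match the config above (YOUR_TOKEN is a placeholder):

# Push the generated URL list to Baidu manually (sketch; YOUR_TOKEN is a placeholder)
curl -H 'Content-Type:text/plain' --data-binary @.deploy_git/baidu_urls.txt \
  "http://data.zz.baidu.com/urls?site=https://finolo.gy&token=YOUR_TOKEN"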

On success, a response like the following is returned: remain is the number of URLs you may still push today, and success is the number of URLs pushed successfully.

{"remain":99602,"success":149}
INFO Deploy done: baidu_url_submitter

Auto Push

In themes/<your_theme>/_config.yaml, set the baidu_push option to true.

The theme will then pull in the themes/<your_theme>/layout/_third-party/seo/baidu-push.swig template on each page.
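Concretely, the change is a one-line switch in the theme config (a sketch; the option name assumes your theme, e.g. NexT, exposes baidu_push):

# themes/<your_theme>/_config.yaml
baidu_push: true   # inject Baidu's auto-push JS into every rendered page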

Sitemap

Install the plugins:

npm install hexo-generator-sitemap --save     
npm install hexo-generator-baidu-sitemap --save

Then submit http://<domain>/baidusitemap.xml and http://<domain>/sitemap.xml on Baidu's sitemap page.
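Both plugins work out of the box; if you want to confirm or change the output locations, the corresponding keys in the root _config.yml are (a sketch showing the plugins' default paths):

# _config.yml -- default output locations of the two sitemap plugins
sitemap:
  path: sitemap.xml
baidusitemap:
  path: baidusitemap.xml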

The official documentation for kubernetes_sd_config:

https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config

kubernetes_sd_config configures automatic target discovery. For more on service discovery, see this hands-on guide: Prometheus kubernetes-cadvisor service auto-discovery.

Kubernetes SD configurations allow retrieving scrape targets from Kubernetes’ REST API and always staying synchronized with the cluster state.

role can be one of the following types: node, service, pod, endpoints, and ingress.
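A minimal sketch of how a role is selected in a scrape config (the job name is a placeholder):

scrape_configs:
  - job_name: 'k8s-nodes'          # hypothetical job name
    kubernetes_sd_configs:
      - role: node                 # one of: node, service, pod, endpoints, ingress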

When role is set to endpoints, the official docs say:

The endpoints role discovers targets from listed endpoints of a service. For each endpoint address one target is discovered per port. If the endpoint is backed by a pod, all additional container ports of the pod, not bound to an endpoint port, are discovered as targets as well.

The content of prometheus-additional.yaml is then:

- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
    - role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
      action: replace
      target_label: __scheme__
      regex: (https?)
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
      action: replace
      target_label: __address__
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: kubernetes_name
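To make the __address__ rewrite concrete, here is a worked example with hypothetical values:

# Before relabeling (hypothetical discovered target):
#   __address__ = 10.0.0.5:8443
#   __meta_kubernetes_service_annotation_prometheus_io_port = 9100
# The two source labels are joined with ";" into "10.0.0.5:8443;9100".
# The regex ([^:]+)(?::\d+)?;(\d+) captures $1 = 10.0.0.5 and $2 = 9100,
# so the replacement $1:$2 sets the scrape address to 10.0.0.5:9100.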

My understanding of the logic is:

  1. First, find Services for which both the Service (svc) and its Endpoints (ep) object exist.
  2. Then keep only those whose Service annotation __meta_kubernetes_service_annotation_prometheus_io_scrape has the value true (see the example Service after this list).
  3. Finally, the ep has two Addresses, so two Targets are shown.
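For illustration, a Service that this job would discover might look like this (a hypothetical example; the name, port, and path are made up):

apiVersion: v1
kind: Service
metadata:
  name: my-app                      # hypothetical Service name
  annotations:
    prometheus.io/scrape: "true"    # step 2 above: keeps this target
    prometheus.io/port: "8080"      # rewrites __address__ to <endpoint-ip>:8080
    prometheus.io/path: "/metrics"  # becomes __metrics_path__
spec:
  selector:
    app: my-app
  ports:
    - port: 8080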

Take the kubelet ServiceMonitor as an example:
kubectl get serviceMonitor -n monitoring kubelet -o yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"labels":{"k8s-app":"kubelet"},"name":"kubelet","namespace":"monitoring"},"spec":{"endpoints":[{"bearerTokenFile":"/var/run/secrets/kubernetes.io/serviceaccount/token","honorLabels":true,"interval":"30s","port":"https-metrics","scheme":"https","tlsConfig":{"insecureSkipVerify":true}},{"bearerTokenFile":"/var/run/secrets/kubernetes.io/serviceaccount/token","honorLabels":true,"interval":"30s","metricRelabelings":[{"action":"drop","regex":"container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)","sourceLabels":["__name__"]}],"path":"/metrics/cadvisor","port":"https-metrics","scheme":"https","tlsConfig":{"insecureSkipVerify":true}}],"jobLabel":"k8s-app","namespaceSelector":{"matchNames":["kube-system"]},"selector":{"matchLabels":{"k8s-app":"kubelet"}}}}
  creationTimestamp: "2019-12-29T06:40:27Z"
  generation: 1
  labels:
    k8s-app: kubelet
  name: kubelet
  namespace: monitoring
  resourceVersion: "2602"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/monitoring/servicemonitors/kubelet
  uid: 327bae9c-d09c-4d00-b293-354870b19f91
spec:
  endpoints:
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    interval: 30s
    port: https-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
    honorLabels: true
    interval: 30s
    metricRelabelings:
    - action: drop
      regex: container_(network_tcp_usage_total|network_udp_usage_total|tasks_state|cpu_load_average_10s)
      sourceLabels:
      - __name__
    path: /metrics/cadvisor
    port: https-metrics
    scheme: https
    tlsConfig:
      insecureSkipVerify: true
  jobLabel: k8s-app
  namespaceSelector:
    matchNames:
    - kube-system
  selector:
    matchLabels:
      k8s-app: kubelet

Based on the namespaceSelector and selector above, look up the matching Endpoints:

kubectl get ep -n kube-system -l k8s-app=kubelet -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Endpoints
  metadata:
    creationTimestamp: "2019-12-29T06:40:06Z"
    labels:
      k8s-app: kubelet
    name: kubelet
    namespace: kube-system
    resourceVersion: "2371"
    selfLink: /api/v1/namespaces/kube-system/endpoints/kubelet
    uid: 326583cc-3379-4883-8426-5a856c29c77e
  subsets:
  - addresses:
    - ip: 172.16.64.232
      targetRef:
        kind: Node
        name: k8s-node1
        uid: 09ffae8e-e3f1-456b-a398-7891487712da
    - ip: 172.16.64.233
      targetRef:
        kind: Node
        name: k8s-master
        uid: 774ddb3e-2c3c-4af8-8e1c-15b0c2c3b8c1
    - ip: 172.16.64.235
      targetRef:
        kind: Node
        name: k8s-node2
        uid: 7c1cc5f0-2b71-462a-907e-191b0d3ee819
    ports:
    - name: http-metrics
      port: 10255
      protocol: TCP
    - name: cadvisor
      port: 4194
      protocol: TCP
    - name: https-metrics
      port: 10250
      protocol: TCP
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
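For completeness: with the Prometheus Operator, a file like prometheus-additional.yaml is usually wired in through a Secret plus the additionalScrapeConfigs field of the Prometheus resource. A minimal sketch, assuming the Secret is named additional-configs:

# Create the Secret from the file (the Secret name is an assumption)
kubectl create secret generic additional-configs \
  --from-file=prometheus-additional.yaml -n monitoring

# Then reference it in the Prometheus custom resource:
# spec:
#   additionalScrapeConfigs:
#     name: additional-configs
#     key: prometheus-additional.yaml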