[Kubernetes] Install Alloy(v1.7.1) Using Helm Chart
Install the Helm charts
Create a namespace
kubectl create namespace [NAMESPACE NAME]
Deploy Alloy
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm install --namespace <NAMESPACE> <RELEASE_NAME> grafana/alloy
Reference - Installing Alloy with Helm
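Before customizing anything, it can help to confirm the deployment is running. A minimal check, assuming the chart's default app.kubernetes.io/name=alloy label and the namespace used above:

# Check that the Alloy pods are up (label selector assumes the chart defaults)
kubectl get pods -n <NAMESPACE> -l app.kubernetes.io/name=alloy
# Optionally tail the logs to confirm Alloy started cleanly
kubectl logs -n <NAMESPACE> -l app.kubernetes.io/name=alloy --tail=20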
Customize Default Configuration
Edit values.yaml
Modifying the top-level values.yaml overrides the values.yaml files in the subfolders (an alternative using a separate override file is sketched after the list below).
- Release file (.tgz)
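Rather than editing the packaged values.yaml directly, you can export the chart defaults and keep only the keys you change in an override file. The file name override-values.yaml below is just an example:

# Dump the chart's default values to use as a starting point for customization
helm show values grafana/alloy > override-values.yaml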
Connecting to Kafka
...✂...
alloy:
  configMap:
    # -- Create a new ConfigMap for the config file.
    create: true
    # -- Content to assign to the new ConfigMap. This is passed into `tpl` allowing for templating from values.
    content: |
      loki.source.kafka "raw" {
        brokers       = ["kafka:9092"]
        topics        = ["loki"]
        forward_to    = [loki.write.http.receiver]
        relabel_rules = loki.relabel.kafka.rules
        version       = "2.0.0"
        labels        = {service_name = "raw_kafka"}
      }

      loki.relabel "kafka" {
        forward_to = [loki.write.http.receiver]

        rule {
          source_labels = ["__meta_kafka_topic"]
          target_label  = "topic"
        }
      }

      loki.write "http" {
        endpoint {
          url = "http://loki:3100/loki/api/v1/push"
        }
      }

    # -- Name of existing ConfigMap to use. Used when create is false.
    name: null
    # -- Key in ConfigMap to get config from.
    key: null
...✂...
Reference - https://grafana.com/docs/loki/latest/send-data/alloy/examples/alloy-kafka-logs/
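To check the pipeline end to end, you can produce a test message to the loki topic and then look for the service_name="raw_kafka" stream in Loki. A rough sketch, assuming a broker pod named kafka-0 that ships the standard Apache Kafka CLI tools:

# Send one test log line to the topic Alloy is consuming
kubectl exec -it kafka-0 -- bash -c \
  'echo "hello from kafka" | kafka-console-producer.sh --bootstrap-server kafka:9092 --topic loki'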
Collecting OpenTelemetry logs via Kafka
...✂...
alloy:
  configMap:
    # -- Create a new ConfigMap for the config file.
    create: true
    # -- Content to assign to the new ConfigMap. This is passed into `tpl` allowing for templating from values.
    content: |
      loki.source.kafka "raw" {
        brokers       = ["kafka:9092"]
        topics        = ["loki"]
        forward_to    = [loki.write.http.receiver]
        relabel_rules = loki.relabel.kafka.rules
        version       = "2.0.0"
        labels        = {service_name = "raw_kafka"}
      }

      loki.relabel "kafka" {
        forward_to = [loki.write.http.receiver]

        rule {
          source_labels = ["__meta_kafka_topic"]
          target_label  = "topic"
        }
      }

      loki.write "http" {
        endpoint {
          url = "http://loki:3100/loki/api/v1/push"
        }
      }

      otelcol.receiver.kafka "default" {
        brokers          = ["kafka:9092"]
        protocol_version = "2.0.0"
        topic            = "otlp"
        encoding         = "otlp_proto"

        output {
          logs = [otelcol.processor.batch.default.input]
        }
      }

      otelcol.processor.batch "default" {
        output {
          logs = [otelcol.exporter.otlphttp.default.input]
        }
      }

      otelcol.exporter.otlphttp "default" {
        client {
          endpoint = "http://loki:3100/otlp"
        }
      }

    # -- Name of existing ConfigMap to use. Used when create is false.
    name: null
    # -- Key in ConfigMap to get config from.
    key: null
...✂...
Reference - https://grafana.com/docs/loki/latest/send-data/alloy/examples/alloy-kafka-logs/
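The otelcol.receiver.kafka block reads OTLP-protobuf messages from the otlp topic, so that topic must exist and something (an OpenTelemetry SDK or Collector with a Kafka exporter) must be producing to it. A hedged sketch for creating the topic up front, assuming the same kafka-0 pod and Apache Kafka CLI tools as above:

# Create the "otlp" topic the receiver subscribes to (adjust partitions/replication to your cluster)
kubectl exec -it kafka-0 -- kafka-topics.sh --bootstrap-server kafka:9092 \
  --create --topic otlp --partitions 1 --replication-factor 1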
Collecting Kubernetes Pod logs
...✂...
alloy:
  configMap:
    # -- Create a new ConfigMap for the config file.
    create: true
    # -- Content to assign to the new ConfigMap. This is passed into `tpl` allowing for templating from values.
    content: |
      loki.write "http" {
        endpoint {
          url = "http://loki:3100/loki/api/v1/push"
        }
      }

      // discovery.kubernetes allows you to find scrape targets from Kubernetes resources.
      // It watches cluster state and ensures targets are continually synced with what is currently running in your cluster.
      discovery.kubernetes "pod" {
        role = "pod"
      }

      // discovery.relabel rewrites the label set of the input targets by applying one or more relabeling rules.
      // If no rules are defined, then the input targets are exported as-is.
      discovery.relabel "pod_logs" {
        targets = discovery.kubernetes.pod.targets

        // Label creation - "namespace" field from "__meta_kubernetes_namespace"
        rule {
          source_labels = ["__meta_kubernetes_namespace"]
          action        = "replace"
          target_label  = "namespace"
        }

        // Label creation - "pod" field from "__meta_kubernetes_pod_name"
        rule {
          source_labels = ["__meta_kubernetes_pod_name"]
          action        = "replace"
          target_label  = "pod"
        }

        // Label creation - "container" field from "__meta_kubernetes_pod_container_name"
        rule {
          source_labels = ["__meta_kubernetes_pod_container_name"]
          action        = "replace"
          target_label  = "container"
        }

        // Label creation - "app" field from "__meta_kubernetes_pod_label_app_kubernetes_io_name"
        rule {
          source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
          action        = "replace"
          target_label  = "app"
        }

        // Label creation - "job" field from "__meta_kubernetes_namespace" and "__meta_kubernetes_pod_container_name"
        // Concatenate values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name
        rule {
          source_labels = ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_name"]
          action        = "replace"
          target_label  = "job"
          separator     = "/"
          replacement   = "$1"
        }

        // Label creation - "__path__" field from "__meta_kubernetes_pod_uid" and "__meta_kubernetes_pod_container_name"
        // Concatenate values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log
        rule {
          source_labels = ["__meta_kubernetes_pod_uid", "__meta_kubernetes_pod_container_name"]
          action        = "replace"
          target_label  = "__path__"
          separator     = "/"
          replacement   = "/var/log/pods/*$1/*.log"
        }

        // Label creation - "container_runtime" field from "__meta_kubernetes_pod_container_id"
        rule {
          source_labels = ["__meta_kubernetes_pod_container_id"]
          action        = "replace"
          target_label  = "container_runtime"
          regex         = "^(\\S+):\\/\\/.+$"
          replacement   = "$1"
        }
      }

      // loki.source.kubernetes tails logs from Kubernetes containers using the Kubernetes API.
      loki.source.kubernetes "pod_logs" {
        targets    = discovery.relabel.pod_logs.output
        forward_to = [loki.process.pod_logs.receiver]
      }

      // loki.process receives log entries from other Loki components, applies one or more processing stages,
      // and forwards the results to the list of receivers in the component's arguments.
      loki.process "pod_logs" {
        stage.static_labels {
          values = {
            cluster = "<CLUSTER_NAME>",
          }
        }

        // Forward to the loki.write component defined above (named "http" in this example).
        forward_to = [loki.write.http.receiver]
      }

    # -- Name of existing ConfigMap to use. Used when create is false.
    name: null
    # -- Key in ConfigMap to get config from.
    key: null
...✂...
Reference - https://grafana.com/docs/alloy/latest/collect/logs-in-kubernetes/#pods-logs
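After applying this configuration, you can confirm that pod logs are arriving by querying Loki directly. A minimal check, assuming Loki is exposed through a loki Service in the cluster:

# Forward the Loki API locally, then run a LogQL query
kubectl port-forward svc/loki 3100:3100 &
curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={cluster="<CLUSTER_NAME>"}' \
  --data-urlencode 'limit=5'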
NodePort configuration for external access
...✂...
service:
  # -- Creates a Service for the controller's pods.
  enabled: true
  # -- Service type
  type: NodePort
  # -- NodePort port. Only takes effect when `service.type: NodePort`
  nodePort: 31128
  # -- Cluster IP, can be set to None, empty "" or an IP address
  clusterIP: ''
  # -- Value for internal traffic policy. 'Cluster' or 'Local'
  internalTrafficPolicy: Cluster
  annotations: {}
  # cloud.google.com/load-balancer-type: Internal
...✂...
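With type: NodePort, Alloy's HTTP server (port 12345 by default) becomes reachable on every node at port 31128. A quick check, assuming <NODE_IP> is any node's address and Alloy's built-in readiness endpoint:

kubectl get svc -n <NAMESPACE>          # confirm the assigned NodePort is 31128
curl http://<NODE_IP>:31128/-/ready     # should report ready once Alloy is up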
Ingress configuration for external access
...✂...
ingress:
  # -- Enables ingress for Alloy (Faro port)
  enabled: true
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx
  # Values can be templated
  annotations:
    {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  labels: {}
  path: /
  faroPort: 12345

  # pathType is only for k8s >= 1.18
  pathType: Prefix

  hosts:
    - chart-example.local
  ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
  extraPaths: []
  # - path: /*
  #   backend:
  #     serviceName: ssl-redirect
  #     servicePort: use-annotation
  ## Or for k8s > 1.19
  # - path: /*
  #   pathType: Prefix
  #   backend:
  #     service:
  #       name: ssl-redirect
  #       port:
  #         name: use-annotation

  tls: []
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
...✂...
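This ingress routes to the port set in faroPort (12345 here, which is also Alloy's main HTTP port in this setup), so it is mainly intended for telemetry sent from outside the cluster, e.g. the Grafana Faro Web SDK. A quick reachability check, assuming <INGRESS_IP> is the ingress controller's external address and no TLS:

curl -H "Host: chart-example.local" http://<INGRESS_IP>/-/ready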
Install with the Customized Configuration
helm install -n <NAMESPACE> [RELEASE NAME] [path to the chart (Chart.yaml)] -f [YAML file or URL specifying values (can be given multiple times)]
helm install --namespace <NAMESPACE> [RELEASE NAME] grafana/alloy -f override-values.yaml
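If the release is already installed, the same override file can be applied with an upgrade instead of reinstalling:

helm upgrade --namespace <NAMESPACE> [RELEASE NAME] grafana/alloy -f override-values.yaml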
Uninstall the Chart
helm uninstall [RELEASE NAME] -n [NAMESPACE NAME]
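helm uninstall removes only the resources created by the chart; the namespace created earlier with kubectl remains and can be deleted separately:

kubectl delete namespace [NAMESPACE NAME]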
This post is licensed under CC BY 4.0 by the author.