[Kubernetes] Install Kubernetes(v1.29.x) using Kubekey(v3.1.1) Artifact on Multipass


See the KubeKey artifact reference for offline installation.

Generate an SSH key for connecting to the Multipass VMs

ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa_multipass

Configure cloud-init

  • Create cloud-init.yaml

    
    vi cloud-init.yaml
    
  • Write cloud-init.yaml as follows

    
    users:
      - default
      - name: root
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - <content of YOUR public key>
    
      - name: ubuntu
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - <content of YOUR public key>
    
    runcmd:
      - sudo apt-get update
      - sudo timedatectl set-timezone "Asia/Seoul"
      - sudo swapoff -a
      - sudo sed -i "/swap/d" /etc/fstab
      - sudo apt-get install -y conntrack
      - sudo apt-get install -y socat
    
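  • Optionally, splice your public key into the template instead of pasting it by hand (a sketch; assumes GNU sed and the placeholder shown above):

    # substitute the generated public key for the placeholder
    PUBKEY=$(cat ~/.ssh/id_rsa_multipass.pub)
    sed -i "s|<content of YOUR public key>|$PUBKEY|g" cloud-init.yaml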

If you did not set ssh_authorized_keys in cloud-init

  • Paste the contents of id_rsa_multipass.pub into the authorized_keys file under ~/.ssh on each node

    
    cat $HOME/.ssh/id_rsa_multipass.pub
    
  • When using the root account

    
    # switch to root
    sudo -i
    
    # edit authorized_keys (create the .ssh directory first if it does not exist)
    vi .ssh/authorized_keys
    

Create the Multipass VMs

  • Create the repository VM

    
    multipass launch focal --name kk-repo --memory 8G --disk 100G --cpus 4 --network name=multipass,mode=manual --cloud-init cloud-init.yaml
    
  • Create the master VM

    
    multipass launch focal --name kk-master --memory 8G --disk 50G --cpus 4 --network name=multipass,mode=manual --cloud-init cloud-init.yaml
    
  • Create the worker-1 VM

    
    multipass launch focal --name kk-worker-1 --memory 8G --disk 50G --cpus 4 --network name=multipass,mode=manual --cloud-init cloud-init.yaml
    
  • Create the worker-2 VM

    
    multipass launch focal --name kk-worker-2 --memory 8G --disk 50G --cpus 4 --network name=multipass,mode=manual --cloud-init cloud-init.yaml
    
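  • Check that all four VMs are running and note their DHCP addresses:

    multipass list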

Connect to the Multipass VMs (either via ssh or multipass shell)

  • kk-repo

    
    ssh -i $HOME/.ssh/id_rsa_multipass ubuntu@192.168.0.100
    
    
    multipass shell kk-repo
    
  • kk-master

    
    ssh -i $HOME/.ssh/id_rsa_multipass ubuntu@192.168.0.101
    
    
    multipass shell kk-master
    
  • kk-worker-1

    
    ssh -i $HOME/.ssh/id_rsa_multipass ubuntu@192.168.0.102
    
    
    multipass shell kk-worker-1
    
  • kk-worker-2

    
    ssh -i $HOME/.ssh/id_rsa_multipass ubuntu@192.168.0.103
    
    
    multipass shell kk-worker-2
    

Set a static IP on each node

sudo vi /etc/netplan/50-cloud-init.yaml
  • kk-repo

    
    network:
        ethernets:
            eth0:
                dhcp4: true
                dhcp6: true
                match:
                    macaddress: 52:54:00:80:6b:21
                set-name: eth0
    # --- added:
            eth1:
                addresses: [192.168.0.100/24]
                gateway4: 192.168.0.1
                dhcp4: no
    # ---
        version: 2
    
  • kk-master

    
    network:
        ethernets:
            eth0:
                dhcp4: true
                dhcp6: true
                match:
                    macaddress: 52:54:00:80:6b:21
                set-name: eth0
    # --- added:
            eth1:
                addresses: [192.168.0.101/24]
                gateway4: 192.168.0.1
                dhcp4: no
    # ---
        version: 2
    
  • kk-worker-1

    
    network:
        ethernets:
            eth0:
                dhcp4: true
                dhcp6: true
                match:
                    macaddress: 52:54:00:80:6b:21
                set-name: eth0
    # --- added:
            eth1:
                addresses: [192.168.0.102/24]
                gateway4: 192.168.0.1
                dhcp4: no
    # ---
        version: 2
    
  • kk-worker-2

    
    network:
        ethernets:
            eth0:
                dhcp4: true
                dhcp6: true
                match:
                    macaddress: 52:54:00:80:6b:21
                set-name: eth0
    # --- added:
            eth1:
                addresses: [192.168.0.103/24]
                gateway4: 192.168.0.1
                dhcp4: no
    # ---
        version: 2
    
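  • Apply the new configuration on each node (netplan does not reload the file by itself):

    sudo netplan apply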

Configure and export the KubeKey artifact

Download the KubeKey installer script

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.1.1 sh -
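
The script drops a kk binary into the current directory; a quick sanity check:

./kk version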

Download ubuntu-20.04-debs-amd64.iso

wget https://github.com/kubesphere/kubekey/releases/download/v3.1.1/ubuntu-20.04-debs-amd64.iso

Write artifact-3.1.1.yaml

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: artifact-v3.1.1
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    osImage: Ubuntu 20.04.4 LTS
    repository:
      iso:
        localPath: "/home/ubuntu/kk_install/ubuntu-20.04-debs-amd64.iso"
        # url: "https://github.com/kubesphere/kubekey/releases/download/v3.1.1/ubuntu-20.04-debs-amd64.iso"
  kubernetesDistributions:
  - type: kubernetes
    version: v1.29.3
  components:
    helm:
      version: v3.14.3
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.13
    calicoctl:
      version: v3.27.3
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 24.0.9
    - type: containerd
      version: 1.7.13
    crictl:
      version: v1.29.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.10.1
    docker-compose:
      version: v2.26.1
  images:
  - docker.io/kubesphere/kube-apiserver:v1.29.3
  - docker.io/kubesphere/kube-controller-manager:v1.29.3
  - docker.io/kubesphere/kube-scheduler:v1.29.3
  - docker.io/kubesphere/kube-proxy:v1.29.3
  - docker.io/kubesphere/pause:3.9
  - docker.io/coredns/coredns:1.9.3
  - docker.io/calico/cni:v3.23.2
  - docker.io/calico/cni:v3.27.3
  - docker.io/calico/kube-controllers:v3.23.2
  - docker.io/calico/kube-controllers:v3.27.3
  - docker.io/calico/node:v3.23.2
  - docker.io/calico/node:v3.27.3
  - docker.io/calico/pod2daemon-flexvol:v3.23.2
  - docker.io/calico/typha:v3.23.2
  - docker.io/kubesphere/flannel:v0.12.0
  - docker.io/openebs/provisioner-localpv:3.3.0
  - docker.io/openebs/linux-utils:3.3.0
  - docker.io/library/haproxy:2.3
  - docker.io/kubesphere/nfs-subdir-external-provisioner:v4.0.2
  - docker.io/kubesphere/k8s-dns-node-cache:1.22.20
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  # https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/images-list.txt
  ##kubesphere-images
  - docker.io/kubesphere/ks-installer:v3.4.1
  - docker.io/kubesphere/ks-apiserver:v3.4.1
  - docker.io/kubesphere/ks-console:v3.4.1
  - docker.io/kubesphere/ks-controller-manager:v3.4.1
  - docker.io/kubesphere/kubectl:v1.22.0
  - docker.io/kubesphere/kubefed:v0.8.1
  - docker.io/kubesphere/tower:v0.2.1
  - docker.io/minio/minio:RELEASE.2019-08-07T01-59-21Z
  - docker.io/minio/mc:RELEASE.2019-08-07T23-14-43Z
  - docker.io/csiplugin/snapshot-controller:v4.0.0
  - docker.io/kubesphere/nginx-ingress-controller:v1.3.1
  - docker.io/mirrorgooglecontainers/defaultbackend-amd64:1.4
  - docker.io/kubesphere/metrics-server:v0.4.2
  - docker.io/library/redis:5.0.14-alpine
  - docker.io/library/haproxy:2.0.25-alpine
  - docker.io/library/alpine:3.14
  - docker.io/osixia/openldap:1.3.0
  - docker.io/kubesphere/netshoot:v1.0
  ##kubeedge-images
  - docker.io/kubeedge/cloudcore:v1.13.0
  - docker.io/kubesphere/iptables-manager:v1.13.0
  - docker.io/kubeedge/iptables-manager:v1.9.2
  - docker.io/kubesphere/edgeservice:v0.3.0
  - docker.io/kubesphere/edgeservice:v0.2.0
  ##gatekeeper-images
  - docker.io/openpolicyagent/gatekeeper:v3.5.2
  ##openpitrix-images
  - docker.io/kubesphere/openpitrix-jobs:v3.3.2
  ##kubesphere-devops-images
  - docker.io/kubesphere/devops-apiserver:ks-v3.4.1
  - docker.io/kubesphere/devops-controller:ks-v3.4.1
  - docker.io/kubesphere/devops-tools:ks-v3.4.1
  - docker.io/kubesphere/ks-jenkins:v3.4.0-2.319.3-1
  - docker.io/jenkins/inbound-agent:4.10-2
  - docker.io/kubesphere/builder-base:v3.2.2
  - docker.io/kubesphere/builder-nodejs:v3.2.0
  - docker.io/kubesphere/builder-maven:v3.2.1-jdk11
  - docker.io/kubesphere/builder-maven:v3.2.0
  - docker.io/kubesphere/builder-python:v3.2.0
  - docker.io/kubesphere/builder-go:v3.2.2-1.18
  - docker.io/kubesphere/builder-go:v3.2.2-1.17
  - docker.io/kubesphere/builder-go:v3.2.2-1.16
  - docker.io/kubesphere/builder-go:v3.2.0
  - docker.io/kubesphere/builder-base:v3.2.2-podman
  - docker.io/kubesphere/builder-nodejs:v3.2.0-podman
  - docker.io/kubesphere/builder-maven:v3.2.1-jdk11-podman
  - docker.io/kubesphere/builder-maven:v3.2.0-podman
  - docker.io/kubesphere/builder-python:v3.2.0-podman
  - docker.io/kubesphere/builder-go:v3.2.0-podman
  - docker.io/kubesphere/builder-go:v3.2.2-1.18-podman
  - docker.io/kubesphere/builder-go:v3.2.2-1.17-podman
  - docker.io/kubesphere/builder-go:v3.2.2-1.16-podman
  - docker.io/kubesphere/s2ioperator:v3.2.1
  - docker.io/kubesphere/s2irun:v3.2.0
  - docker.io/kubesphere/s2i-binary:v3.2.0
  - docker.io/kubesphere/tomcat85-java11-centos7:v3.2.0
  - docker.io/kubesphere/tomcat85-java11-runtime:v3.2.0
  - docker.io/kubesphere/tomcat85-java8-centos7:v3.2.0
  - docker.io/kubesphere/tomcat85-java8-runtime:v3.2.0
  - docker.io/kubesphere/java-11-centos7:v3.2.0
  - docker.io/kubesphere/java-11-runtime:v3.2.0
  - docker.io/kubesphere/java-8-centos7:v3.2.0
  - docker.io/kubesphere/java-8-runtime:v3.2.0
  - docker.io/kubesphere/nodejs-8-centos7:v3.2.0
  - docker.io/kubesphere/nodejs-6-centos7:v3.2.0
  - docker.io/kubesphere/nodejs-4-centos7:v3.2.0
  - docker.io/kubesphere/python-36-centos7:v3.2.0
  - docker.io/kubesphere/python-35-centos7:v3.2.0
  - docker.io/kubesphere/python-34-centos7:v3.2.0
  - docker.io/kubesphere/python-27-centos7:v3.2.0
  - quay.io/argoproj/argocd:v2.3.3
  - quay.io/argoproj/argocd-applicationset:v0.4.1
  - ghcr.io/dexidp/dex:v2.30.2
  - docker.io/library/redis:6.2.6-alpine
  ##kubesphere-monitoring-images
  - docker.io/jimmidyson/configmap-reload:v0.7.1
  - docker.io/prom/prometheus:v2.39.1
  - docker.io/kubesphere/prometheus-config-reloader:v0.55.1
  - docker.io/kubesphere/prometheus-operator:v0.55.1
  - docker.io/kubesphere/kube-rbac-proxy:v0.11.0
  - docker.io/kubesphere/kube-state-metrics:v2.6.0
  - docker.io/prom/node-exporter:v1.3.1
  - docker.io/prom/alertmanager:v0.23.0
  - docker.io/thanosio/thanos:v0.31.0
  - docker.io/grafana/grafana:8.3.3
  - docker.io/kubesphere/kube-rbac-proxy:v0.11.0
  - docker.io/kubesphere/notification-manager-operator:v2.3.0
  - docker.io/kubesphere/notification-manager:v2.3.0
  - docker.io/kubesphere/notification-tenant-sidecar:v3.2.0
  ##kubesphere-logging-images
  - docker.io/kubesphere/elasticsearch-curator:v5.7.6
  - docker.io/kubesphere/opensearch-curator:v0.0.5
  - docker.io/kubesphere/elasticsearch-oss:6.8.22
  - docker.io/opensearchproject/opensearch:2.6.0
  - docker.io/opensearchproject/opensearch-dashboards:2.6.0
  - docker.io/kubesphere/fluentbit-operator:v0.14.0
  - docker.io/library/docker:19.03
  - docker.io/kubesphere/fluent-bit:v1.9.4
  - docker.io/kubesphere/log-sidecar-injector:v1.2.0
  - docker.io/elastic/filebeat:6.7.0
  - docker.io/kubesphere/kube-events-operator:v0.6.0
  - docker.io/kubesphere/kube-events-ruler:v0.6.0
  - docker.io/kubesphere/kube-auditing-operator:v0.2.0
  - docker.io/kubesphere/kube-auditing-webhook:v0.2.0
  ##istio-images
  - docker.io/istio/pilot:1.14.6
  - docker.io/istio/proxyv2:1.14.6
  - docker.io/jaegertracing/jaeger-operator:1.29
  - docker.io/jaegertracing/jaeger-agent:1.29
  - docker.io/jaegertracing/jaeger-collector:1.29
  - docker.io/jaegertracing/jaeger-query:1.29
  - docker.io/jaegertracing/jaeger-es-index-cleaner:1.29
  - docker.io/kubesphere/kiali-operator:v1.50.1
  - docker.io/kubesphere/kiali:v1.50
  # ##example-images
  # - docker.io/library/busybox:1.31.1
  # - docker.io/library/nginx:1.14-alpine
  # - docker.io/joosthofman/wget:1.0
  # - docker.io/nginxdemos/hello:plain-text
  # - docker.io/library/wordpress:4.8-apache
  # - docker.io/mirrorgooglecontainers/hpa-example:latest
  # - docker.io/fluent/fluentd:v1.4.2-2.0
  # - docker.io/library/perl:latest
  # - docker.io/kubesphere/examples-bookinfo-productpage-v1:1.16.2
  # - docker.io/kubesphere/examples-bookinfo-reviews-v1:1.16.2
  # - docker.io/kubesphere/examples-bookinfo-reviews-v2:1.16.2
  # - docker.io/kubesphere/examples-bookinfo-details-v1:1.16.2
  # - docker.io/kubesphere/examples-bookinfo-ratings-v1:1.16.3
  # ##weave-scope-images
  # - docker.io/weaveworks/scope:1.13.0
  registry:
    auths:
      "docker.io":
        username: "username"
        password: "password"

Check the component versions (if a version is not supported, an error like the following occurs):

Failed to download docker binary: curl -L -o /home/ubuntu/kk_install/kubekey/artifact/docker/20.10.8/amd64/docker-20.10.8.tgz https://download.docker.com/linux/static/stable/x86_64/docker-20.10.8.tgz error: No SHA256 found for docker. 20.10.8 is not supported.
17:40:24 KST failed: [LocalHost]
error: Pipeline[ArtifactExportPipeline] execute failed: Module[ArtifactBinariesModule] exec failed:
failed: [LocalHost] [DownloadBinaries] exec failed after 1 retries: Failed to download docker binary: curl -L -o /home/ubuntu/kk_install/kubekey/artifact/docker/20.10.8/amd64/docker-20.10.8.tgz https://download.docker.com/linux/static/stable/x86_64/docker-20.10.8.tgz error: No SHA256 found for docker. 20.10.8 is not supported.

Components reference

Export Artifact

sudo ./kk artifact export -m artifact-3.1.1.yaml -o artifact-3.1.1.tar.gz
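
The export downloads every binary and image in the manifest, so it can take a while; when it finishes, the tarball should be in the working directory:

ls -lh artifact-3.1.1.tar.gz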

Create and edit the config file for the cluster installation

  • Create the config file

    
    sudo ./kk create config --with-kubesphere v3.4.1 --with-kubernetes v1.29.3 -f config-v1.29.3.yaml
    
  • Edit the config file

    
    vi config-v1.29.3.yaml
    
  • Write the config file as follows

    
    apiVersion: kubekey.kubesphere.io/v1alpha2
    kind: Cluster
    metadata:
      name: sample
    spec:
      hosts:
      - {name: kk-repo, address: 192.168.0.100, internalAddress: 192.168.0.100, privateKeyPath: "/home/ubuntu/.ssh/id_rsa_multipass"}
      - {name: kk-master, address: 192.168.0.101, internalAddress: 192.168.0.101, privateKeyPath: "/home/ubuntu/.ssh/id_rsa_multipass"}
      - {name: kk-worker-1, address: 192.168.0.102, internalAddress: 192.168.0.102, privateKeyPath: "/home/ubuntu/.ssh/id_rsa_multipass"}
      - {name: kk-worker-2, address: 192.168.0.103, internalAddress: 192.168.0.103, privateKeyPath: "/home/ubuntu/.ssh/id_rsa_multipass"}
      roleGroups:
        etcd:
        - kk-master
        control-plane:
        - kk-master
        worker:
        - kk-worker-1
        - kk-worker-2
        registry:
        - kk-repo
      controlPlaneEndpoint:
        ## Internal loadbalancer for apiservers
        # internalLoadbalancer: haproxy
    
        domain: lb.kubesphere.local
        # domain: 192.168.0.101
        address: "192.168.0.101"
        port: 6443
      kubernetes:
        version: v1.29.3
        imageRepo: kubesphere
        clusterName: cluster.local
        masqueradeAll: false
        maxPods: 150
        nodeCidrMaskSize: 24
        proxyMode: ipvs
        autoRenewCerts: true
        containerManager: containerd
        featureGates:
          RotateKubeletServerCertificate: true
        apiserverArgs:
        - default-not-ready-toleration-seconds=30
        - default-unreachable-toleration-seconds=30
        controllerManagerArgs:
        - node-monitor-period=2s
        - node-monitor-grace-period=16s
        kubeletConfiguration:
          nodeStatusUpdateFrequency: 4s
      # etcd:
        # type: kubekey
      network:
        plugin: calico
        calico:
          ipipMode: Always
          vxlanMode: Never
          vethMTU: 1440
        kubePodsCIDR: 10.233.64.0/18
        kubeServiceCIDR: 10.233.0.0/18
        ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
        multusCNI:
          enabled: false
      registry:
        type: harbor
        auths:
          "cr.harbor.kubekey.com":
            username: admin
            password: Harbor12345
        privateRegistry: "cr.harbor.kubekey.com"
        namespaceOverride: "kubesphereio"
        registryMirrors: []
        insecureRegistries: ["cr.harbor.kubekey.com"]
      addons: []
    ---
    apiVersion: installer.kubesphere.io/v1alpha1
    kind: ClusterConfiguration
    metadata:
      name: ks-installer
      namespace: kubesphere-system
      labels:
        version: v3.4.1
    spec:
      persistence:
        storageClass: ""
      authentication:
        jwtSecret: ""
      zone: ""
      local_registry: ""
      namespace_override: ""
      # dev_tag: ""
      etcd:
        monitoring: false
        endpointIps: localhost
        port: 2379
        tlsEnable: true
      common:
        core:
          console:
            enableMultiLogin: true
            port: 30880
            type: NodePort
        # apiserver:
        #  resources: {}
        # controllerManager:
        #  resources: {}
        redis:
          enabled: false
          volumeSize: 2Gi
        openldap:
          enabled: false
          volumeSize: 2Gi
        minio:
          volumeSize: 20Gi
        monitoring:
          # type: external
          endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
          GPUMonitoring:
            enabled: false
        gpu:
          kinds:
          - resourceName: "nvidia.com/gpu"
            resourceType: "GPU"
            default: true
        es:
          # master:
          #   volumeSize: 4Gi
          #   replicas: 1
          #   resources: {}
          # data:
          #   volumeSize: 20Gi
          #   replicas: 1
          #   resources: {}
          logMaxAge: 7
          elkPrefix: logstash
          basicAuth:
            enabled: false
            username: ""
            password: ""
          externalElasticsearchHost: ""
          externalElasticsearchPort: ""
      alerting:
        enabled: false
        # thanosruler:
        #   replicas: 1
        #   resources: {}
      auditing:
        enabled: false
        # operator:
        #   resources: {}
        # webhook:
        #   resources: {}
      devops:
        enabled: false
        # resources: {}
        jenkinsMemoryLim: 8Gi
        jenkinsMemoryReq: 4Gi
        jenkinsVolumeSize: 8Gi
      events:
        enabled: false
        # operator:
        #   resources: {}
        # exporter:
        #   resources: {}
        # ruler:
        #   enabled: true
        #   replicas: 2
        #   resources: {}
      logging:
        enabled: false
        logsidecar:
          enabled: true
          replicas: 2
          # resources: {}
      metrics_server:
        enabled: false
      monitoring:
        storageClass: ""
        node_exporter:
          port: 9100
          # resources: {}
        # kube_rbac_proxy:
        #   resources: {}
        # kube_state_metrics:
        #   resources: {}
        # prometheus:
        #   replicas: 1
        #   volumeSize: 20Gi
        #   resources: {}
        #   operator:
        #     resources: {}
        # alertmanager:
        #   replicas: 1
        #   resources: {}
        # notification_manager:
        #   resources: {}
        #   operator:
        #     resources: {}
        #   proxy:
        #     resources: {}
        gpu:
          nvidia_dcgm_exporter:
            enabled: false
            # resources: {}
      multicluster:
        clusterRole: none
      network:
        networkpolicy:
          enabled: false
        ippool:
          type: none
        topology:
          type: none
      openpitrix:
        store:
          enabled: false
      servicemesh:
        enabled: false
        istio:
          components:
            ingressGateways:
            - name: istio-ingressgateway
              enabled: false
            cni:
              enabled: false
      edgeruntime:
        enabled: false
        kubeedge:
          enabled: false
          cloudCore:
            cloudHub:
              advertiseAddress:
                - ""
            service:
              cloudhubNodePort: "30000"
              cloudhubQuicNodePort: "30001"
              cloudhubHttpsNodePort: "30002"
              cloudstreamNodePort: "30003"
              tunnelNodePort: "30004"
            # resources: {}
            # hostNetWork: false
          iptables-manager:
            enabled: true
            mode: "external"
            # resources: {}
          # edgeService:
          #   resources: {}
      terminal:
        timeout: 600
    

Copy id_rsa_multipass to the repo VM so it can connect to each node

multipass copy-files $HOME/.ssh/id_rsa_multipass kk-repo:/home/ubuntu/.ssh/id_rsa_multipass
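
SSH is strict about private-key permissions, so it is worth tightening them on the repo VM after copying (not part of the original steps):

multipass exec kk-repo -- chmod 600 /home/ubuntu/.ssh/id_rsa_multipass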

Install the registry

sudo ./kk init registry -f config-v1.29.3.yaml -a artifact-3.1.1.tar.gz

Harbor address: [IP where Harbor is installed]:80

[ERROR] ssh error

  • If SSH to a node fails, it is because the root password does not match.
  • It appears that after a Multipass VM is created, the root password has to be set:
    
    sudo passwd root
    

Copy and update the Harbor certificate (harbor curl: (60) SSL certificate problem: unable to get local issuer certificate)

If the certificate is not updated, an error like the following occurs:

[WARNING ImagePull]: failed to pull image cr.harbor.kubekey.com/kubesphereio/kube-apiserver:v1.29.3: output: E0501 22:53:12.616927    4525 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"cr.harbor.kubekey.com/kubesphereio/kube-apiserver:v1.29.3\": failed to resolve reference \"cr.harbor.kubekey.com/kubesphereio/kube-apiserver:v1.29.3\": failed to do request: Head \"https://cr.harbor.kubekey.com:443/v2/kubesphereio/kube-apiserver/manifests/v1.29.3\": tls: failed to verify certificate: x509: certificate signed by unknown authority" image="cr.harbor.kubekey.com/kubesphereio/kube-apiserver:v1.29.3"
time="2025-05-01T22:53:12+09:00" level=fatal msg="pulling image: failed to pull and unpack image \"cr.harbor.kubekey.com/kubesphereio/kube-apiserver:v1.29.3\": failed to resolve reference \"cr.harbor.kubekey.com/kubesphereio/kube-apiserver:v1.29.3\": failed to do request: Head \"https://cr.harbor.kubekey.com:443/v2/kubesphereio/kube-apiserver/manifests/v1.29.3\": tls: failed to verify certificate: x509: certificate signed by unknown authority"
, error: exit status 1

Copy the certificate on the repo VM and to each node

sudo cp /etc/docker/certs.d/cr.harbor.kubekey.com/ca.crt /usr/local/share/ca-certificates/harbor-ca.crt
sudo scp -i /home/ubuntu/.ssh/id_rsa_multipass /usr/local/share/ca-certificates/harbor-ca.crt root@192.168.0.101:/usr/local/share/ca-certificates/harbor-ca.crt
sudo scp -i /home/ubuntu/.ssh/id_rsa_multipass /usr/local/share/ca-certificates/harbor-ca.crt root@192.168.0.102:/usr/local/share/ca-certificates/harbor-ca.crt
sudo scp -i /home/ubuntu/.ssh/id_rsa_multipass /usr/local/share/ca-certificates/harbor-ca.crt root@192.168.0.103:/usr/local/share/ca-certificates/harbor-ca.crt

Update the CA certificates on each node

sudo update-ca-certificates

Verify the certificate was applied

ls -lrt /etc/ssl/certs
- harbor-ca.pem -> /usr/local/share/ca-certificates/harbor-ca.crt
- ca-certificates.crt

Restart containerd

sudo systemctl restart containerd
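
To confirm the node now trusts Harbor's certificate, a quick check against the registry API (a sketch using the admin credentials from the config above):

# should return the catalog without a TLS verification error
curl -sS -u admin:Harbor12345 https://cr.harbor.kubekey.com/v2/_catalog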

Create the Harbor projects

Download the sample Bash script

curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh

Edit the Harbor project list and change the url (default: https://dockerhub.kubekey.local)

  • Edit the file

    
    vi create_project_harbor.sh
    
  • Change the url (default https://dockerhub.kubekey.local) to your registry address

    
    #!/usr/bin/env bash
    
    # Copyright 2018 The KubeSphere Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    url="https://cr.harbor.kubekey.com"  #Change the value of url to https://cr.harbor.kubekey.com.
    user="admin"
    passwd="Harbor12345"
    
    harbor_projects=(library
        kubesphereio
        kubesphere
        argoproj
        calico
        coredns
        openebs
        csiplugin
        minio
        mirrorgooglecontainers
        osixia
        prom
        thanosio
        jimmidyson
        grafana
        elastic
        istio
        jaegertracing
        jenkins
        weaveworks
        openpitrix
        joosthofman
        nginxdemos
        fluent
        kubeedge
        openpolicyagent
    )
    
    for project in "${harbor_projects[@]}"; do
        echo "creating $project"
        curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k #Add -k at the end of the curl command.
    done
    

Make the script executable

chmod +x create_project_harbor.sh

Run the script

./create_project_harbor.sh
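
You can list the projects afterwards to confirm they were created (same Harbor API and credentials the script uses; -k skips TLS verification as in the script):

curl -sS -u admin:Harbor12345 -k "https://cr.harbor.kubekey.com/api/v2.0/projects?page_size=50" | grep '"name"'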

Install the cluster

sudo ./kk create cluster -f config-v1.29.3.yaml -a artifact-3.1.1.tar.gz

Install operating system packages

sudo ./kk create cluster -f config-v1.29.3.yaml -a artifact-3.1.1.tar.gz --with-packages

Pushing images separately

sudo ./kk artifact image push -f config-v1.29.3.yaml -a artifact-3.1.1.tar.gz

Adding --skip-push-images skips the step of pushing images to Harbor:

sudo ./kk create cluster -f config-v1.29.3.yaml -a artifact-3.1.1.tar.gz --skip-push-images

[ERROR] Unauthorized error when pushing images to Harbor

  • Log in again:
    
    docker login [your.host.com]:port -u username -p password
    sudo docker login https://cr.harbor.kubekey.com -u admin -p Harbor12345
    

KubeKey command reference

Check the logs while the cluster installs

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

When using Kubernetes as a regular (non-root) user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If running kubectl without sudo from a regular account produces the error below:

  • [ERROR] error loading config file /etc/kubernetes/admin.conf: open /etc/kubernetes/admin.conf: permission denied
    • Run the following command to use kubectl without sudo:
      
      export KUBECONFIG=$HOME/.kube/config
      
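With the kubeconfig in place, a quick check that the cluster responds:

kubectl get nodes -o wide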

[ERROR] error making pod data directories: mkdir /var/lib/kubelet/pods/86cfe394-ba32-4a9f-ad65-1fb21f98a4ba: read-only file system

# give kubelet ownership of its pod data directory and restart it (run as root)
chown -R kubelet:kubelet /var/lib/kubelet/pods
chmod 750 /var/lib/kubelet/pods
systemctl restart kubelet

Cluster installation complete

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.0.101:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2025-05-01 22:32:53
#####################################################
22:32:54 KST success: [kk-master]
22:32:54 KST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

        kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

Post-installation issues

When the cluster cannot be reached after a restart

Unable to connect to the server: dial tcp: lookup lb.kubesphere.local on 127.0.0.53:53: server misbehaving
  • Edit /etc/hosts on each node (kk-master, kk-worker-1, kk-worker-2)

    
    sudo vi /etc/hosts
    
  • Add the following entries:

    • 192.168.0.100 cr.harbor.kubekey.com
    • 192.168.0.101 lb.kubesphere.local

      
      # Your system has configured 'manage_etc_hosts' as True.
      # As a result, if you wish for changes to this file to persist
      # then you will need to either
      # a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
      # b.) change or remove the value of 'manage_etc_hosts' in
      #     /etc/cloud/cloud.cfg or cloud-config from user-data
      #
      127.0.1.1 kk-worker-1 kk-worker-1
      127.0.0.1 localhost
      
      ## added
      192.168.0.100 cr.harbor.kubekey.com
      192.168.0.101 lb.kubesphere.local
      ##
      
      # The following lines are desirable for IPv6 capable hosts
      ::1 localhost ip6-localhost ip6-loopback
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
      
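  • A quick check that the new entries resolve on each node:

    getent hosts lb.kubesphere.local cr.harbor.kubekey.com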

However, /etc/hosts is reset on every reboot.

  • Edit /etc/cloud/templates/hosts.debian.tmpl so the change persists across restarts

    
    sudo vi /etc/cloud/templates/hosts.debian.tmpl
    
  • Add the following entries:

    • 192.168.0.100 cr.harbor.kubekey.com
    • 192.168.0.101 lb.kubesphere.local

      
      # Your system has configured 'manage_etc_hosts' as True.
      # As a result, if you wish for changes to this file to persist
      # then you will need to either
      # a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
      # b.) change or remove the value of 'manage_etc_hosts' in
      #     /etc/cloud/cloud.cfg or cloud-config from user-data
      #
      127.0.1.1 kk-worker-1 kk-worker-1
      127.0.0.1 localhost
      
      ## added
      192.168.0.100 cr.harbor.kubekey.com
      192.168.0.101 lb.kubesphere.local
      ##
      
      # The following lines are desirable for IPv6 capable hosts
      ::1 localhost ip6-localhost ip6-loopback
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
      

When Harbor login fails or the cluster cannot pull images

  • Restart Harbor with docker-compose:

    
    sudo -i
    cd /opt/harbor
    docker-compose restart
    
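  • Check that all Harbor services came back up:

    docker-compose ps
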
This post is licensed under CC BY 4.0 by the author.