What is RKE2?

RKE2 is an enterprise-grade Kubernetes distribution developed by Rancher,

a security-focused K8s designed in particular for the security and compliance requirements of the U.S. federal government.

  • Security and compliance
    • Can pass the CIS Kubernetes Benchmark v1.7 / v1.8 with default settings alone
    • Supports FIPS 140-2 compliance
    • Regular Trivy-based CVE scanning in the build pipeline
  • Architecture
    • Control-plane components run as static pods managed by the kubelet (see the check after this list)
    • containerd is the default container runtime
    • A hardened Kubernetes with unnecessary components removed
  • Operating modes
    • Can run standalone
    • Can be operated integrated with the Rancher platform
  • Components
    • K8s
      • API Server, Controller Manager, Scheduler
      • Proxy, Kubelet
    • etcd
    • runc
    • containerd/cri
    • CNI: Canal
    • CoreDNS
    • Ingress NGINX Controller and/or Traefik
    • Metrics Server
    • Helm
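
Because the control plane runs as kubelet-managed static pods, they can be inspected directly on a server node. A quick check (manifest path per the RKE2 docs):

```bash
# static pod manifests that RKE2 writes for the kubelet on a server node
ls /var/lib/rancher/rke2/agent/pod-manifests
# expect entries such as etcd.yaml, kube-apiserver.yaml,
# kube-controller-manager.yaml, kube-scheduler.yaml
```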

RKE2 Hands-On

Deploying the lab environment

PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> pwd

Path
----
C:\Users\bom\Desktop\스터디\7week\k8s-rke2

PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> vagrant up
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
==> k8s-node1: Preparing master VM for linked clones...
    k8s-node1: This is a one time operation. Once the master VM is prepared,
    k8s-node1: it will be used as a base for linked clones, making the creation
    k8s-node1: of new VMs take milliseconds on a modern system.
==> k8s-node1: Importing base box 'bento/rockylinux-9'...

################### (output omitted) ####################

PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> vagrant status
Current machine states:

k8s-node1                 running (virtualbox)
k8s-node2                 running (virtualbox)

Installation

  • Installing the RKE2 server node

      PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> vagrant ssh k8s-node1
    
      This system is built by the Bento project by Chef Software
      More information can be found at https://github.com/chef/bento
    
      Use of this system is acceptance of the OS vendor EULA and License Agreements.
      [root@k8s-node1 ~]# vi install.sh
      [root@k8s-node1 ~]# ll
      total 28
      -rw-r--r--. 1 root root 25291 Feb 18 14:14 install.sh
      [root@k8s-node1 ~]# chmod +x install.sh
      [root@k8s-node1 ~]# INSTALL_RKE2_CHANNEL=v1.33 ./install.sh
    
      [root@k8s-node1 ~]# rke2 --version
      rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
      go version go1.24.12 X:boringcrypto
    
      [root@k8s-node1 ~]# dnf repolist
      repo id                                                              repo name
      appstream                                                            Rocky Linux 9 - AppStream
      baseos                                                               Rocky Linux 9 - BaseOS
      extras                                                               Rocky Linux 9 - Extras
      rancher-rke2-1.33-stable                                             Rancher RKE2 1.33 (v1.33)
      rancher-rke2-common-stable                                           Rancher RKE2 Common (v1.33)
    
      [root@k8s-node1 ~]# cat /etc/yum.repos.d/rancher-rke2.repo
      [rancher-rke2-common-stable]
      name=Rancher RKE2 Common (v1.33)
      baseurl=https://rpm.rancher.io/rke2/stable/common/centos/9/noarch
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://rpm.rancher.io/public.key
      [rancher-rke2-1.33-stable]
      name=Rancher RKE2 1.33 (v1.33)
      baseurl=https://rpm.rancher.io/rke2/stable/1.33/centos/9/x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://rpm.rancher.io/public.key
    
      [root@k8s-node1 ~]# cat /etc/rancher/rke2/config.yaml
      write-kubeconfig-mode: "0644"
    
      debug: true
    
      cni: canal
    
      bind-address: 192.168.10.11
      advertise-address: 192.168.10.11
      node-ip: 192.168.10.11
    
      disable-cloud-controller: true
    
      disable:
        - servicelb
        - rke2-coredns-autoscaler
        - rke2-ingress-nginx
        - rke2-snapshot-controller
        - rke2-snapshot-controller-crd
        - rke2-snapshot-validation-webhook
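
Not captured in the transcript above: once install.sh has run and config.yaml is in place, the server service still has to be enabled and started, roughly as follows (binary and kubeconfig paths per the RKE2 docs):

```bash
# enable and start the server; the first start pulls images and bootstraps etcd
systemctl enable --now rke2-server.service

# kubectl shipped with RKE2 and the kubeconfig it generates
export PATH=$PATH:/var/lib/rancher/rke2/bin
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get nodes
```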
    
    
# Monitoring

Every 2.0s: pstree -a                                                                                  k8s-node1: Wed Feb 18 14:18:15 2026

systemd --switched-root --system --deserialize 31
  |-NetworkManager --no-daemon
  |   `-2*[{NetworkManager}]
  |-VBoxDRMClient
  |   `-4*[{VBoxDRMClient}]
  |-VBoxService --pidfile /var/run/vboxadd-service.sh
  |   `-8*[{VBoxService}]
  |-agetty -o -p -- \\u --noclear - linux

[root@k8s-node1 ~]# journalctl -u rke2-server -f
Feb 18 14:17:57 k8s-node1 systemd[1]: Starting Rancher Kubernetes Engine v2 (server)...
Feb 18 14:17:57 k8s-node1 rke2[6907]: time="2026-02-18T14:17:57+09:00" level=warning msg="not running in CIS mode"
Feb 18 14:17:57 k8s-node1 rke2[6907]: time="2026-02-18T14:17:57+09:00" level=info msg="Applying Pod Security Admission Configuration"
Feb 18 14:17:57 k8s-node1 rke2[6907]: time="2026-02-18T14:17:57+09:00" level=info msg="Starting rke2 v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)"

[root@k8s-node1 ~]# systemctl status rke2-server --no-pager
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
     Loaded: loaded (/usr/lib/systemd/system/rke2-server.service; enabled; preset: disabled)
     Active: active (running) since Wed 2026-02-18 14:21:54 KST; 3s ago
       Docs: https://github.com/rancher/rke2#readme
    Process: 6905 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 6906 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)

[root@k8s-node1 ~]# k get pod -A
NAMESPACE     NAME                                         READY   STATUS             RESTARTS      AGE
kube-system   etcd-k8s-node1                               1/1     Running            0             4m27s
kube-system   helm-install-rke2-canal-4sndc                0/1     Completed          0             2m2s
kube-system   helm-install-rke2-coredns-ctxlg              0/1     Completed          0             2m2s
kube-system   helm-install-rke2-metrics-server-hklqz       0/1     Completed          0             2m2s
kube-system   helm-install-rke2-runtimeclasses-57qwm       0/1     Completed          0             2m2s
kube-system   kube-apiserver-k8s-node1                     1/1     Running            0             4m27s
kube-system   kube-controller-manager-k8s-node1            1/1     Running            0             3m49s
kube-system   kube-proxy-k8s-node1                         1/1     Running            0             4m27s
kube-system   kube-scheduler-k8s-node1                     1/1     Running            0             3m49s
kube-system   rke2-canal-vwnxr                             1/2     CrashLoopBackOff   3 (39s ago)   112s
kube-system   rke2-coredns-rke2-coredns-559595db99-sgwf7   1/1     Running            0             114s
kube-system   rke2-metrics-server-fdcdf575d-6tb9x          1/1     Running            0             83s

  • rke2 directory: addon Helm charts and certificates

      [root@k8s-node1 ~]# tree /var/lib/rancher/rke2 -L 1
      /var/lib/rancher/rke2
      ├── agent
      ├── bin -> /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b2872361ec5/bin
      ├── data
      └── server
    
      4 directories, 0 files
    
      [root@k8s-node1 ~]# tree /var/lib/rancher/rke2/server/ -L 1
      /var/lib/rancher/rke2/server/
      ├── agent-token -> /var/lib/rancher/rke2/server/token
      ├── cred
      ├── db
      ├── etc
      ├── manifests
      ├── node-token -> /var/lib/rancher/rke2/server/token
      ├── tls
      └── token
    
      5 directories, 3 files
    
      [root@k8s-node1 ~]# cat /var/lib/rancher/rke2/server/token
      K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
      [root@k8s-node1 ~]# kubectl get crd | grep -E 'helm|addon'
      addons.k3s.cattle.io                                    2026-02-18T05:20:08Z
      helmchartconfigs.helm.cattle.io                         2026-02-18T05:20:08Z
      helmcharts.helm.cattle.io                               2026-02-18T05:20:08Z
    
      [root@k8s-node1 ~]# cat /var/lib/rancher/rke2/agent/etc/crictl.yaml
      runtime-endpoint: unix:///run/k3s/containerd/containerd.sock
    
      [root@k8s-node1 ~]# crictl images
      IMAGE                                           TAG                            IMAGE ID            SIZE
      docker.io/rancher/hardened-calico               v3.31.3-build20260206          952c7453d7729       233MB
      docker.io/rancher/hardened-coredns              v1.14.1-build20260206          ad73f5d41fead       30.3MB
      docker.io/rancher/hardened-etcd                 v3.5.26-k3s1-build20260126     4b6a4b1e13f3e       18.7MB
      docker.io/rancher/hardened-flannel              v0.28.1-build20260206          170c73d2937d4       23MB
      docker.io/rancher/hardened-k8s-metrics-server   v0.8.1-build20260206           7a4190a831781       21.4MB
      docker.io/rancher/hardened-kubernetes           v1.33.8-rke2r1-build20260210   ceea5f6055309       201MB
      docker.io/rancher/klipper-helm                  v0.9.14-build20260210          2e2c8cbfb79b8       64.1MB
      docker.io/rancher/mirrored-pause                3.6                            6270bb605e12e       301kB
      docker.io/rancher/rke2-runtime                  v1.33.8-rke2r1                 b1d89cfc44e65       100MB
    
      # Check the security-related settings
      [root@k8s-node1 ~]# cat /var/lib/rancher/rke2/agent/etc/kubelet.conf.d/00-rke2-defaults.conf
      address: 192.168.10.11
      apiVersion: kubelet.config.k8s.io/v1beta1
      authentication:
        anonymous:
          enabled: false
        webhook:
          cacheTTL: 2m0s
          enabled: true
        x509:
          clientCAFile: /var/lib/rancher/rke2/agent/client-ca.crt
      authorization:
        mode: Webhook
        webhook:
          cacheAuthorizedTTL: 5m0s
          cacheUnauthorizedTTL: 30s
      cgroupDriver: systemd
      clusterDNS:
      - 10.43.0.10
      clusterDomain: cluster.local
      containerRuntimeEndpoint: unix:///run/k3s/containerd/containerd.sock
      cpuManagerReconcilePeriod: 10s
      crashLoopBackOff: {}
    
      failSwapOn: false                  # vanilla Kubernetes refuses to start with swap enabled, but RKE2 is configured to allow it
      nodeStatusUpdateFrequency: 10s     # the kubelet reports "I'm alive" to the control plane every 10 seconds
      serializeImagePulls: false         # allow parallel image pulls
    
      ## (omitted) ###

Node Management

  • Adding a worker node

    • Check the join token (a reachability check follows the output below)

        [root@k8s-node1 ~]# cat /var/lib/rancher/rke2/server/node-token
        K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
      
        [root@k8s-node1 ~]# ss -tnlp | grep 9345
        LISTEN 0      4096       127.0.0.1:9345       0.0.0.0:*    users:(("rke2",pid=6907,fd=7))
        LISTEN 0      4096   192.168.10.11:9345       0.0.0.0:*    users:(("rke2",pid=6907,fd=6))
        LISTEN 0      4096           [::1]:9345          [::]:*    users:(("rke2",pid=6907,fd=8))
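
Before joining, it's worth confirming that the worker can actually reach the server; a minimal sketch from k8s-node2, using the ports seen above (9345 is the supervisor/registration API, 6443 the apiserver):

```bash
# plain TCP reachability checks from the worker to the server
nc -vz 192.168.10.11 9345
nc -vz 192.168.10.11 6443
```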
    • Install and join the worker node

        [root@k8s-node2 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -
        [INFO]  using stable RPM repositories
        [INFO]  using 1.33 series from channel stable
        Rancher RKE2 Common (v1.33)                                                                               2.1 kB/s | 659  B     00:00
        Rancher RKE2 Common (v1.33)                                                                               8.0 kB/s | 2.4 kB     00:00
        Importing GPG key 0xE257814A:
         Userid     : "Rancher (CI) <ci@rancher.com>"
         Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
         From       : https://rpm.rancher.io/public.key
        Rancher RKE2 Common (v1.33)                                                                               3.6 kB/s | 2.6 kB     00:00
        Rancher RKE2 1.33 (v1.33)                                                                                 2.3 kB/s | 659  B     00:00
        Rancher RKE2 1.33 (v1.33)                                                                                  19 kB/s | 2.4 kB     00:00
        Importing GPG key 0xE257814A:
         Userid     : "Rancher (CI) <ci@rancher.com>"
         Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
         From       : https://rpm.rancher.io/public.key
        Rancher RKE2 1.33 (v1.33)                                                                                 8.4 kB/s | 5.9 kB     00:00
        Dependencies resolved.
        ==========================================================================================================================================
         Package                      Architecture           Version                             Repository                                  Size
        ==========================================================================================================================================
        Installing:
         rke2-agent                   x86_64                 1.33.8~rke2r1-0.el9                 rancher-rke2-1.33-stable                   8.3 k
        Installing dependencies:
         rke2-common                  x86_64                 1.33.8~rke2r1-0.el9                 rancher-rke2-1.33-stable                    27 M
         rke2-selinux                 noarch                 0.22-1.el9                          rancher-rke2-common-stable                  22 k
      
        Transaction Summary
        ==========================================================================================================================================
        Install  3 Packages
      
        [root@k8s-node2 ~]# TOKEN=K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
        [root@k8s-node2 ~]# mkdir -p /etc/rancher/rke2/
        [root@k8s-node2 ~]# cat << EOF > /etc/rancher/rke2/config.yaml
        server: https://192.168.10.11:9345
        token: $TOKEN
        EOF
      
        [root@k8s-node2 ~]# systemctl enable --now rke2-agent.service
        Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /usr/lib/systemd/system/rke2-agent.service.
    • Verify the worker node

        [k8s-node1]
      
        [root@k8s-node1 ~]# kubectl get node -owide
        NAME        STATUS   ROLES                       AGE     VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
        k8s-node1   Ready    control-plane,etcd,master   19m     v1.33.8+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
        k8s-node2   Ready    <none>                      7m17s   v1.33.8+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
      
        [root@k8s-node1 ~]# kubectl get pod -n kube-system -owide | grep k8s-node2
        kube-proxy-k8s-node2                         1/1     Running            0             7m48s   192.168.10.12   k8s-node2   <none>           <none>
        rke2-canal-wr4d8                             1/2     CrashLoopBackOff   6 (60s ago)   7m48s   192.168.10.12   k8s-node2   <none>           <none>
      
        [k8s-node2]
        [root@k8s-node2 agent]# tree /var/lib/rancher/rke2 -L 1
        /var/lib/rancher/rke2
        ├── agent
        ├── bin -> /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b2872361ec5/bin
        ├── data
        └── server
      
        4 directories, 0 files
      
        [root@k8s-node2 agent]# systemctl status rke2-agent.service --no-pager
        ● rke2-agent.service - Rancher Kubernetes Engine v2 (agent)
             Loaded: loaded (/usr/lib/systemd/system/rke2-agent.service; enabled; preset: disabled)
             Active: active (running) since Wed 2026-02-18 14:31:50 KST; 8min ago
               Docs: https://github.com/rancher/rke2#readme
            Process: 6711 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
            Process: 6712 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
           Main PID: 6713 (rke2)
              Tasks: 61
             Memory: 2.3G
                CPU: 53.113s
      
         [root@k8s-node2 agent]# ln -s /var/lib/rancher/rke2/bin/containerd /usr/local/bin/containerd
        ln -s /var/lib/rancher/rke2/bin/crictl /usr/local/bin/crictl
        ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
  • Removing a worker node → re-adding it

    • Drain the node before removal

        [root@k8s-node1 ~]# kubectl drain k8s-node2 --ignore-daemonsets --delete-emptydir-data
        node/k8s-node2 cordoned
        Warning: ignoring DaemonSet-managed Pods: kube-system/rke2-canal-wr4d8
        node/k8s-node2 drained
        [root@k8s-node1 ~]# kubectl delete node k8s-node2
        node "k8s-node2" deleted
    • Remove the worker node (uninstall RKE2 on it)

        [root@k8s-node2 agent]# systemctl stop rke2-agent
        [root@k8s-node2 agent]# ls -l /usr/bin/rke2*
        -rwxr-xr-x. 1 root root 124432768 Feb 14 04:11 /usr/bin/rke2
        -rwxr-xr-x. 1 root root      3373 Feb 18 02:48 /usr/bin/rke2-killall.sh
        -rwxr-xr-x. 1 root root      5606 Feb 18 02:48 /usr/bin/rke2-uninstall.sh
        [root@k8s-node2 agent]# cat /usr/bin/rke2-uninstall.sh
        #!/bin/sh
        set -ex
      
        # helper function for timestamped logging
        log() {
            echo "[$(date +'%Y-%m-%d %H:%M:%S')] $*"
        }
      
        # helper function for logging error and exiting with a message
        error() {
            log "ERROR: $*" >&2
            exit 1
        }
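      
        # (step omitted from the transcript) the uninstall script was then actually
        # run, which is what leaves /etc/rancher and /var/lib/rancher empty below:
        /usr/bin/rke2-uninstall.sh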
      
        [root@k8s-node2 agent]# tree /etc/rancher
        /etc/rancher [error opening dir]
      
        0 directories, 0 files
        [root@k8s-node2 agent]# tree /var/lib/rancher
        /var/lib/rancher [error opening dir]
      
    • Re-add the worker node

        [root@k8s-node2 agent]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -
        shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
        [INFO]  using stable RPM repositories
        [INFO]  using 1.33 series from channel stable
        Rancher RKE2 Common (v1.33)                                                                               2.3 kB/s | 659  B     00:00
        Rancher RKE2 1.33 (v1.33)                                                                                 1.5 kB/s | 659  B     00:00
        Dependencies resolved.
        ==========================================================================================================================================
         Package                      Architecture           Version                             Repository                                  Size
        ==========================================================================================================================================
        Installing:
         rke2-agent                   x86_64                 1.33.8~rke2r1-0.el9                 rancher-rke2-1.33-stable                   8.3 k
        Installing dependencies:
         rke2-common                  x86_64                 1.33.8~rke2r1-0.el9                 rancher-rke2-1.33-stable                    27 M
         rke2-selinux                 noarch                 0.22-1.el9                          rancher-rke2-common-stable                  22 k
      
        Transaction Summary
        ==========================================================================================================================================
        Install  3 Packages
      
        [root@k8s-node2 agent]# TOKEN=K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
        [root@k8s-node2 agent]# echo $TOKEN
        K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
      
        [root@k8s-node2 agent]# cat << EOF > /etc/rancher/rke2/config.yaml
        server: https://192.168.10.11:9345
        token: $TOKEN
        EOF
        [root@k8s-node2 agent]# cat /etc/rancher/rke2/config.yaml
        server: https://192.168.10.11:9345
        token: K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
        [root@k8s-node2 agent]# systemctl enable --now rke2-agent.service
        Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /usr/lib/systemd/system/rke2-agent.service.
      
        [root@k8s-node2 agent]# journalctl -u rke2-agent -f
  • Deploy a sample application (a sketch of the manifest used follows the output below)

      [root@k8s-node1 ~]# kubectl get deploy,pod,svc,ep -owide
      Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
      NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
      deployment.apps/webpod   0/2     2            0           3s    webpod       traefik/whoami   app=webpod
    
      NAME                          READY   STATUS              RESTARTS   AGE   IP       NODE        NOMINATED NODE   READINESS GATES
      pod/webpod-697b545f57-4nnh7   0/1     ContainerCreating   0          3s    <none>   k8s-node2   <none>           <none>
      pod/webpod-697b545f57-tftg7   0/1     Pending             0          3s    <none>   k8s-node1   <none>           <none>
    
      NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
      service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        27m   <none>
      service/webpod       NodePort    10.43.182.94   <none>        80:30000/TCP   3s    app=webpod
    
      NAME                   ENDPOINTS            AGE
      endpoints/kubernetes   192.168.10.11:6443   27m
      endpoints/webpod       <none>               3s
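
The manifest itself is not in the transcript; the following is a minimal reconstruction from the output above (name, image, selector and NodePort 30000 are taken from it, everything else is assumed):

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      containers:
      - name: webpod
        image: traefik/whoami    # serves HTTP on port 80 by default
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
spec:
  type: NodePort
  selector:
    app: webpod
  ports:
  - port: 80
    nodePort: 30000
EOF
```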

Upgrades

  • Certificate management and manual rotation

    • Certificate management summary

      • Client/server certificate validity: 365 days from the issue date

      • Conditions for automatic renewal:

        • the certificate has already expired, or

        • it is within 120 days of expiry

          → renewed automatically when RKE2 restarts

      • Automatic renewal behavior: the existing key is reused and only the validity period is extended

      • To replace the key together with the certificate: a manual rotation via the rotate command is required

      • On entering the 120-day window, a Kubernetes event CertificateExpirationWarning is emitted (see the check after this list)
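
A quick way to look for that event (the reason string as named above):

```bash
kubectl get events -A --field-selector reason=CertificateExpirationWarning
```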

    • Check node certificates and their expiry dates

      [k8s-node1]
      
      [root@k8s-node1 ~]# rke2 certificate check --output table
      INFO[0000] Server detected, checking agent and server certificates
      
      FILENAME                      SUBJECT                          USAGES       EXPIRES                  RESIDUAL TIME   STATUS
      client-controller.crt         system:kube-controller-manager   ClientAuth   Feb 18, 2027 05:17 UTC   1 year          OK
      client-controller.crt         rke2-client-ca@1771391877        CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
      kube-controller-manager.crt   kube-controller-manager          ServerAuth   Feb 18, 2027 05:17 UTC   1 year          OK
      kube-controller-manager.crt   rke2-server-ca@1771391877        CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
      client-scheduler.crt          system:kube-scheduler            ClientAuth   Feb 18, 2027 05:17 UTC   1 year          OK
      
      [k8s-node2]
      
      [root@k8s-node2 agent]# rke2 certificate check --output table
      INFO[0000] Server detected, checking agent and server certificates
      
      FILENAME                     SUBJECT                     USAGES       EXPIRES                  RESIDUAL TIME   STATUS
      client-kube-proxy.crt        system:kube-proxy           ClientAuth   Feb 18, 2027 05:46 UTC   1 year          OK
      client-kube-proxy.crt        rke2-client-ca@1771391877   CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
      client-kubelet.crt           system:node:k8s-node2       ClientAuth   Feb 18, 2027 05:46 UTC   1 year          OK
      client-kubelet.crt           rke2-client-ca@1771391877   CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
      serving-kubelet.crt          k8s-node2                   ServerAuth   Feb 18, 2027 05:46 UTC   1 year          OK
      serving-kubelet.crt          rke2-server-ca@1771391877   CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
      client-rke2-controller.crt   system:rke2-controller      ClientAuth   Feb 18, 2027 05:46 UTC   1 year          OK
      client-rke2-controller.crt   rke2-client-ca@1771391877   CertSign     Feb 16, 2036 05:17 UTC   10 years        OK

    
      • Manual certificate rotation: use the rke2 certificate rotate command.
    
          ```bash
          [root@k8s-node1 ~]# systemctl stop rke2-server
    
          [root@k8s-node1 ~]# rke2 certificate rotate
          INFO[0000] Server detected, rotating agent and server certificates
          INFO[0000] Rotating dynamic listener certificate
          INFO[0000] Rotating certificates for rke2-controller
          INFO[0000] Rotating certificates for api-server
          INFO[0000] Rotating certificates for admin
          INFO[0000] Rotating certificates for auth-proxy
          INFO[0000] Rotating certificates for cloud-controller
          INFO[0000] Rotating certificates for etcd
          INFO[0000] Rotating certificates for scheduler
          INFO[0000] Rotating certificates for supervisor
          INFO[0000] Rotating certificates for kube-proxy
          INFO[0000] Rotating certificates for controller-manager
          INFO[0000] Rotating certificates for kubelet
          INFO[0000] Successfully backed up certificates to /var/lib/rancher/rke2/server/tls-1771393856, please restart rke2 server or agent to rotate certificates
    
          [root@k8s-node1 ~]# systemctl start rke2-server
          [root@k8s-node1 ~]# rke2 certificate check --output table
          INFO[0000] Server detected, checking agent and server certificates
    
          FILENAME                           SUBJECT                             USAGES                  EXPIRES                  RESIDUAL TIME   STATUS
          --------                           -------                             ------                  -------                  -------------   ------
          client-auth-proxy.crt              system:auth-proxy                   ClientAuth              Feb 18, 2027 05:52 UTC   1 year          OK
          client-auth-proxy.crt              rke2-request-header-ca@1771391877   CertSign                Feb 16, 2036 05:17 UTC   10 years        OK
          client-rke2-cloud-controller.crt   rke2-cloud-controller-manager       ClientAuth              Feb 18, 2027 05:52 UTC   1 year          OK
          client-rke2-cloud-controller.crt   rke2-client-ca@1771391877           CertSign                Feb 16, 2036 05:17 UTC   10 years        OK
    
          [root@k8s-node1 ~]# diff /etc/rancher/rke2/rke2.yaml ~/.kube/config
          18,19c18,19
          <     client-certificate-data: LS0tLS1CRUdJTiBDRVJUS // (omitted)
    
          >     client-certificate-data: LS0tLS1CRUdJTiBDRVJUS // (omitted)
          ```
  • Upgrade overview

    1. Upgrade policy

      The Kubernetes Version Skew Policy applies.

    • RKE2 upgrades follow the Kubernetes version skew policy

    • Upgrades that skip a minor version are not supported

      Example

    • v1.26 → v1.28: direct upgrade is not possible

    • v1.26 → v1.27 → v1.28: sequential upgrade is required

    2. Release channels

      When using the install script or the automated upgrade feature, the version is selected according to a release channel.

      stable (default)

    • Recommended for production

    • Community-validated

    • Compatible with the latest Rancher release

      latest

    • For trying out the newest features

    • May not yet be sufficiently community-validated

    • May be incompatible with Rancher

      A specific Kubernetes minor-version channel (e.g. v1.34)

    • Provides the latest patch release of that minor series

    • Does not necessarily mean a stable release

    3. Key install-script variables

      INSTALL_RKE2_VERSION

    • The RKE2 version to install

    • Default: resolved from the stable channel

      INSTALL_RKE2_TYPE

    • Which systemd service to create

    • server (default)

    • agent

      INSTALL_RKE2_CHANNEL_URL

    • URL to fetch release-channel metadata from

    • Default:

      https://update.rke2.io/v1-release/channels

      INSTALL_RKE2_CHANNEL

    • The release channel to use

    • Default: stable

    • Other options: latest, testing

      INSTALL_RKE2_METHOD

    • Installation method

    • rpm: default on RPM-based systems

    • tar: default on all other systems
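
Putting the variables together, a hedged example of pinning an exact version instead of tracking a channel:

```bash
# install an agent at an exact release rather than a channel
curl -sfL https://get.rke2.io | \
  INSTALL_RKE2_TYPE=agent \
  INSTALL_RKE2_VERSION=v1.34.4+rke2r1 sh -
```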
  • Manual upgrade

    • Manual upgrade procedure: upgrade the control-plane nodes one at a time → then upgrade the worker nodes

    • [rke2-server] Upgrade the control-plane node

      [root@k8s-node1 ~]# kubectl get node
      NAME        STATUS   ROLES                       AGE   VERSION
      k8s-node1   Ready    control-plane,etcd,master   41m   v1.33.8+rke2r1
      k8s-node2   Ready    <none>                      13m   v1.33.8+rke2r1
      
      [root@k8s-node1 ~]# rke2 --version
      rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
      go version go1.24.12 X:boringcrypto
      
      [root@k8s-node1 ~]# curl -s https://update.rke2.io/v1-release/channels | jq .data
      
      {
        "id": "v1.34",
        "type": "channel",
        "links": {
          "self": "https://update.rke2.io/v1-release/channels/v1.34"
        },
        "name": "v1.34",
        "latest": "v1.34.4+rke2r1",
        "latestRegexp": "v1\\.34\\..*",
        "excludeRegexp": "^[^+]+-"
      }
      
      [root@k8s-node1 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.34 sh -
      [INFO]  using stable RPM repositories
      [INFO]  using 1.34 series from channel stable
      Importing GPG key 0xE257814A:
      Userid     : "Rancher (CI) <ci@rancher.com>"
      Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
      From       : https://rpm.rancher.io/public.key
      Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
      
      [root@k8s-node1 ~]# rke2 --version
      rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
      go version go1.24.12 X:boringcrypto
      
      [root@k8s-node1 ~]# kubectl get pod -n kube-system --sort-by=.metadata.creationTimestamp | tac
      helm-install-rke2-canal-db69m                0/1     Completed          0              26s
      helm-install-rke2-coredns-6bgr5              0/1     Completed          0              26s
      helm-install-rke2-metrics-server-km25q       1/1     Running            0              26s
      helm-install-rke2-runtimeclasses-qcq8j       1/1     Running            0              26s
      kube-proxy-k8s-node1                         1/1     Running            0              42s
      kube-controller-manager-k8s-node1            1/1     Running            0              44s
      kube-scheduler-k8s-node1                     1/1     Running            0              45s
      etcd-k8s-node1                               1/1     Running            0              49s
      kube-apiserver-k8s-node1                     1/1     Running            0              79s
      rke2-canal-jfzmt                             1/2     CrashLoopBackOff   8 (23s ago)    17m
      kube-proxy-k8s-node2                         1/1     Running            0              17m
      rke2-metrics-server-fdcdf575d-6tb9x          1/1     Running            0              41m
      rke2-canal-vwnxr                             1/2     CrashLoopBackOff   17 (19s ago)   42m
      rke2-coredns-rke2-coredns-559595db99-sgwf7   1/1     Running            0              42m
      
      [root@k8s-node1 ~]# kubectl get node -owide
      NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
      k8s-node1   Ready    control-plane,etcd,master   45m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
      k8s-node2   Ready    <none>                      18m   v1.33.8+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
    • [rke2-agent] Upgrade the worker node

        [root@k8s-node2 agent]# rke2 --version
        rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
        go version go1.24.12 X:boringcrypto
      
        [root@k8s-node2 agent]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent INSTALL_RKE2_CHANNEL=v1.34 sh -
        shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
        [INFO]  using stable RPM repositories
        [INFO]  using 1.34 series from channel stable
        Importing GPG key 0xE257814A:
         Userid     : "Rancher (CI) <ci@rancher.com>"
         Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
         From       : https://rpm.rancher.io/public.key
        Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
        Importing GPG key 0xE257814A:
      
        [root@k8s-node2 agent]# rke2 --version
        rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
        go version go1.24.12 X:boringcrypto
        [root@k8s-node2 agent]# dnf repolist
        repo id                                                              repo name
        appstream                                                            Rocky Linux 9 - AppStream
        baseos                                                               Rocky Linux 9 - BaseOS
        extras                                                               Rocky Linux 9 - Extras
        rancher-rke2-1.34-stable                                             Rancher RKE2 1.34 (v1.34)
        rancher-rke2-common-stable                                           Rancher RKE2 Common (v1.34)
        [root@k8s-node2 agent]# systemctl restart rke2-agent
    • Verify from the server node

        [root@k8s-node1 ~]# kubectl get node -owide
        NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
        k8s-node1   Ready    control-plane,etcd,master   47m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
        k8s-node2   Ready    <none>                      20m   v1.34.4+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
      
        [root@k8s-node1 ~]# kubectl get pod -n kube-system --sort-by=.metadata.creationTimestamp | tac
        kube-proxy-k8s-node2                         1/1     Running            0              33s
        kube-proxy-k8s-node1                         1/1     Running            0              2m38s
        helm-install-rke2-canal-db69m                0/1     Completed          0              3m45s
        helm-install-rke2-coredns-6bgr5              0/1     Completed          0              3m45s
        helm-install-rke2-metrics-server-km25q       1/1     Running            2 (97s ago)    3m45s
        helm-install-rke2-runtimeclasses-qcq8j       1/1     Running            2 (97s ago)    3m45s
        kube-controller-manager-k8s-node1            1/1     Running            0              4m3s
        kube-scheduler-k8s-node1                     1/1     Running            0              4m4s
        etcd-k8s-node1                               1/1     Running            0              4m8s
        kube-apiserver-k8s-node1                     1/1     Running            0              4m38s
        rke2-canal-jfzmt                             1/2     Error              11 (31s ago)   20m
        rke2-metrics-server-fdcdf575d-6tb9x          1/1     Running            0              44m
        rke2-canal-vwnxr                             1/2     CrashLoopBackOff   22 (76s ago)   45m
        rke2-coredns-rke2-coredns-559595db99-sgwf7   1/1     Running            0              45m
        NAME                                         READY   STATUS             RESTARTS       AGE
  • Automated upgrade overview

    • Rancher's System Upgrade Controller can manage RKE2 cluster upgrades
    • It operates in a Kubernetes-native way
    • Upgrades are defined declaratively through the Plan custom resource (CRD)
    • A Plan specifies:
      • the nodes to upgrade
      • the RKE2 version to apply
      • upgrade policy and requirements
    • A label selector designates the target nodes
      • selectable by role (control-plane, worker)
      • selectable by environment (dev, prod)
      • selectable by node group
    • The controller continuously watches Plan resources
    • Nodes matching the conditions are selected automatically
    • A Kubernetes Job is created on each selected node to perform the upgrade
    • Upgrade scheduling and execution are managed through these Jobs
    • When an upgrade Job completes successfully, a completion label is automatically added to the node
    • This prevents duplicate upgrades and makes status tracking possible
  • Performing an automated upgrade (v1.34 → v1.35)

    • Install the system-upgrade-controller

      [root@k8s-node1 ~]# kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
      customresourcedefinition.apiextensions.k8s.io/plans.upgrade.cattle.io created
      namespace/system-upgrade created
      serviceaccount/system-upgrade created
      role.rbac.authorization.k8s.io/system-upgrade-controller created
      clusterrole.rbac.authorization.k8s.io/system-upgrade-controller created
      clusterrole.rbac.authorization.k8s.io/system-upgrade-controller-drainer created
      rolebinding.rbac.authorization.k8s.io/system-upgrade created
      clusterrolebinding.rbac.authorization.k8s.io/system-upgrade created
      clusterrolebinding.rbac.authorization.k8s.io/system-upgrade-drainer created
      configmap/default-controller-env created
      deployment.apps/system-upgrade-controller created
      
      [root@k8s-node1 ~]# kubectl get deploy,pod,cm -n system-upgrade
      NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/system-upgrade-controller   1/1     1            1           21s
      
      NAME                                             READY   STATUS    RESTARTS   AGE
      pod/system-upgrade-controller-6f9f9b8cf4-lwxwr   1/1     Running   0          21s
      
      NAME                               DATA   AGE
      configmap/default-controller-env   10     21s
      configmap/kube-root-ca.crt         1      22s
      
      [root@k8s-node1 ~]# kubectl get crd | grep upgrade
      plans.upgrade.cattle.io                                 2026-02-18T06:12:32Z
      [root@k8s-node1 ~]# kubectl logs -n system-upgrade -l app.kubernetes.io/name=system-upgrade-controller -f
      W0218 06:12:38.134112       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
      time="2026-02-18T06:12:38Z" level=info msg="Updating embedded CRD plans.upgrade.cattle.io"
      I0218 06:12:39.607690       1 leaderelection.go:257] attempting to acquire leader lease system-upgrade/system-upgrade-controller...
      I0218 06:12:39.607884       1 event.go:389] "Event occurred" object="k8s-node1" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="Starting" message="system-upgrade-controller v0.18.0 (8271c14) starting leader election for system-upgrade/system-upgrade-controller"
      I0218 06:12:39.625094       1 leaderelection.go:271] successfully acquired lease system-upgrade/system-upgrade-controller
      time="2026-02-18T06:12:39Z" level=info msg="Starting /v1, Kind=Node controller"
      time="2026-02-18T06:12:39Z" level=info msg="Starting /v1, Kind=Secret controller"
      time="2026-02-18T06:12:39Z" level=info msg="Starting batch/v1, Kind=Job controller"
      time="2026-02-18T06:12:39Z" level=info msg="Starting upgrade.cattle.io/v1, Kind=Plan controller"
      I0218 06:12:39.743665       1 event.go:389] "Event occurred" object="k8s-node1" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="Started" message="system-upgrade-controller v0.18.0 (8271c14) running as system-upgrade/system-upgrade-controller"
    • Write the Plan, apply it, and verify

        # When writing the Plan, "latest" did not come up properly, so the version
        # was pinned explicitly (fragment; a fuller sketch follows below):
          serviceAccountName: system-upgrade
          upgrade:
            image: rancher/rke2-upgrade
          version: v1.35.1+rke2r1
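
A fuller sketch of the server and agent Plans, following the shape documented for RKE2 automated upgrades (label key, prepare step and field names per those docs; adjust before use):

```bash
kubectl apply -f - <<EOF
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1              # one control-plane node at a time
  cordon: true
  nodeSelector:
    matchExpressions:
    - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.35.1+rke2r1
---
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
    - {key: node-role.kubernetes.io/control-plane, operator: NotIn, values: ["true"]}
  prepare:                    # wait for the server plan to finish first
    args: ["prepare", "server-plan"]
    image: rancher/rke2-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/rke2-upgrade
  version: v1.35.1+rke2r1
EOF
```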
      
      
      
    [root@k8s-node1 ~]# kubectl get node -owide
    NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
    k8s-node1   Ready    control-plane,etcd,master   54m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
    k8s-node2   Ready    <none>                      27m   v1.34.4+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1

    [root@k8s-node1 ~]# kubectl get node -owide
    NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
    k8s-node1   Ready    control-plane,etcd,master   70m   v1.35.1+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
    k8s-node2   Ready    <none>                      43m   v1.35.1+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1

Cluster API Hands-On

  • Install the management cluster on kind

      (⎈|kind-myk8s:default) zosys@4:~$ mkdir capi-docker
      (⎈|kind-myk8s:default) zosys@4:~$ cd capi-docker/
      (⎈|kind-myk8s:default) zosys@4:~/capi-docker$
    
      (⎈|N/A:N/A) zosys@4:~/capi-docker$ kind create cluster --name myk8s --image kindest/node:v1.35.0 --config - <<EOF
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
        extraMounts:
        - hostPath: /Users/gasida/.orbstack/run/docker.sock  # most setups use /var/run/docker.sock here
          containerPath: /var/run/docker.sock
        extraPortMappings:
        - containerPort: 30000     # sample
          hostPort: 30000
        - containerPort: 30001     # kube-ops-view
          hostPort: 30001
      EOF
      Creating cluster "myk8s" ...
    
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.12.2/clusterctl-linux-amd64 -o clusterctl
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
        0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
      100 68.2M  100 68.2M    0     0  7307k      0  0:00:09  0:00:09 --:--:-- 10.5M
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
    
      [sudo] password for zosys:
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl version -o json | jq
      {
        "clusterctl": {
          "major": "1",
          "minor": "12",
          "gitVersion": "v1.12.2",
          "gitCommit": "9ee80d1ff529c48ae0b8e022ec01d70ac496e8e5",
          "gitTreeState": "clean",
          "buildDate": "2026-01-20T16:48:19Z",
          "goVersion": "go1.24.12",
          "compiler": "gc",
          "platform": "linux/amd64"
        }
      }
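    
      # (step not captured in this transcript) the x-k8s CRDs checked below come
      # from initializing the management cluster with the Docker provider, roughly:
      #   clusterctl init --infrastructure docker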
    
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get crd | grep x-k8s
      clusterclasses.cluster.x-k8s.io                              2026-02-18T06:49:12Z
      clusterresourcesetbindings.addons.cluster.x-k8s.io           2026-02-18T06:49:12Z
      clusterresourcesets.addons.cluster.x-k8s.io                  2026-02-18T06:49:12Z
      clusters.cluster.x-k8s.io                                    2026-02-18T06:49:12Z
      devclusters.infrastructure.cluster.x-k8s.io                  2026-02-18T06:49:14Z
    
  • Inspect the management cluster

      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl describe -n capi-system deployment.apps/capi-controller-manager | grep feature-gates
            --feature-gates=MachinePool=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false,ReconcilerRateLimiting=false,InPlaceUpdates=false,MachineTaintPropagation=false
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get providers.clusterctl.cluster.x-k8s.io -A
      NAMESPACE                           NAME                    AGE     TYPE                     PROVIDER      VERSION
      capd-system                         infrastructure-docker   2m17s   InfrastructureProvider   docker        v1.12.3
      capi-kubeadm-bootstrap-system       bootstrap-kubeadm       2m18s   BootstrapProvider        kubeadm       v1.12.3
      capi-kubeadm-control-plane-system   control-plane-kubeadm   2m18s   ControlPlaneProvider     kubeadm       v1.12.3
      capi-system                         cluster-api             2m19s   CoreProvider             cluster-api   v1.12.3
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get providers -n capi-system cluster-api -o yaml
      apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
      kind: Provider
      metadata:
        creationTimestamp: "2026-02-18T06:49:13Z"
        generation: 1
        labels:
          cluster.x-k8s.io/provider: cluster-api
          clusterctl.cluster.x-k8s.io: ""
          clusterctl.cluster.x-k8s.io/core: inventory
        name: cluster-api
        namespace: capi-system
        resourceVersion: "1037"
        uid: 1a5dacb5-8700-46e3-a5d7-344a17bcdc9a
      providerName: cluster-api
      type: CoreProvider
      version: v1.12.3
    
  • Check cert-manager

      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get crd | grep cert
      certificaterequests.cert-manager.io                          2026-02-18T06:48:49Z
      certificates.cert-manager.io                                 2026-02-18T06:48:49Z
      challenges.acme.cert-manager.io                              2026-02-18T06:48:49Z
      clusterissuers.cert-manager.io                               2026-02-18T06:48:49Z
      issuers.cert-manager.io                                      2026-02-18T06:48:49Z
      orders.acme.cert-manager.io                                  2026-02-18T06:48:49Z
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get deploy,pod,svc,ep,cm,secret,sa -n cert-manager
      Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
      NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
      deployment.apps/cert-manager              1/1     1            1           3m36s
      deployment.apps/cert-manager-cainjector   1/1     1            1           3m36s
      deployment.apps/cert-manager-webhook      1/1     1            1           3m36s
    
      NAME                                           READY   STATUS    RESTARTS   AGE
      pod/cert-manager-598d877b78-d2m7f              1/1     Running   0          3m36s
      pod/cert-manager-cainjector-6b5777d564-pxjt2   1/1     Running   0          3m36s
      pod/cert-manager-webhook-5d9fc6b4ff-q2tzg      1/1     Running   0          3m36s
    
      NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)            AGE
      service/cert-manager              ClusterIP   10.96.105.201   <none>        9402/TCP           3m37s
      service/cert-manager-cainjector   ClusterIP   10.96.22.45     <none>        9402/TCP           3m37s
      service/cert-manager-webhook      ClusterIP   10.96.245.210   <none>        443/TCP,9402/TCP   3m37s
    
      NAME                                ENDPOINTS                          AGE
      endpoints/cert-manager              10.244.0.7:9402                    3m37s
      endpoints/cert-manager-cainjector   10.244.0.6:9402                    3m37s
      endpoints/cert-manager-webhook      10.244.0.8:10250,10.244.0.8:9402   3m37s
    
      NAME                         DATA   AGE
      configmap/kube-root-ca.crt   1      3m38s
    
      NAME                             TYPE     DATA   AGE
      secret/cert-manager-webhook-ca   Opaque   3      3m22s
    
      NAME                                     AGE
      serviceaccount/cert-manager              3m38s
      serviceaccount/cert-manager-cainjector   3m38s
      serviceaccount/cert-manager-webhook      3m38s
      serviceaccount/default                   3m38s
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get issuers.cert-manager.io -A
      NAMESPACE                           NAME                                           READY   AGE
      capd-system                         capd-selfsigned-issuer                         True    3m17s
      capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-selfsigned-issuer       True    3m18s
      capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-selfsigned-issuer   True    3m17s
      capi-system                         capi-selfsigned-issuer                         True    3m19s
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get certificaterequests.cert-manager.io -A -owide
      NAMESPACE                           NAME                                        APPROVED   DENIED   READY   ISSUER                                         REQUESTER                                         STATUS                                         AGE
      capd-system                         capd-serving-cert-1                         True                True    capd-selfsigned-issuer                         system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m19s
      capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-serving-cert-1       True                True    capi-kubeadm-bootstrap-selfsigned-issuer       system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m21s
      capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-serving-cert-1   True                True    capi-kubeadm-control-plane-selfsigned-issuer   system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m20s
      capi-system                         capi-serving-cert-1                         True                True    capi-selfsigned-issuer                         system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m22s
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get certificates.cert-manager.io -A -owide
      NAMESPACE                           NAME                                      READY   SECRET                                            ISSUER                                         STATUS                                          AGE
      capd-system                         capd-serving-cert                         True    capd-webhook-service-cert                         capd-selfsigned-issuer                         Certificate is up to date and has not expired   3m24s
      capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-serving-cert       True    capi-kubeadm-bootstrap-webhook-service-cert       capi-kubeadm-bootstrap-selfsigned-issuer       Certificate is up to date and has not expired   3m25s
      capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-serving-cert   True    capi-kubeadm-control-plane-webhook-service-cert   capi-kubeadm-control-plane-selfsigned-issuer   Certificate is up to date and has not expired   3m24s
      capi-system                         capi-serving-cert                         True    capi-webhook-service-cert                         capi-selfsigned-issuer                         Certificate is up to date and has not expired   3m26s
    
  • Create a workload cluster

      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export SERVICE_CIDR=["10.20.0.0/16"]
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export POD_CIDR=["10.10.0.0/16"]
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export SERVICE_DOMAIN="myk8s-1.local"
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export POD_SECURITY_STANDARD_ENABLED="false"
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl generate cluster capi-quickstart --flavor development \
        --kubernetes-version v1.34.3 \
        --control-plane-machine-count=3 \
        --worker-machine-count=3 \
        > capi-quickstart.yaml
    
      New clusterctl version available: v1.12.2 -> v1.12.3
      sigs.k8s.io/cluster-api
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl apply -f capi-quickstart.yaml
      clusterclass.cluster.x-k8s.io/quick-start created
      dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
      kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
      dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
      dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
      dockermachinepooltemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinepooltemplate created
      kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
      cluster.cluster.x-k8s.io/capi-quickstart created
    
      ## Verify creation, fetch the kubeconfig credentials, install the CNI plugin, then check
    
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl --kubeconfig=capi-quickstart.kubeconfig apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
    
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ docker ps
      CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                                             NAMES
      01fa53373460   kindest/node:v1.35.0   "/usr/local/bin/entr…"   21 minutes ago   Up 21 minutes   0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:38469->6443/tcp   myk8s-control-plane
      098bc8adc657   kindest/node:v1.35.0   "/usr/local/bin/entr…"   3 minutes ago    Up 3 minutes                                                                      capi-quickstart-md-0-67fkn-9ajzs-lcjv4
      4f137h6yy154   kindest/node:v1.35.0   "/usr/local/bin/entr…"  
    
      (⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl describe cluster capi-quickstart
      NAME                                                           REPLICAS AVAILABLE READY UP TO DATE STATUS REASON        SINCE MESSAGE                                                                                         
      Cluster/capi-quickstart                                        6/6      6         6     0          True Available
  • Check the workload cluster's load balancer

      (⎈|kind-myk8s:N/A) zosys@4:~$ docker ps
      CONTAINER ID   IMAGE                                COMMAND                  CREATED             STATUS             PORTS                                                             NAMES
      a8f3c1d9b2e7   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:32775->6443/tcp                                         capi-quickstart-j9fdm-6zg8v
      b7e2d4a1c9f3   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:32774->6443/tcp                                         capi-quickstart-j9fdm-27w2s
      c3d9e7f1a2b6   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     capi-quickstart-md-0-p7lv8-t7r9t-nhfpb
      d4a1b8c7e2f9   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     capi-quickstart-md-0-p7lv8-t7r9t-rmcls
      e9f2a7b4c1d8   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     capi-quickstart-md-0-p7lv8-t7r9t-t5ds2
      f1a2b3c4d5e6   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:32773->6443/tcp                                         capi-quickstart-j9fdm-ggm9z
      9c8b7a6d5e4f   kindest/haproxy:v20230606-42a2262b   "haproxy -W -db -f /…"   About an hour ago   Up About an hour   0.0.0.0:32770->6443/tcp, 0.0.0.0:32771->8404/tcp                  capi-quickstart-lb
      7e6d5c4b3a2f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   2 hours ago         Up 2 hours         0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:54601->6443/tcp   myk8s-control-plane
    
      (⎈|kind-myk8s:N/A) zosys@4:~$ docker inspect capi-quickstart-lb | jq
      ...
      "Entrypoint": [
        "haproxy",
        "-W",
        "-db",
        "-f",
        "/usr/local/etc/haproxy/haproxy.cfg"
    
      (⎈|kind-myk8s:N/A) zosys@4:~$ curl -sk https://127.0.0.1:32770/version | jq
      {
        "major": "1",
        "minor": "34"
      }
    
      (⎈|kind-myk8s:N/A) zosys@4:~$ cat haproxy.cfg
      global
        log /dev/log local0
        log /dev/log local1 notice
        daemon
        maxconn 100000
    
      resolvers docker
        nameserver dns 127.0.0.11:53
    
      defaults
        log global
        mode tcp
        option dontlognull
        timeout connect 5000
        timeout client 50000
        timeout server 50000
        default-server init-addr none
    
      frontend stats
        mode http
        bind *:8404
        stats enable
        stats uri /stats
        stats refresh 1s
        stats admin if TRUE
    
      frontend control-plane
        bind *:6443
        default_backend kube-apiservers
    
      #############################################
    


Kubespray Offline Installation

Key components required to keep services running correctly in an air-gapped (closed-network) environment — a reachability smoke test for these services is sketched after the diagram below.

  • NTP server: Kubernetes relies on synchronized time for TLS certificates, etcd, logs, and token expiry, so all nodes must keep identical clocks
  • DNS server: node-to-node communication and access to internal services are often FQDN-based, so stable internal name resolution is required
  • Network Gateway (IGW/NATGW): when the network is not fully closed, a controlled network boundary is needed for reaching external resources or a DMZ segment
  • Local (mirror) YUM/DNF repository: lets the OS packages Kubespray needs (container runtime, iptables, socat, etc.) be installed without internet access
  • Private container image registry: Kubernetes components (kube-apiserver, etcd, coredns, CNI, etc.) and application images must be pullable from inside the closed network
  • Helm artifact repository: Helm-based applications deployed after the cluster is up (ingress, monitoring, logging, etc.) need an internal chart repository
  • Private PyPI mirror: Python dependencies for Kubespray (Ansible-based) and other Python applications must be installable offline
  • Private Go module proxy: Go-based applications and Kubernetes extension development need external Go module dependencies served internally
graph TB
    subgraph InternalServices["Services required inside the internal network"]
        NTP["⏱️ NTP Server"]
        DNS["🌐 DNS Server"]
        GW["🛜 Network Gateway<br/>(IGW, NATGW)"]
        YUM["📦 Local (Mirror) YUM/DNF Repository"]
        REG["🐳 Private Container (Image) Registry"]
        HELM["🧭 Helm Artifact Repository"]
        PYPI["🐍 Private PyPI Mirror"]
        GO["🐹 Private Go Module Proxy"]
    end
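
Before relying on these services, it helps to confirm each one actually answers from a node. A minimal smoke test, assuming the admin node at 192.168.10.10 provides DNS, NTP, the yum mirror, and the registry on the ports used later in this post (a sketch, not part of the lab scripts):

    # Smoke test from a node - IP and ports follow this lab's layout (sketch)
    ADMIN=192.168.10.10
    dig +short google.com @${ADMIN}                          # DNS server answers
    chronyc sources                                          # this node syncs to admin (NTP)
    curl -fsS http://${ADMIN}/rpms/local/ >/dev/null && echo "yum mirror ok"
    curl -fsS http://${ADMIN}:35000/v2/_catalog              # private registry API answers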

Building the Air-Gapped Lab Environment

NAT GATEWAY

  • Check the current configuration

    Interface mapping in this lab: enp0s8 ⇒ real interface enp0s3, and enp0s9 ⇒ real interface enp0s8.

      root@admin:~# cat /etc/NetworkManager/system-connections/enp0s3.nmconnection
      [connection]
      id=enp0s3
      uuid=74d64498-c95a-435b-9340-81b2e4a7d1f2
      type=ethernet
      interface-name=enp0s3
    
      [ethernet]
    
      [ipv4]
      method=auto
    
      [ipv6]
      addr-gen-mode=eui64
      method=ignore
    
      [proxy]
      root@admin:~# cat /etc/NetworkManager/system-connections/enp0s8.nmconnection
      [connection]
      id=enp0s8
      uuid=60dff4a4-af47-4907-84b1-dc4b1e9a129a
      type=ethernet
      autoconnect-priority=-100
      autoconnect-retries=1
      interface-name=enp0s8
    
      [ethernet]
      mac-address=08:00:27:6d:47:bb
    
      [ipv4]
      method=manual
      addresses=192.168.10.10/24
      gateway=192.168.10.1
    
      [user]
      org.freedesktop.NetworkManager.origin=vagrant
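
    Note that the post never shows the admin VM actually being configured as a NAT gateway (the nodes stay offline below). If you did want the admin node to forward traffic for 192.168.10.0/24, a minimal sketch would be the following; enp0s3 as the outbound interface is an assumption:

      # Sketch only - not part of the lab above. Enable forwarding and
      # masquerade the internal subnet out of the external interface.
      sysctl -w net.ipv4.ip_forward=1
      iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o enp0s3 -j MASQUERADE
      iptables -A FORWARD -s 192.168.10.0/24 -o enp0s3 -j ACCEPT
      iptables -A FORWARD -d 192.168.10.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT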
  • Base network settings: bring the enp0s3 connection down and set the default route out of enp0s8

      root@k8s-node1:~# nmcli connection down enp0s3
      Connection 'enp0s3' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/2)
      root@k8s-node1:~# nmcli connection modify enp0s3 connection.autoconnect no
      root@k8s-node1:~# nmcli connection modify enp0s8 +ipv4.routes "0.0.0.0/0 192.168.10.10 200"
      root@k8s-node1:~# nmcli connection up enp0s8
      Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
      root@k8s-node1:~# ip route
      default via 192.168.10.1 dev enp0s8 proto static metric 100
      default via 192.168.10.10 dev enp0s8 proto static metric 200
      192.168.10.0/24 dev enp0s8 proto kernel scope link src 192.168.10.11 metric 100
    
      root@k8s-node1:~# ping -w 1 -W 1 8.8.8.8
      PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
      ^C
      --- 8.8.8.8 ping statistics ---
      1 packets transmitted, 0 received, 100% packet loss, time 0ms
    
      root@k8s-node1:~# curl www.google.com
      curl: (6) Could not resolve host: www.google.com
      root@k8s-node1:~# cat /etc/resolv.conf
      # Generated by NetworkManager
      root@k8s-node1:~# cat << EOF > /etc/resolv.conf
      nameserver 168.126.63.1
      nameserver 8.8.8.8
      EOF
      root@k8s-node1:~# curl www.google.com
    

NTP Configuration

  • [admin] NTP server configuration

      root@admin:~# systemctl status chronyd.service --no-pager
      ● chronyd.service - NTP client/server
           Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
           Active: active (running) since Fri 2026-02-13 21:33:01 KST; 14min ago
       Invocation: d599d3cbe51f4af0b3582d164c7e7afd
             Docs: man:chronyd(8)
                   man:chrony.conf(5)
         Main PID: 753 (chronyd)
            Tasks: 1 (limit: 12339)
           Memory: 4.8M (peak: 5.5M)
              CPU: 110ms
           CGroup: /system.slice/chronyd.service
                   └─753 /usr/sbin/chronyd -F 2
    
      Feb 13 21:33:23 admin chronyd[753]: Source 2001:678:8::123 offline
      Feb 13 21:33:23 admin chronyd[753]: Source 240b:400d:3:3300:aeda:71da:9779:d8f1 offline
      Feb 13 21:33:23 admin chronyd[753]: Source 240b:400d:3:3300:aeda:71da:9779:d4f1 offline
      Feb 13 21:33:23 admin chronyd[753]: Source 2401:c080:1c00:24a1:5400:5ff:fe04:720 offline
      Feb 13 21:33:26 admin chronyd[753]: Source 2001:678:8::123 online
      Feb 13 21:33:26 admin chronyd[753]: Source 240b:400d:3:3300:aeda:71da:9779:d8f1 online
      Feb 13 21:33:26 admin chronyd[753]: Source 240b:400d:3:3300:aeda:71da:9779:d4f1 online
      Feb 13 21:33:26 admin chronyd[753]: Source 2401:c080:1c00:24a1:5400:5ff:fe04:720 online
      Feb 13 21:34:17 admin chronyd[753]: Selected source 175.195.167.194 (2.rocky.pool.ntp.org)
      Feb 13 21:34:54 admin chronyd[753]: Selected source 221.151.118.78 (2.rocky.pool.ntp.org)
      root@admin:~# grep "^[^#]" /etc/chrony.conf
      pool 2.rocky.pool.ntp.org iburst
      sourcedir /run/chrony-dhcp
      driftfile /var/lib/chrony/drift
      makestep 1.0 3
      rtcsync
      ntsdumpdir /var/lib/chrony
      logdir /var/log/chrony
      root@admin:~# chronyc sources -v
    
        .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
       / .- Source state '*' = current best, '+' = combined, '-' = not combined,
      | /             'x' = may be in error, '~' = too variable, '?' = unusable.
      ||                                                 .- xxxx [ yyyy ] +/- zzzz
      ||      Reachability register (octal) -.           |  xxxx = adjusted offset,
      ||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
      ||                                \     |          |  zzzz = estimated error.
      ||                                 |    |           \
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^- 175.210.18.47                 2   6   377     7   -562us[ -562us] +/-   17ms
      ^- mail.innotab.com              3   6   377     8  -1275us[-1275us] +/-   38ms
      ^* time.ravnus.com               2   6   377    33    -73us[+1359ns] +/- 2687us
      ^+ ec2-3-39-176-65.ap-north>     2   6   377     7   +157us[ +157us] +/- 5593us
      root@admin:~# dig +short 2.rocky.pool.ntp.org
      175.195.167.194
      158.247.202.103
      221.151.118.78
      root@admin:~# cp /etc/chrony.conf /etc/chrony.bak
      root@admin:~# cat << EOF > /etc/chrony.conf
      # Public upstream NTP servers (Korean pool)
      server pool.ntp.org iburst
      server kr.pool.ntp.org iburst
    
      # Allow the internal network (192.168.10.0/24) to sync time from this server
      allow 192.168.10.0/24
    
      # Keep serving time from the local clock even if upstream is unreachable (optional)
      local stratum 10
    
      # Logging
      logdir /var/log/chrony
      EOF
      root@admin:~# systemctl restart chronyd.service
      root@admin:~# systemctl status chronyd.service --no-pager
      ● chronyd.service - NTP client/server
           Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
           Active: active (running) since Fri 2026-02-13 21:47:46 KST; 3s ago
       Invocation: ed143eef1e0145ba87bd062f11301cad
             Docs: man:chronyd(8)
                   man:chrony.conf(5)
          Process: 5758 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
         Main PID: 5760 (chronyd)
            Tasks: 1 (limit: 12339)
           Memory: 1.1M (peak: 2.9M)
              CPU: 34ms
           CGroup: /system.slice/chronyd.service
                   └─5760 /usr/sbin/chronyd -F 2
    
      Feb 13 21:47:46 admin systemd[1]: Starting chronyd.service - NTP client/server...
      Feb 13 21:47:46 admin chronyd[5760]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCF… +DEBUG)
      Feb 13 21:47:46 admin chronyd[5760]: Initial frequency -541.138 ppm
      Feb 13 21:47:46 admin chronyd[5760]: Loaded seccomp filter (level 2)
      Feb 13 21:47:46 admin systemd[1]: Started chronyd.service - NTP client/server.
      Hint: Some lines were ellipsized, use -l to show in full.
      root@admin:~# timedatectl status
                     Local time: Fri 2026-02-13 21:47:52 KST
                 Universal time: Fri 2026-02-13 12:47:52 UTC
                       RTC time: Fri 2026-02-13 12:47:51
                      Time zone: Asia/Seoul (KST, +0900)
      System clock synchronized: yes
                    NTP service: active
                RTC in local TZ: no
      root@admin:~# chronyc sources -v
    
        .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
       / .- Source state '*' = current best, '+' = combined, '-' = not combined,
      | /             'x' = may be in error, '~' = too variable, '?' = unusable.
      ||                                                 .- xxxx [ yyyy ] +/- zzzz
      ||      Reachability register (octal) -.           |  xxxx = adjusted offset,
      ||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
      ||                                \     |          |  zzzz = estimated error.
      ||                                 |    |           \
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^- 121.134.215.104               2   6    17     3    -94us[  -94us] +/- 2838us
      ^* 211.108.117.211               2   6    17     4    -11us[+1053us] +/- 2114us
  • [k8s-node] NTP client configuration

      root@k8s-node1:~# timedatectl status
                     Local time: Fri 2026-02-13 21:50:15 KST
                 Universal time: Fri 2026-02-13 12:50:15 UTC
                       RTC time: Fri 2026-02-13 12:50:15
                      Time zone: Asia/Seoul (KST, +0900)
      System clock synchronized: yes
                    NTP service: active
                RTC in local TZ: no
      root@k8s-node1:~# chronyc sources -v
    
        .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
       / .- Source state '*' = current best, '+' = combined, '-' = not combined,
      | /             'x' = may be in error, '~' = too variable, '?' = unusable.
      ||                                                 .- xxxx [ yyyy ] +/- zzzz
      ||      Reachability register (octal) -.           |  xxxx = adjusted offset,
      ||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
      ||                                \     |          |  zzzz = estimated error.
      ||                                 |    |           \
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^- 121.174.142.82                3   7   360   444   +988us[ +988us] +/-   43ms
      ^- 175.195.167.194               3   7    34   509   -309us[ -360us] +/-   35ms
      ^- 121.174.142.81                3   7   340   484   -132us[ -132us] +/-   43ms
      ^* 211.108.117.211               2   6   300   504   -264us[ -316us] +/- 2203us
      root@k8s-node1:~#
      root@k8s-node1:~#
      root@k8s-node1:~# cp /etc/chrony.conf /etc/chrony.bak
      root@k8s-node1:~# cat << EOF > /etc/chrony.conf
      server 192.168.10.10 iburst
      logdir /var/log/chrony
      EOF
      root@k8s-node1:~# systemctl restart chronyd.service
      root@k8s-node1:~# systemctl status chronyd.service --no-pager
      ● chronyd.service - NTP client/server
           Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; preset: enabled)
           Active: active (running) since Fri 2026-02-13 21:52:50 KST; 1s ago
       Invocation: 41174c2716074722b33c8f5c506d64c5
             Docs: man:chronyd(8)
                   man:chrony.conf(5)
          Process: 5662 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
         Main PID: 5665 (chronyd)
            Tasks: 1 (limit: 12339)
           Memory: 892K (peak: 2.9M)
              CPU: 30ms
           CGroup: /system.slice/chronyd.service
                   └─5665 /usr/sbin/chronyd -F 2
    
      Feb 13 21:52:50 k8s-node1 systemd[1]: Starting chronyd.service - NTP client/server...
      Feb 13 21:52:50 k8s-node1 chronyd[5665]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP … +DEBUG)
      Feb 13 21:52:50 k8s-node1 chronyd[5665]: Initial frequency -540.965 ppm
      Feb 13 21:52:50 k8s-node1 chronyd[5665]: Loaded seccomp filter (level 2)
      Feb 13 21:52:50 k8s-node1 systemd[1]: Started chronyd.service - NTP client/server.
      Hint: Some lines were ellipsized, use -l to show in full.
      root@k8s-node1:~# timedatectl status
                     Local time: Fri 2026-02-13 21:52:55 KST
                 Universal time: Fri 2026-02-13 12:52:55 UTC
                       RTC time: Fri 2026-02-13 12:52:54
                      Time zone: Asia/Seoul (KST, +0900)
      System clock synchronized: yes
                    NTP service: active
                RTC in local TZ: no
      root@k8s-node1:~# chronyc sources -v
    
        .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
       / .- Source state '*' = current best, '+' = combined, '-' = not combined,
      | /             'x' = may be in error, '~' = too variable, '?' = unusable.
      ||                                                 .- xxxx [ yyyy ] +/- zzzz
      ||      Reachability register (octal) -.           |  xxxx = adjusted offset,
      ||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
      ||                                \     |          |  zzzz = estimated error.
      ||                                 |    |           \
      MS Name/IP address         Stratum Poll Reach LastRx Last sample
      ===============================================================================
      ^* admin                         3   6    17     1    -64us[-2630ns] +/- 5878us
    
      ## Check client connections on the admin node
      root@admin:~# chronyc clients
      Hostname                      NTP   Drop Int IntL Last     Cmd   Drop Int  Last
      ===============================================================================
      k8s-node2                       6      0   5   -     4       0      0   -     -
      k8s-node1                       4      0   1   -     4       0      0   -     -

DNS Configuration

  • [admin] DNS server (bind) configuration

      root@admin:~# dnf install -y bind bind-utils
      Last metadata expiration check: 0:22:21 ago on Fri 13 Feb 2026 09:33:32 PM KST.
      Package bind-utils-32:9.18.33-4.el10_0.x86_64 is already installed.
      Dependencies resolved.
      ========================================================================================================================
       Package                           Architecture     Version                                   Repository           Size
      ========================================================================================================================
      Installing:
       bind                              x86_64           32:9.18.33-10.el10_1.2                    appstream           333 k
      Upgrading:
       bind-libs                         x86_64           32:9.18.33-10.el10_1.2                    appstream           1.3 M
       bind-license                      noarch           32:9.18.33-10.el10_1.2                    appstream            13 k
       bind-utils                        x86_64           32:9.18.33-10.el10_1.2                    appstream           225 k
       crypto-policies                   noarch           20250905-2.gitc7eb7b2.el10_1.1            baseos               94 k
       crypto-policies-scripts           noarch           20250905-2.gitc7eb7b2.el10_1.1            baseos              134 k
       openssl                           x86_64           1:3.5.1-7.el10_1                          baseos              1.2 M
       openssl-libs                      x86_64           1:3.5.1-7.el10_1                          baseos              2.2 M
      Installing dependencies:
       openssl-fips-provider             x86_64           1:3.5.1-7.el10_1                          baseos              812 k
      Installing weak dependencies:
       bind-dnssec-utils                 x86_64           32:9.18.33-10.el10_1.2                    appstream           151 k
    
      Transaction Summary
      ========================================================================================================================
      Install  3 Packages
    
      root@admin:~# named-checkconf /etc/named.conf
      root@admin:~# systemctl enable --now named
      Created symlink '/etc/systemd/system/multi-user.target.wants/named.service' → '/usr/lib/systemd/system/named.service'.
      root@admin:~#
      root@admin:~# echo "nameserver 192.168.10.10" > /etc/resolv.conf
      root@admin:~# dig +short google.com @192.168.10.10
      172.217.213.139
      172.217.213.113
      172.217.213.138
      172.217.213.100
      172.217.213.102
      172.217.213.101
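
    The edits to /etc/named.conf are not shown above, but for the nodes to query 192.168.10.10 the default (localhost-only) listener has to be opened. A minimal sketch of the options block this setup implies, with the upstream forwarders taken from the earlier resolv.conf as an assumption:

      // Sketch, not the verbatim lab config: open the listener to the
      // internal subnet and forward all queries upstream
      options {
          listen-on port 53 { 127.0.0.1; 192.168.10.10; };
          allow-query       { localhost; 192.168.10.0/24; };
          recursion yes;
          forwarders        { 168.126.63.1; 8.8.8.8; };
          forward only;
      };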
  • [k8s-node] DNS client configuration: stop NetworkManager from managing /etc/resolv.conf

      root@k8s-node1:~# cat /etc/NetworkManager/conf.d/99-dns-none.conf
      cat: /etc/NetworkManager/conf.d/99-dns-none.conf: No such file or directory
      root@k8s-node1:~# cat << EOF > /etc/NetworkManager/conf.d/99-dns-none.conf
      [main]
      dns=none
      EOF
      root@k8s-node1:~# systemctl restart NetworkManager
      root@k8s-node1:~# echo "nameserver 192.168.10.10" > /etc/resolv.conf
      root@k8s-node1:~# dig +short google.com @192.168.10.10
      172.217.213.138
      172.217.213.113
      172.217.213.101
      172.217.213.102
      172.217.213.100
      172.217.213.139

Kubespray Offline Installation Overview

Kubespray: an Ansible-based Kubernetes deployment tool that supports offline (air-gapped) installs through pre-download, mirroring, and image-management conveniences.

1️⃣ Offline deployment preparation features

  • Automatically generates the lists of binary files and container images a Kubernetes install needs
  • Pre-downloads the required container images
  • Uploads (registers) the downloaded images into an internal private registry
  • Downloads the required files (binaries, etc.) and serves them through an internal HTTP file server run as an Nginx container
  • In short, replaces the downloads that would normally come from the internet with internal equivalents

2️⃣ Kubespray offline workflow

  1. Generate the lists of files and images to download
  2. Pre-download every binary and image on the internet-connected side
  3. Stand up the internal registry and web server
  4. Internal nodes download only from the internal mirror/registry, never from outside
  5. Deploy the Kubernetes cluster (the script mapping for these steps is sketched below)
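
In this post, those steps map onto the kubespray-offline scripts roughly as follows (each one appears in the walkthrough below):

    # Rough execution order of the kubespray-offline scripts used below
    ./download-all.sh        # steps 1-2: generate the lists and download everything
    cd outputs
    ./setup-container.sh     # install runc/nerdctl/containerd on the admin node
    ./start-nginx.sh         # step 3: HTTP file server (files, images, pypi, rpms)
    ./setup-offline.sh       # point dnf and pip at the local mirrors
    ./setup-py.sh            # verify the offline repo by installing python
    ./start-registry.sh      # step 3: private image registry on :35000
    ./load-push-images.sh    # step 3: push the pre-downloaded images into it
    ./extract-kubespray.sh   # unpack kubespray; step 5 is an ansible-playbook run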

Preparing the Base Environment

  • Run download-all.sh and check the resulting sizes

      root@admin:~/kubespray-offline# ./download-all.sh
      ###중략##########
      root@admin:~/kubespray-offline# du -sh ~/.venv
      491M    /root/.venv
    
      root@admin:~/kubespray-offline# tree ~/.venv | more
      /root/.venv
      └── 3.12
          ├── bin
          │   ├── activate
          │   ├── activate.csh
          │   ├── activate.fish
          │   ├── Activate.ps1
          │   ├── ansible
          │   ├── ansible-community
          │   ├── ansible-config
          │   ├── ansible-connection
          │   ├── ansible-console
          │   ├── ansible-doc
          │   ├── ansible-galaxy
          │   ├── ansible-inventory
          │   ├── ansible-playbook
          │   ├── ansible-pull
    
      root@admin:~/kubespray-offline# tree /root/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/
      /root/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/
      ├── docker-daemon.json
      ├── generate_list.sh
      ├── generate_list.yml
      ├── manage-offline-container-images.sh
      ├── manage-offline-files.sh
      ├── nginx.conf
      ├── README.md
      ├── registries.conf
      ├── temp
      │   ├── files.list
      │   ├── files.list.template
      │   ├── images.list
      │   └── images.list.template
      └── upload2artifactory.py
    
      2 directories, 13 files
    
      root@admin:~/kubespray-offline# du -sh /root/kubespray-offline/outputs/
      3.7G    /root/kubespray-offline/outputs/
  • [1] Move into the outputs directory and run setup-container.sh (which also runs install-containerd.sh)

      root@admin:~/kubespray-offline# cd outputs/
    
      root@admin:~/kubespray-offline/outputs# ./setup-container.sh
      ==> Install runc
      ==> Install nerdctl
      nerdctl
      containerd-rootless-setuptool.sh
      containerd-rootless.sh
      ==> Install containerd
      bin/containerd-stress
      bin/containerd
      bin/ctr
      bin/containerd-shim-runc-v2
      ==> Start containerd
      Created symlink '/etc/systemd/system/multi-user.target.wants/containerd.service' → '/etc/systemd/system/containerd.service'.
      ==> Install CNI plugins
      ./
      ./README.md
      ./static
      ./host-device
      ./ipvlan
      ./dhcp
      ./LICENSE
      ./portmap
      ./tap
      ./host-local
      ./vlan
      ./loopback
      ./sbr
      ./firewall
      ./bandwidth
      ./bridge
      ./vrf
      ./macvlan
      ./tuning
      ./dummy
      ./ptp
      ==> Load registry, nginx images
      unpacking docker.io/library/registry:2.8.1 (sha256:1e6c7d1be0dd576c7e50f786e3333382e209907eba72f5414a91025af241e16d)...
      Loaded image: registry:2.8.1
      unpacking docker.io/library/registry:3.0.0 (sha256:09d6d68c85b98bac6699850ad2c071714e01a3bb7f67f68e636b98e4123275d2)...
      Loaded image: registry:3.0.0
      unpacking docker.io/library/nginx:1.28.0-alpine (sha256:dc8e6d3967a06c0c9bb10d16cfc5770686de05da4c34d4224ef2aec61142e8f1)...
      Loaded image: nginx:1.28.0-alpine
      unpacking docker.io/library/nginx:1.29.4 (sha256:93c49ce72e039396ca3c51a43e2703bfc165500d1fe0faa697fb60c9e60fe99f)...
      Loaded image: nginx:1.29.4
    
      root@admin:~/kubespray-offline/outputs# cat /etc/containerd/config.toml
      version = 2
      root = "/var/lib/containerd"
      state = "/run/containerd"
      oom_score = 0
    
      [grpc]
        address = "/run/containerd/containerd.sock"
        uid = 0
        gid = 0
    
      [debug]
        address = "/run/containerd/debug.sock"
        uid = 0
        gid = 0
        level = "info"
    
      [metrics]
        address = ""
        grpc_histogram = false
    
      [cgroup]
        path = ""
    
      [plugins]
        [plugins."io.containerd.grpc.v1.cri".containerd]
          default_runtime_name = "runc"
          snapshotter = "overlayfs"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          systemdCgroup = true
      root@admin:~/kubespray-offline/outputs# cat /etc/systemd/system/containerd.service
      # Copyright The containerd Authors.
      #
      # Licensed under the Apache License, Version 2.0 (the "License");
      # you may not use this file except in compliance with the License.
      # You may obtain a copy of the License at
      #
      #     http://www.apache.org/licenses/LICENSE-2.0
      #
      # Unless required by applicable law or agreed to in writing, software
      # distributed under the License is distributed on an "AS IS" BASIS,
      # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      # See the License for the specific language governing permissions and
      # limitations under the License.
    
      [Unit]
      Description=containerd container runtime
      Documentation=https://containerd.io
      After=network.target local-fs.target
    
      [Service]
      ExecStartPre=-/sbin/modprobe overlay
      ExecStart=/usr/local/bin/containerd
    
      Type=notify
      Delegate=yes
      KillMode=process
      Restart=always
      RestartSec=5
      # Having non-zero Limit*s causes performance problems due to accounting overhead
      # in the kernel. We recommend using cgroups to do container-local accounting.
      LimitNPROC=infinity
      LimitCORE=infinity
      LimitNOFILE=infinity
      # Comment TasksMax if your systemd version does not supports it.
      # Only systemd 226 and above support this version.
      TasksMax=infinity
      OOMScoreAdjust=-999
    
      [Install]
      WantedBy=multi-user.target
      root@admin:~/kubespray-offline/outputs# systemctl status containerd.service --no-pager
      ● containerd.service - containerd container runtime
           Loaded: loaded (/etc/systemd/system/containerd.service; enabled; preset: disabled)
           Active: active (running) since Fri 2026-02-13 22:30:24 KST; 1min 12s ago
       Invocation: 642c661b243c4e7198228361ee5c4310
             Docs: https://containerd.io
          Process: 19509 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
         Main PID: 19511 (containerd)
            Tasks: 10
           Memory: 634.9M (peak: 636.7M)
              CPU: 3.348s
           CGroup: /system.slice/containerd.service
                   └─19511 /usr/local/bin/containerd
    
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.267143354+09:00" level=info msg="Start strea… server"
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.267151440+09:00" level=info msg="Registered …ith NRI"
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.267158303+09:00" level=info msg="runtime int…g up..."
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.267163092+09:00" level=info msg="starting plugins..."
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.267173832+09:00" level=info msg="Synchronizi…e state"
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.267988368+09:00" level=info msg=serving... a…bug.sock
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.268123675+09:00" level=info msg=serving... a…ck.ttrpc
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.268172639+09:00" level=info msg=serving... a…erd.sock
      Feb 13 22:30:24 admin containerd[19511]: time="2026-02-13T22:30:24.268615622+09:00" level=info msg="containerd …091145s"
      Feb 13 22:30:24 admin systemd[1]: Started containerd.service - containerd container runtime.
      Hint: Some lines were ellipsized, use -l to show in full.
      root@admin:~/kubespray-offline/outputs# nerdctl images
      REPOSITORY    TAG              IMAGE ID        CREATED               PLATFORM       SIZE       BLOB SIZE
      nginx         1.29.4           93c49ce72e03    About a minute ago    linux/amd64    171MB      164.3MB
      nginx         1.28.0-alpine    dc8e6d3967a0    About a minute ago    linux/amd64    51.18MB    49.69MB
      registry      3.0.0            09d6d68c85b9    About a minute ago    linux/amd64    58.44MB    58.26MB
      registry      2.8.1            1e6c7d1be0dd    About a minute ago    linux/amd64    26.65MB    26.49MB
  • [2] Run start-nginx.sh: serves files, images, pypi, and rpms over HTTP (a quick check follows the output below)

      root@admin:~/kubespray-offline/outputs# cp nginx-default.conf nginx-default.bak
      root@admin:~/kubespray-offline/outputs# cat << EOF > nginx-default.conf
      server {
          listen       80;
          listen  [::]:80;
          server_name  localhost;
    
          location / {
              root   /usr/share/nginx/html;
              # index  index.html index.htm;
    
              autoindex on;                 # show directory listings
              autoindex_exact_size off;     # human-readable sizes (KB/MB/GB)
              autoindex_localtime on;       # timestamps in server local time
          }
    
          error_page   500 502 503 504  /50x.html;
          location = /50x.html {
              root   /usr/share/nginx/html;
          }
    
          # Force sendfile to off
          sendfile off;
      }
      EOF
      root@admin:~/kubespray-offline/outputs# ./start-nginx.sh
      ===> Stop nginx
      nginx
      nginx
      ===> Start nginx
      f6c4f06c2d2cb8e2c70dca6eb4c4d1e1a2afe3449df46064c99e53695a8da371
      root@admin:~/kubespray-offline/outputs# nerdctl ps
      CONTAINER ID    IMAGE                             COMMAND                   CREATED          STATUS    PORTS    NAMES
      f6c4f06c2d2c    docker.io/library/nginx:1.29.4    "/docker-entrypoint.…"    5 seconds ago    Up                 nginx
      root@admin:~/kubespray-offline/outputs# ss -tnlp | grep nginx
      LISTEN 0      511          0.0.0.0:80         0.0.0.0:*    users:(("nginx",pid=19951,fd=6),("nginx",pid=19950,fd=6),("nginx",pid=19949,fd=6),("nginx",pid=19948,fd=6),("nginx",pid=19912,fd=6))
      LISTEN 0      511             [::]:80            [::]:*    users:(("nginx",pid=19951,fd=7),("nginx",pid=19950,fd=7),("nginx",pid=19949,fd=7),("nginx",pid=19948,fd=7),("nginx",pid=19912,fd=7))
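
    With autoindex enabled, the mirror trees can be browsed from any node; a quick check (the rpms/local path matches the offline.repo baseurl used later):

      # Browse the offline file server (directory listing is on)
      curl -s http://192.168.10.10/               # top level: files, images, pypi, rpms
      curl -s http://192.168.10.10/rpms/local/ | head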


  • [3] Run setup-offline.sh: configures the offline yum repo and sets the PyPI mirror globally

      root@admin:~/kubespray-offline/outputs# cat /etc/redhat-release
      Rocky Linux release 10.0 (Red Quartz)
      root@admin:~/kubespray-offline/outputs# ./setup-offline.sh
      /bin/rm: cannot remove '/etc/yum.repos.d/offline.repo': No such file or directory
      ===> Disable all yumrepositories
      ===> Setup local yum repository
      [offline-repo]
      name=Offline repo
      baseurl=http://localhost/rpms/local/
      enabled=1
      gpgcheck=0
      ===> Setup PyPI mirror
      root@admin:~/kubespray-offline/outputs# tree /etc/yum.repos.d/
      /etc/yum.repos.d/
      ├── offline.repo
      ├── rocky-addons.repo.original
      ├── rocky-devel.repo.original
      ├── rocky-extras.repo.original
      └── rocky.repo.original
    
      1 directory, 5 files
    
      root@admin:~/kubespray-offline/outputs# cat ~/.config/pip/pip.conf
      [global]
      index = http://localhost/pypi/
      index-url = http://localhost/pypi/
      trusted-host = localhost
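
    With this pip.conf in place, every pip invocation on the admin node resolves against the local mirror; a quick way to confirm (the package choice here is arbitrary):

      pip config list                        # global.index-url points at the mirror
      pip download ansible -d /tmp/piptest   # should be served from http://localhost/pypi/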
  • [4] Run setup-py.sh: attempts to install python${PY} from the offline repo, which also verifies that the repo works

      root@admin:~/kubespray-offline/outputs# ./setup-py.sh
      ===> Install python, venv, etc
      Offline repo                                                                            9.6 MB/s |  85 kB     00:00
      Package python3-3.12.12-3.el10_1.x86_64 is already installed.
      Dependencies resolved.
      Nothing to do.
      Complete!
      root@admin:~/kubespray-offline/outputs# source pyver.sh
      root@admin:~/kubespray-offline/outputs# echo -e "python_version $python${PY}"
      python_version 3.12
      root@admin:~/kubespray-offline/outputs# dnf info python3
      Last metadata expiration check: 0:00:11 ago on Fri 13 Feb 2026 10:34:50 PM KST.
      Installed Packages
      Name         : python3
      Version      : 3.12.12
      Release      : 3.el10_1
      Architecture : x86_64
      Size         : 31 k
      Source       : python3.12-3.12.12-3.el10_1.src.rpm
      Repository   : @System
      From repo    : baseos
      Summary      : Python 3.12 interpreter
      URL          : https://www.python.org/
      License      : Python-2.0.1
  • [5] Run start-registry.sh: starts the container image registry as a container

      root@admin:~/kubespray-offline/outputs# ./start-registry.sh
      ===> Start registry
      62e114b329464d54f5c06611b8b2ef55add063a821ef9fc0bfa00d5ca123ff34
      root@admin:~/kubespray-offline/outputs# source config.sh
      root@admin:~/kubespray-offline/outputs# echo -e "registry_port: $REGISTRY_PORT"
      registry_port: 35000
      root@admin:~/kubespray-offline/outputs# nerdctl ps
      CONTAINER ID    IMAGE                               COMMAND                   CREATED          STATUS    PORTS    NAMES
      62e114b32946    docker.io/library/registry:3.0.0    "/entrypoint.sh /etc…"    9 seconds ago    Up                 registry
      f6c4f06c2d2c    docker.io/library/nginx:1.29.4      "/docker-entrypoint.…"    4 minutes ago    Up                 nginx
      root@admin:~/kubespray-offline/outputs# ss -tnlp | grep registry
      LISTEN 0      4096               *:5001             *:*    users:(("registry",pid=20064,fd=7))                          
      LISTEN 0      4096               *:35000            *:*    users:(("registry",pid=20064,fd=3)) 
    
      root@admin:~/kubespray-offline/outputs# curl 192.168.10.10:5001/metrics
      # HELP go_gc_duration_seconds A summary of the wall-time pause (stop-the-world) duration in garbage collection cycles.
      # TYPE go_gc_duration_seconds summary
      go_gc_duration_seconds{quantile="0"} 0.000606569
      go_gc_duration_seconds{quantile="0.25"} 0.000606569
      go_gc_duration_seconds{quantile="0.5"} 0.000680398
      go_gc_duration_seconds{quantile="0.75"} 0.000680398
      go_gc_duration_seconds{quantile="1"} 0.000680398
      go_gc_duration_seconds_sum 0.001286967
      go_gc_duration_seconds_count 2
  • [6] Run load-push-images.sh: loads the saved image archives and pushes them to the registry

      root@admin:~/kubespray-offline/outputs# echo -e "cpu arch: $IMAGE_ARCH"
      cpu arch: amd64
      root@admin:~/kubespray-offline/outputs# echo -e "Additional container registry hosts: $ADDITIONAL_CONTAINER_REGISTRY_LIST"
      Additional container registry hosts: myregistry.io
      root@admin:~/kubespray-offline/outputs# ls -l images/*.tar.gz
      -rw-r--r--. 1 root root  11194751 Feb 13 22:21 images/docker.io_amazon_aws-alb-ingress-controller-v1.1.9.tar.gz
      -rw-r--r--. 1 root root 175407403 Feb 13 22:22 images/docker.io_amazon_aws-ebs-csi-driver-v0.5.0.tar.gz
      -rw-r--r--. 1 root root 101536170 Feb 13 22:18 images/docker.io_cloudnativelabs_kube-router-v2.1.1.tar.gz
      -rw-r--r--. 1 root root   4735281 Feb 13 22:16 images/docker.io_flannel_flannel-cni-plugin-v1.7.1-flannel1.tar.gz
    
      load_images() {
          for image in $BASEDIR/images/*.tar.gz; do
              echo "===> Loading $image"
              sudo $NERDCTL load --all-platforms -i $image || exit 1
          done
      }
    
      root@admin:~/kubespray-offline/outputs# nerdctl images | more
      REPOSITORY                                               TAG
       IMAGE ID        CREATED               PLATFORM       SIZE       BLOB SIZE
      localhost:35000/kube-proxy                               v1.34.3
       fbe99026b627    34 seconds ago        linux/amd64    75.24MB    73.14MB
      localhost:35000/kube-scheduler                           v1.34.3
       f9e384f4d1e8    35 seconds ago        linux/amd64    55.6MB     53.85MB
      localhost:35000/kube-controller-manager                  v1.34.3
       685d1c802d6c    40 seconds ago        linux/amd64    77.75MB    76MB
      localhost:35000/kube-apiserver                           v1.34.3
       7dd47dd94b4d    46 seconds ago        linux/amd64    90.79MB    89.04MB
      localhost:35000/metallb/controller                       v0.13.9
       b9859bda36a2    51 seconds ago        linux/amd64    64.36MB    64.35MB
      localhost:35000/metallb/speaker                          v0.13.9
       36f24e20f6aa    57 seconds ago        linux/amd64    114MB      114MB
      localhost:35000/kubernetesui/metrics-scraper             v1.0.8
       04131c31ea1c    58 seconds ago        linux/amd64    43.82MB    43.82MB
      localhost:35000/kubernetesui/dashboard                   v2.7.0
    
      root@admin:~/kubespray-offline/outputs# nerdctl images | grep -i kube-proxy
      localhost:35000/kube-proxy                               v1.34.3                                                         fbe99026b627    52 seconds ago        linux/amd64    75.24MB    73.14MB
      registry.k8s.io/kube-proxy                               v1.34.3                                                         fbe99026b627    5 minutes ago         linux/amd64    75.24MB    73.14MB
    
      root@admin:~/kubespray-offline/outputs# nerdctl images | grep localhost | wc -l
      55
      root@admin:~/kubespray-offline/outputs# nerdctl images | grep -v localhost | wc -l
      56
    
      root@admin:~/kubespray-offline/outputs# curl -s http://localhost:35000/v2/_catalog | jq
      {
        "repositories": [
          "amazon/aws-alb-ingress-controller",
          "amazon/aws-ebs-csi-driver",
          "calico/apiserver",
          "calico/cni",
          "calico/kube-controllers",
          "calico/node",
          "calico/typha",
          "cilium/certgen",


  • [7] Run extract-kubespray.sh: unpacks the kubespray source archive

      root@admin:~/kubespray-offline/outputs# ls -lh files/kubespray-*
      -rw-r--r--. 1 root root 2.5M Feb 13 22:06 files/kubespray-2.30.0.tar.gz
      root@admin:~/kubespray-offline/outputs# tree patches/
      patches/
      └── 2.18.0
          ├── 0001-nerdctl-insecure-registry-config-8339.patch
          ├── 0002-Update-config.toml.j2-8340.patch
          └── 0003-generate-list-8537.patch
    
      2 directories, 3 files
      root@admin:~/kubespray-offline/outputs# ./extract-kubespray.sh
      kubespray-2.30.0/
      kubespray-2.30.0/.ansible-lint
      kubespray-2.30.0/.ansible-lint-ignore
      kubespray-2.30.0/.editorconfig
      kubespray-2.30.0/.gitattributes
      kubespray-2.30.0/.github/
      kubespray-2.30.0/.github/ISSUE_TEMPLATE/
      kubespray-2.30.0/.github/ISSUE_TEMPLATE/bug-report.yaml
      kubespray-2.30.0/.github/ISSUE_TEMPLATE/config.yml
      kubespray-2.30.0/.github/ISSUE_TEMPLATE/enhancement.yaml
      kubespray-2.30.0/.github/ISSUE_TEMPLATE/failing-test.yaml
      kubespray-2.30.0/.github/PULL_REQUEST_TEMPLATE.md
      kubespray-2.30.0/.github/dependabot.yml
      kubespray-2.30.0/.github/workflows/
    
      ########## 중략 #################
    
      root@admin:~/kubespray-offline/outputs# tree kubespray-2.30.0/ -L 1
      kubespray-2.30.0/
      ├── ansible.cfg
      ├── CHANGELOG.md
      ├── cluster.yml
      ├── CNAME
      ├── code-of-conduct.md
      ├── _config.yml
      ├── contrib
      ├── CONTRIBUTING.md
      ├── Dockerfile
      ├── docs
      ├── extra_playbooks
      ├── galaxy.yml
      ├── index.html
      ├── inventory
      ├── library
      ├── LICENSE
      ├── logo
      ├── meta
      ├── OWNERS
      ├── OWNERS_ALIASES
      ├── pipeline.Dockerfile
      ├── playbooks
      ├── plugins
      ├── README.md
      ├── recover-control-plane.yml
      ├── RELEASE.md
      ├── remove-node.yml
      ├── remove_node.yml
      ├── requirements.txt
      ├── reset.yml
      ├── roles
      ├── scale.yml
      ├── scripts
      ├── SECURITY_CONTACTS
      ├── test-infra
      ├── tests
      ├── upgrade-cluster.yml
      ├── upgrade_cluster.yml
      └── Vagrantfile
    
      14 directories, 26 files

Installing kubespray

  • Install kubespray and deploy the cluster (the playbook invocation is sketched after the output below)

      root@admin:~/kubespray-offline/outputs# python --version
      Python 3.12.12
      root@admin:~/kubespray-offline/outputs# python3.12 -m venv ~/.venv/3.12
      root@admin:~/kubespray-offline/outputs# source ~/.venv/3.12/bin/activate
      ((3.12) ) root@admin:~/kubespray-offline/outputs# which ansible
      /root/.venv/3.12/bin/ansible
      ((3.12) ) root@admin:~/kubespray-offline/outputs# tree ~/.venv/3.12/ -L 4
      /root/.venv/3.12/
      ├── bin
      │   ├── activate
      │   ├── activate.csh
      │   ├── activate.fish
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ssh k8s-node1 dnf repolist
      repo id                          repo name
      appstream                        Rocky Linux 10 - AppStream
      baseos                           Rocky Linux 10 - BaseOS
      extras                           Rocky Linux 10 - Extras
      offline-repo                     Offline repo for kubespray
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ssh k8s-node2 dnf repolist
      repo id                          repo name
      offline-repo                     Offline repo for kubespray
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ssh k8s-node1 dnf repolist
      repo id                          repo name
      offline-repo                     Offline repo for kubespray
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ssh k8s-node2 cat /etc/resolv.conf
      nameserver 192.168.10.10
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ssh k8s-node2 cat /etc/NetworkManager/conf.d/dns.conf
    
      [global-dns-domain-*]
      servers = 10.233.0.3,192.168.10.10
      [global-dns]
      searches = default.svc.cluster.local,svc.cluster.local
      options = ndots:2,timeout:2,attempts:2
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# kubectl get deploy,sts,ds -n kube-system -owide
      NAME                             READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS       IMAGES                                                     SELECTOR
      deployment.apps/coredns          2/2     2            2           3m24s   coredns          192.168.10.10:35000/coredns/coredns:v1.12.1                k8s-app=kube-dns
      deployment.apps/metrics-server   1/1     1            1           3m3s    metrics-server   192.168.10.10:35000/metrics-server/metrics-server:v0.8.0   app.kubernetes.io/name=metrics-server,version=0.8.0
    
      NAME                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE     CONTAINERS     IMAGES                                        SELECTOR
      daemonset.apps/kube-flannel              2         2         2       2            2           <none>                   3m41s   kube-flannel   192.168.10.10:35000/flannel/flannel:v0.27.3   app=flannel
      daemonset.apps/kube-flannel-ds-arm       0         0         0       0            0           <none>                   3m41s   kube-flannel   192.168.10.10:35000/flannel/flannel:v0.27.3   app=flannel
      daemonset.apps/kube-flannel-ds-arm64     0         0         0       0            0           <none>                   3m41s   kube-flannel   192.168.10.10:35000/flannel/flannel:v0.27.3   app=flannel
      daemonset.apps/kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   3m41s   kube-flannel   192.168.10.10:35000/flannel/flannel:v0.27.3   app=flannel
      daemonset.apps/kube-flannel-ds-s390x     0         0         0       0            0           <none>                   3m41s   kube-flannel   192.168.10.10:35000/flannel/flannel:v0.27.3   app=flannel
      daemonset.apps/kube-proxy                2         2         2       2            2           kubernetes.io/os=linux   4m27s   kube-proxy     192.168.10.10:35000/kube-proxy:v1.34.3        k8s-app=kube-proxy
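
    The playbook run that actually deploys the cluster is elided above; based on the invocation used later with --tags containerd, it is the standard kubespray entry point run from inside the venv:

      # Deploy the cluster (run inside the venv, from the kubespray directory)
      cd ~/kubespray-offline/outputs/kubespray-2.30.0
      ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml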
  • [k8s-node] Check the registry-related settings on the nodes

      root@k8s-node1:~# crictl images
      IMAGE                                               TAG                 IMAGE ID            SIZE
      192.168.10.10:35000/coredns/coredns                 v1.12.1             52546a367cc9e       76.1MB
      192.168.10.10:35000/flannel/flannel-cni-plugin      v1.7.1-flannel1     48b5d33f9a21f       11MB
      192.168.10.10:35000/flannel/flannel                 v0.27.3             3475d115f79b6       92.2MB
      192.168.10.10:35000/kube-apiserver                  v1.34.3             aa27095f56193       89MB
      192.168.10.10:35000/kube-controller-manager         v1.34.3             5826b25d990d7       76MB
      192.168.10.10:35000/kube-proxy                      v1.34.3             36eef8e07bdd6       73.1MB
      192.168.10.10:35000/kube-scheduler                  v1.34.3             aec12dadf56dd       53.8MB
      192.168.10.10:35000/metrics-server/metrics-server   v0.8.0              b9e1e3849e070       83.7MB
      192.168.10.10:35000/pause                           3.10.1              cd073f4c5f6a8       739kB
      root@k8s-node1:~# cat /etc/containerd/certs.d/192.168.10.10\:35000/hosts.toml
      server = "https://192.168.10.10:35000"
      [host."http://192.168.10.10:35000"]
        capabilities = ["pull","resolve"]
        skip_verify = true
        override_path = false
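
    In this hosts.toml, the server line is only the canonical name; the [host] entry redirects pulls to plain HTTP with TLS verification skipped. A pull can be exercised directly (image reference taken from the crictl images list above):

      crictl pull 192.168.10.10:35000/coredns/coredns:v1.12.1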
  • Apply a kube_version change in kubespray-offline and download the matching files

      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# cd /root/kubespray-offline
      ((3.12) ) root@admin:~/kubespray-offline# tree cache/kubespray-2.30.0/contrib/offline/temp/
      cache/kubespray-2.30.0/contrib/offline/temp/
      ├── files.list
      ├── files.list.template
      ├── images.list
      └── images.list.template
    
      1 directory, 4 files
      ((3.12) ) root@admin:~/kubespray-offline# mv cache/kubespray-2.30.0/contrib/offline/temp/files.list cache/kubespray-2.30.0/contrib/offline/temp/files-2.list
      ((3.12) ) root@admin:~/kubespray-offline# mv cache/kubespray-2.30.0/contrib/offline/temp/images.list cache/kubespray-2.30.0/contrib/offline/temp/images-2.list
      ((3.12) ) root@admin:~/kubespray-offline# cat download-kubespray-files.sh
      #!/bin/bash
    
      umask 022
    
      source ./config.sh
      source scripts/common.sh
      source scripts/images.sh
    
      KUBESPRAY_DIR=./cache/kubespray-${KUBESPRAY_VERSION}
      if [ ! -e $KUBESPRAY_DIR ]; then
          echo "No kubespray dir at $KUBESPRAY_DIR"
          exit 1
      fi
    
      FILES_DIR=outputs/files
    
      # Decide relative directory of file from URL
      #
      # kubernetes/vx.x.x        : kubeadm/kubectl/kubelet
      # kubernetes/etcd          : etcd
      # kubernetes/cni           : CNI plugins
      # kubernetes/cri-tools     : crictl
      # kubernetes/calico/vx.x.x : calico
      # kubernetes/calico        : calicoctl
      # runc/vx.x.x              : runc
      # cilium-cli/vx.x.x        : cilium-cli
      # gvisor/{ver}/{arch}      : gvisor (runsc, containerd-shim)
      # skopeo/vx.x.x            : skopeo
      # yq/vx.x.x                : yq
      #
      decide_relative_dir() {
          local url=$1
          local rdir
          rdir=$url
          rdir=$(echo $rdir | sed "s@.*/\(v[0-9.]*\)/.*/kube\(adm\|ctl\|let\)@kubernetes/\1@g")
          rdir=$(echo $rdir | sed "s@.*/etcd-.*.tar.gz@kubernetes/etcd@")
          rdir=$(echo $rdir | sed "s@.*/cni-plugins.*.tgz@kubernetes/cni@")
          rdir=$(echo $rdir | sed "s@.*/crictl-.*.tar.gz@kubernetes/cri-tools@")
          rdir=$(echo $rdir | sed "s@.*/\(v.*\)/calicoctl-.*@kubernetes/calico/\1@")
          rdir=$(echo $rdir | sed "s@.*/\(v.*\)/runc.${IMAGE_ARCH}@runc/\1@")
          rdir=$(echo $rdir | sed "s@.*/\(v.*\)/cilium-linux-.*@cilium-cli/\1@")
          rdir=$(echo $rdir | sed "s@.*/\([^/]*\)/\([^/]*\)/runsc@gvisor/\1/\2@")
          rdir=$(echo $rdir | sed "s@.*/\([^/]*\)/\([^/]*\)/containerd-shim-runsc-v1@gvisor/\1/\2@")
          rdir=$(echo $rdir | sed "s@.*/\(v[^/]*\)/skopeo-linux-.*@skopeo/\1@")
          rdir=$(echo $rdir | sed "s@.*/\(v[^/]*\)/yq_linux_*@yq/\1@")
          if [ "$url" != "$rdir" ]; then
              echo $rdir
              return
          fi
    
          rdir=$(echo $rdir | sed "s@.*/calico/.*@kubernetes/calico@")
          if [ "$url" != "$rdir" ]; then
              echo $rdir
          else
              echo ""
          fi
      }
    
      get_url() {
          url=$1
          filename="${url##*/}"
    
          rdir=$(decide_relative_dir $url)
    
          if [ -n "$rdir" ]; then
              if [ ! -d $FILES_DIR/$rdir ]; then
                  mkdir -p $FILES_DIR/$rdir
              fi
          else
              rdir="."
          fi
    
          if [ ! -e $FILES_DIR/$rdir/$filename ]; then
              echo "==> Download $url"
              for i in {1..3}; do
                  curl --location --show-error --fail --output $FILES_DIR/$rdir/$filename $url && return
                  echo "curl failed. Attempt=$i"
              done
              echo "Download failed, exit : $url"
              exit 1
          else
              echo "==> Skip $url"
          fi
      }
    
      # execute offline generate_list.sh
      generate_list() {
          #if [ $KUBESPRAY_VERSION == "2.18.0" ]; then
          #    export containerd_version=${containerd_version:-1.5.8}
          #    export host_os=linux
          #    export image_arch=amd64
          #fi
          LANG=C /bin/bash ${KUBESPRAY_DIR}/contrib/offline/generate_list.sh || exit 1
    
          #if [ $KUBESPRAY_VERSION == "2.18.0" ]; then
          #    # check roles/download/default/main.yml to decide version
          #    snapshot_controller_tag=${snapshot_controller_tag:-v4.2.1}
          #    sed -i "s@\(.*/snapshot-controller:\)@\1${snapshot_controller_tag}@" ${KUBESPRAY_DIR}/contrib/offline/temp/images.list || exit 1
          #fi
      }
    
      . ./target-scripts/venv.sh
    
      generate_list
    
      mkdir -p $FILES_DIR
    
      cp ${KUBESPRAY_DIR}/contrib/offline/temp/files.list $FILES_DIR/
      cp ${KUBESPRAY_DIR}/contrib/offline/temp/images.list $IMAGES_DIR/
    
      # download files
      files=$(cat ${FILES_DIR}/files.list)
      for i in $files; do
          get_url $i
      done
    
      # download images
      ./download-images.sh || exit 1
      ((3.12) ) root@admin:~/kubespray-offline# cp download-kubespray-files.sh download-kubespray-files.bak
      ((3.12) ) root@admin:~/kubespray-offline# sed -i '/generate_list$/,$ { /generate_list/!d }' download-kubespray-files.sh
      ((3.12) ) root@admin:~/kubespray-offline# diff download-kubespray-files.sh download-kubespray-files.bak
      104a105,118
      >
      > mkdir -p $FILES_DIR
      >
      > cp ${KUBESPRAY_DIR}/contrib/offline/temp/files.list $FILES_DIR/
      > cp ${KUBESPRAY_DIR}/contrib/offline/temp/images.list $IMAGES_DIR/
      >
      > # download files
      > files=$(cat ${FILES_DIR}/files.list)
      > for i in $files; do
      >     get_url $i
      > done
      >
      > # download images
      > ./download-images.sh || exit 1
      ((3.12) ) root@admin:~/kubespray-offline# sed -i 's|offline/generate_list.sh|offline/generate_list.sh -e kube_version=1.33.7|g' download-kubespray-files.sh
      ((3.12) ) root@admin:~/kubespray-offline# cat download-kubespray-files.sh | grep kube_version
          LANG=C /bin/bash ${KUBESPRAY_DIR}/contrib/offline/generate_list.sh -e kube_version=1.33.7 || exit 1
      ((3.12) ) root@admin:~/kubespray-offline# ./download-kubespray-files.sh
      python3 = python3.12
      VENV_DIR = /root/.venv/3.12
      [WARNING]: No inventory was parsed, only implicit localhost is available
      [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match
      'all'
    
      PLAY [Collect container images for offline deployment] *****************************************************************
      Friday 13 February 2026  23:07:52 +0900 (0:00:00.016)       0:00:00.016 *******
      Friday 13 February 2026  23:07:52 +0900 (0:00:00.013)       0:00:00.030 *******
      Friday 13 February 2026  23:07:52 +0900 (0:00:00.015)       0:00:00.045 *******
      Friday 13 February 2026  23:07:52 +0900 (0:00:00.015)       0:00:00.061 *******
      Friday 13 February 2026  23:07:52 +0900 (0:00:00.014)       0:00:00.075 *******
      Friday 13 February 2026  23:07:53 +0900 (0:00:00.012)       0:00:00.088 *******
      Friday 13 February 2026  23:07:53 +0900 (0:00:00.021)       0:00:00.110 *******
      Friday 13 February 2026  23:07:53 +0900 (0:00:00.013)       0:00:00.123 *******
      Friday 13 February 2026  23:07:53 +0900 (0:00:00.016)       0:00:00.139 *******
      Friday 13 February 2026  23:07:53 +0900 (0:00:00.011)       0:00:00.150 *******
      Friday 13 February 2026  23:07:53 +0900 (0:00:00.423)       0:00:00.574 *******
    
      TASK [Collect container images for offline deployment] *****************************************************************
      changed: [localhost] => (item=files)
      changed: [localhost] => (item=images)
    
      PLAY RECAP *************************************************************************************************************
      localhost                  : ok=1    changed=1    unreachable=0    failed=0    skipped=10   rescued=0    ignored=0
    
      Friday 13 February 2026  23:07:54 +0900 (0:00:00.937)       0:00:01.512 *******
      ===============================================================================
      Collect container images for offline deployment ----------------------------------------------------------------- 0.94s
      download : Download | Download files / images ------------------------------------------------------------------- 0.42s
      download : Prep_download | Register docker images info ---------------------------------------------------------- 0.02s
      download : Prep_download | Create local cache for files and images on control node ------------------------------ 0.02s
      download : Prep_download | On localhost, check if passwordless root is possible --------------------------------- 0.02s
      download : Prep_download | On localhost, check if user has access to the container runtime without using sudo --- 0.02s
      download : Prep_download | Parse the outputs of the previous commands ------------------------------------------- 0.01s
      download : Prep_download | Set a few facts ---------------------------------------------------------------------- 0.01s
      download : Prep_download | Create staging directory on remote node ---------------------------------------------- 0.01s
      download : Prep_download | Check that local user is in group or can become root --------------------------------- 0.01s
      download : Download | Get kubeadm binary and list of required images -------------------------------------------- 0.01s
      ((3.12) ) root@admin:~/kubespray-offline# cd cache/kubespray-2.30.0/contrib/offline/temp
      ((3.12) ) root@admin:~/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/temp# diff files-2.list files.list
      1,3c1,3
      < https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubelet
      < https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl
      < https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubeadm
      ---
      > https://dl.k8s.io/release/v1.33.7/bin/linux/amd64/kubelet
      > https://dl.k8s.io/release/v1.33.7/bin/linux/amd64/kubectl
      > https://dl.k8s.io/release/v1.33.7/bin/linux/amd64/kubeadm
      9,10c9,10
      < https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.34.0/crictl-v1.34.0-linux-amd64.tar.gz
      < https://storage.googleapis.com/cri-o/artifacts/cri-o.amd64.v1.34.4.tar.gz
      ---
      > https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.33.0/crictl-v1.33.0-linux-amd64.tar.gz
      > https://storage.googleapis.com/cri-o/artifacts/cri-o.amd64.v1.33.8.tar.gz


Air-Gapped Hands-On with K8S

Deploying a Sample App

  • nginx[alpine]

       ((3.12) ) root@admin:~/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/temp# cat << EOF | kubectl apply -f -
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          app: nginx
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: nginx
                image: nginx:alpine   # docker.io/library/nginx:alpine
                ports:
                  - containerPort: 80
      EOF
    
      ## The deployment fails: no external registry is reachable
       ((3.12) ) root@admin:~/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/temp# kubectl describe pod
        Warning  Failed     8s    kubelet            Failed to pull image "nginx:alpine": rpc error: code = DeadlineExceeded desc = failed to pull and unpack image "docker.io/library/nginx:alpine": failed to resolve image: failed to do request: Head "https://registry-1.docker.io/v2/library/nginx/manifests/alpine": dial tcp 98.95.137.26:443: i/o timeout
        Warning  Failed     8s    kubelet            Error: ErrImagePull
        Normal   BackOff    8s    kubelet            Back-off pulling image "nginx:alpine"
        Warning  Failed     8s    kubelet            Error: ImagePullBackOff
  • Push the images to the (container) image registry

      ((3.12) ) root@admin:~/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/temp# podman pull nginx:alpine
      ✔ docker.io/library/nginx:alpine
      Trying to pull docker.io/library/nginx:alpine...
      Getting image source signatures
      Copying blob 955a8478f9ac done   |
      Copying blob 589002ba0eae done   |
      Copying blob 3e2c181db1b0 done   |
      Copying blob bca5d04786e1 done   |
      Copying blob 6b7b6c7061b7 done   |
      Copying blob 399d0898a94e done   |
      Copying blob 6d397a54a185 done   |
      Copying blob 5e7756927bef done   |
      Copying config b76de378d5 done   |
      Writing manifest to image destination
      b76de378d57272a1dd9091a05dd548a3639dfb792ebdbf95d06704d2950afdea
      ((3.12) ) root@admin:~/kubespray-offline/cache/kubespray-2.30.0/contrib/offline/temp# podman images | grep nginx
      docker.io/library/nginx                                alpine                                                        b76de378d572  8 days ago     63.5 MB
      docker.io/library/nginx                                1.29.4                                                        248d2326f351  10 days ago    164 MB
      registry.k8s.io/ingress-nginx/controller               v1.13.3                                                       c44d76c3213e  4 months ago   334 MB
      docker.io/library/nginx                                1.28.0-alpine                                                 c318e336065b  9 months ago   49.7 MB
    
      ((3.12) ) root@admin:~# curl -s 192.168.10.10:35000/v2/library/nginx/tags/list | jq
      {
        "name": "library/nginx",
        "tags": [
          "1.28.0-alpine",
          "1.29.4"
        ]
      }
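
    The tag-and-push step that populated the registry is not captured above. With podman it follows the usual pattern; a sketch (the --tls-verify=false flag is needed because the mirror registry at 192.168.10.10:35000 serves plain HTTP):

      # sketch: retag toward the internal registry, then push
      podman tag docker.io/library/nginx:1.29.4 192.168.10.10:35000/library/nginx:1.29.4
      podman push --tls-verify=false 192.168.10.10:35000/library/nginx:1.29.4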
  • Set containerd_registries_mirrors values in kubespray, then apply with --tags containerd

      ((3.12) ) root@admin:~# cat /root/kubespray-offline/outputs/kubespray-2.30.0/inventory/mycluster/group_vars/all/offline.yml | head -n 15
      #
      # offline.yml sample
      #
    
      http_server: "http://192.168.10.10"
      registry_host: "192.168.10.10:35000"
    
      # Insecure registries for containerd
      containerd_registries_mirrors:
        - prefix: "{{ registry_host }}"
          mirrors:
            - host: "http://{{ registry_host }}"
              capabilities: ["pull", "resolve"]
              skip_verify: true
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml -e kube_version="1.34.3" --tags containerd        
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ssh k8s-node2 tree /etc/containerd
      /etc/containerd
      ├── certs.d
      │   ├── 192.168.10.10:35000
      │   │   └── hosts.toml
      │   ├── docker.io
      │   │   └── hosts.toml
      │   ├── quay.io
      │   │   └── hosts.toml
      │   └── registry-1.docker.io
      │       └── hosts.toml
      ├── config.toml
      └── cri-base.json
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# ssh k8s-node2 cat /etc/containerd/certs.d/quay.io/hosts.toml
      server = "https://quay.io"
      [host."http://192.168.10.10:35000"]
        capabilities = ["pull","resolve"]
        skip_verify = true
        override_path = false
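
    With these hosts.toml files in place, containerd tries http://192.168.10.10:35000 first for pulls from docker.io, quay.io, and registry-1.docker.io. A quick way to confirm from a node is to pull an image that is already in the mirror (a sketch; any mirrored tag works):

      ssh k8s-node2 crictl pull docker.io/library/nginx:1.29.4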


Helm Chart Repository

  • Write an nginx Helm chart and package it as a deployable tgz

      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# podman images
      REPOSITORY                                             TAG                                                           IMAGE ID      CREATED        SIZE
      docker.io/library/nginx                                alpine                                                        b76de378d572  8 days ago     63.5 MB
      192.168.10.10:35000/library/nginx                      alpine     
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# podman pull nginx:1.28.0-alpine
      Trying to pull docker.io/library/nginx:1.28.0-alpine...     
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# podman tag nginx:1.28.0-alpine 192.168.10.10:35000/library/nginx:1.28.0-alpine
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0# curl -s 192.168.10.10:35000/v2/library/nginx/tags/list | jq
      {
        "name": "library/nginx",
        "tags": [
          "1.28.0-alpine",
          "1.29.4",
          "alpine"
        ]
      }               
    
      ## Write the chart
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# tree
      .
      ├── Chart.yaml
      ├── templates
      │   ├── deployment.yaml
      │   └── service.yaml
      └── values.yaml
    
      2 directories, 4 files
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# helm package .
      Successfully packaged chart and saved it to: /root/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart/nginx-chart-1.0.0.tgz
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# tar -tzf nginx-chart-1.0.0.tgz
      nginx-chart/Chart.yaml
      nginx-chart/values.yaml
      nginx-chart/templates/deployment.yaml
      nginx-chart/templates/service.yaml
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# zcat nginx-chart-1.0.0.tgz | tar -xOf - nginx-chart/Chart.yaml
      apiVersion: v2
      appVersion: 1.28.0-alpine
      description: A Helm chart for deploying Nginx with custom index.html
      name: nginx-chart
      type: application
      version: 1.0.0
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# helm uninstall dev-nginx
      release "dev-nginx" uninstalled
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# helm list
      NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION
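
    The chart's templates are not reproduced in the capture. A minimal templates/deployment.yaml consistent with the Chart.yaml above might look like the following sketch (the .Values keys are assumptions, not the actual chart):

      # templates/deployment.yaml (sketch; .Values keys are assumed)
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: {{ .Release.Name }}-nginx
      spec:
        replicas: {{ .Values.replicaCount }}
        selector:
          matchLabels:
            app: {{ .Release.Name }}-nginx
        template:
          metadata:
            labels:
              app: {{ .Release.Name }}-nginx
          spec:
            containers:
              - name: nginx
                # e.g. 192.168.10.10:35000/library/nginx:1.28.0-alpine in this lab
                image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
                ports:
                  - containerPort: 80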
  • Download an external public chart as a packaged tgz, copy it to the deployment server, and use it

      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# podman pull docker.io/bitnami/nginx:latest
      Trying to pull docker.io/bitnami/nginx:latest...
      Getting image source signatures
      Copying blob 39137f68a782 done   |
      Copying config f335e22d6b done   |
      Writing manifest to image destination
      f335e22d6b5abdfc5de5c5d8fd54f6af452e0e4cf7c6c003e7ddf1649e2d0387
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# podman tag bitnami/nginx:latest 192.168.10.10:35000/bitnami/nginx:latest
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# podman push 192.168.10.10:35000/bitnami/nginx:latest
      Getting image source signatures
      Copying blob 83eeb01b5597 done   |
      Copying config f335e22d6b done   |
      Writing manifest to image destination
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# helm get metadata my-nginx
      NAME: my-nginx
      CHART: nginx
      VERSION: 22.4.7
      APP_VERSION: 1.29.5
      ANNOTATIONS: fips=true,images=- name: git
        version: 2.53.0
        image: registry-1.docker.io/bitnami/git:latest
      - name: nginx
        version: 1.29.5
        image: registry-1.docker.io/bitnami/nginx:latest
      - name: nginx-exporter
        version: 1.5.1
        image: registry-1.docker.io/bitnami/nginx-exporter:latest
      ,licenses=Apache-2.0,tanzuCategory=clusterUtility
      DEPENDENCIES: common
      NAMESPACE: default
      REVISION: 1
      STATUS: deployed
      DEPLOYED_AT: 2026-02-13T23:38:51+09:00
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# helm list
      NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART          APP VERSION
      my-nginx        default         1               2026-02-13 23:38:51.442908021 +0900 KST deployed        nginx-22.4.7   1.29.5
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# kubectl get deploy -owide
      NAME       READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                      SELECTOR
      my-nginx   1/1     1            1           14s   nginx        registry-1.docker.io/bitnami/nginx:latest   app.kubernetes.io/instance=my-nginx,app.kubernetes.io/name=nginx
      nginx      1/1     1            1           15m   nginx        nginx:alpine                                app=nginx
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# helm get manifest my-nginx | grep 'image:'
                image: registry-1.docker.io/bitnami/nginx:latest
                image: registry-1.docker.io/bitnami/nginx:latest
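
    Note that the rendered manifest still references registry-1.docker.io/bitnami/nginx:latest. The pull nevertheless succeeds offline because the certs.d mirror entry for registry-1.docker.io (shown earlier) redirects containerd to 192.168.10.10:35000, where bitnami/nginx:latest was pushed above.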
  • Build a Helm chart repository (ChartMuseum) on the internal network, then upload and use the nginx chart

      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# chmod 777 /data/chartmuseum/charts
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# podman run -d \
        --name chartmuseum \
        -p 8080:8080 \
        -v /data/chartmuseum/charts:/charts \
        -e STORAGE=local \
        -e STORAGE_LOCAL_ROOTDIR=/charts \
        -e DEBUG=true \
        ghcr.io/helm/chartmuseum:v0.16.4
      Trying to pull ghcr.io/helm/chartmuseum:v0.16.4...
      Getting image source signatures
      Copying blob 589002ba0eae skipped: already exists
      Copying blob aa06029dd384 done   |
      Copying blob c1c9b34da041 done   |
      Copying config 281626b9b5 done   |
      Writing manifest to image destination
      fb47663c6e175287500ddfffbf200aead634d15fb00e719adf8716953d462841
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# helm repo add internal http://192.168.10.10:8080
      "internal" has been added to your repositories
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# helm repo update
      Hang tight while we grab the latest from your chart repositories...
      ...Successfully got an update from the "internal" chart repository
      Update Complete. ⎈Happy Helming!⎈
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-oci-reg# helm repo list
      NAME            URL
      internal        http://192.168.10.10:8080
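
    The upload step itself is not captured. ChartMuseum exposes a plain HTTP API, so pushing the packaged chart is a one-liner (a sketch):

      curl --data-binary "@nginx-chart-1.0.0.tgz" http://192.168.10.10:8080/api/charts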
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# curl -s http://192.168.10.10:8080/api/charts | jq
      {
        "nginx-chart": [
          {
            "name": "nginx-chart",
            "version": "1.0.0",
            "description": "A Helm chart for deploying Nginx with custom index.html",
            "apiVersion": "v2",
            "appVersion": "1.28.0-alpine",
            "type": "application",
            "urls": [
              "charts/nginx-chart-1.0.0.tgz"
            ],
            "created": "2026-02-13T14:44:08.758241429Z",
            "digest": "35ce014a79d3c92287d0784aef47e0fc021f42371141536fbaf410da2271c6fd"
          }
        ]
      }
    
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# helm repo update
      Hang tight while we grab the latest from your chart repositories...
      ...Successfully got an update from the "internal" chart repository
      Update Complete. ⎈Happy Helming!⎈
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# helm install my-nginx internal/nginx-chart
      NAME: my-nginx
      LAST DEPLOYED: Fri Feb 13 23:44:34 2026
      NAMESPACE: default
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# helm list
      NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                   APP VERSION
      my-nginx        default         1               2026-02-13 23:44:34.773014407 +0900 KST deployed        nginx-chart-1.0.0       1.28.0-alpine

Private PyPI(Python Package Index) Mirror

  • [k8s-node] Configure and use pip

      root@k8s-node1:~# curl http://192.168.10.10/pypi/
      <!DOCTYPE html>
      <html>
        <head>
          <title>Simple index</title>
        </head>
        <body>
          <a href="ansible/index.html">ansible</a>
          <a href="ansible-core/index.html">ansible-core</a>
          <a href="cffi/index.html">cffi</a>
          <a href="cryptography/index.html">cryptography</a>
          <a href="cython/index.html">Cython</a>
          <a href="distro/index.html">distro</a>
          <a href="flit-core/index.html">flit_core</a>
          <a href="jinja2/index.html">Jinja2</a>
          <a href="jmespath/index.html">jmespath</a>
          <a href="markupsafe/index.html">MarkupSafe</a>
          <a href="netaddr/index.html">netaddr</a>
          <a href="packaging/index.html">packaging</a>
          <a href="pip/index.html">pip</a>
          <a href="pycparser/index.html">pycparser</a>
          <a href="pyyaml/index.html">PyYAML</a>
          <a href="resolvelib/index.html">resolvelib</a>
          <a href="ruamel-yaml/index.html">ruamel.yaml</a>
          <a href="selinux/index.html">selinux</a>
          <a href="setuptools/index.html">setuptools</a>
          <a href="wheel/index.html">wheel</a>
        </body>
    
    
root@k8s-node1:~# pip list | grep -i netaddr
root@k8s-node1:~# pip install netaddr
Looking in indexes: http://192.168.10.10/pypi
Collecting netaddr
  Downloading http://192.168.10.10/pypi/netaddr/netaddr-1.3.0-py3-none-any.whl (2.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 46.0 MB/s eta 0:00:00
Installing collected packages: netaddr
Successfully installed netaddr-1.3.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
root@k8s-node1:~# pip list | grep -i netaddr
netaddr                   1.3.0

root@k8s-node1:~# pip install httpx
Looking in indexes: http://192.168.10.10/pypi
ERROR: Could not find a version that satisfies the requirement httpx (from versions: none)
ERROR: No matching distribution found for httpx
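
The pip configuration on the k8s nodes is not shown above. For pip to accept the plain-HTTP mirror it needs both index-url and trusted-host; a sketch of what /etc/pip.conf (or ~/.config/pip/pip.conf) on a node would contain:

# sketch: node-side pip.conf for the HTTP mirror
[global]
index-url = http://192.168.10.10/pypi/
trusted-host = 192.168.10.10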

  • Install an additional package into the PyPI mirror, then use it

      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# cat /root/.config/pip/pip.conf
      [global]
      index = http://localhost/pypi/
      index-url = http://localhost/pypi/
      trusted-host = localhost
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# mv /root/.config/pip/pip.conf /root/.config/pip/pip.bak
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# pip install httpx
      Collecting httpx
        Downloading httpx-0.28.1-py3-none-any.whl.metadata (7.1 kB)
      Collecting anyio (from httpx)
        Downloading anyio-4.12.1-py3-none-any.whl.metadata (4.3 kB)
      Collecting certifi (from httpx)
        Downloading certifi-2026.1.4-py3-none-any.whl.metadata (2.5 kB)
      Collecting httpcore==1.* (from httpx)
        Downloading httpcore-1.0.9-py3-none-any.whl.metadata (21 kB)
      Collecting idna (from httpx)
        Downloading idna-3.11-py3-none-any.whl.metadata (8.4 kB)
      Collecting h11>=0.16 (from httpcore==1.*->httpx)
        Downloading h11-0.16.0-py3-none-any.whl.metadata (8.3 kB)
      Collecting typing_extensions>=4.5 (from anyio->httpx)
        Downloading typing_extensions-4.15.0-py3-none-any.whl.metadata (3.3 kB)
      Downloading httpx-0.28.1-py3-none-any.whl (73 kB)
      Downloading httpcore-1.0.9-py3-none-any.whl (78 kB)
      Downloading h11-0.16.0-py3-none-any.whl (37 kB)
      Downloading anyio-4.12.1-py3-none-any.whl (113 kB)
      Downloading idna-3.11-py3-none-any.whl (71 kB)
      Downloading typing_extensions-4.15.0-py3-none-any.whl (44 kB)
      Downloading certifi-2026.1.4-py3-none-any.whl (152 kB)
      Installing collected packages: typing_extensions, idna, h11, certifi, httpcore, anyio, httpx
      Successfully installed anyio-4.12.1 certifi-2026.1.4 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 idna-3.11 typing_extensions-4.15.0
      ((3.12) ) root@admin:~/kubespray-offline/outputs/kubespray-2.30.0/nginx-chart# pip list | grep httpx
      httpx              0.28.1
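
    The step that actually adds httpx to the mirror is elided here. Conceptually, the wheels are fetched on the connected admin host and dropped into the nginx-served mirror tree, after which the per-package simple index is regenerated (kubespray-offline ships helper scripts for this; the exact command varies by version). A generic sketch:

      # sketch: fetch httpx and all of its dependencies as wheels
      pip download httpx -d ./pypi-files
      # then copy the wheels into the mirror tree and rebuild the
      # simple-index HTML (tool-specific, e.g. with pip2pi's dir2pi)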
    
      ((3.12) ) root@admin:~/kubespray-offline# curl http://192.168.10.10/pypi/
      <!DOCTYPE html>
      <html>
        <head>
          <title>Simple index</title>
        </head>
        <body>
          <a href="ansible/index.html">ansible</a>
          <a href="ansible-core/index.html">ansible-core</a>
          <a href="cffi/index.html">cffi</a>
          <a href="cryptography/index.html">cryptography</a>
          <a href="cython/index.html">Cython</a>
          <a href="distro/index.html">distro</a>
          <a href="flit-core/index.html">flit_core</a>
          <a href="httpx/index.html">httpx</a>
          <a href="jinja2/index.html">Jinja2</a>
          <a href="jmespath/index.html">jmespath</a>
          <a href="markupsafe/index.html">MarkupSafe</a>
          <a href="netaddr/index.html">netaddr</a>
          <a href="packaging/index.html">packaging</a>
          <a href="pip/index.html">pip</a>
          <a href="pycparser/index.html">pycparser</a>
          <a href="pyyaml/index.html">PyYAML</a>
          <a href="resolvelib/index.html">resolvelib</a>
          <a href="ruamel-yaml/index.html">ruamel.yaml</a>
          <a href="selinux/index.html">selinux</a>
          <a href="setuptools/index.html">setuptools</a>
          <a href="wheel/index.html">wheel</a>
        </body>
      </html>((3.12) ) root@admin:~/kubespray-offline# pip install httpx
      Requirement already satisfied: httpx in /root/.venv/3.12/lib64/python3.12/site-packages (0.28.1)
      Requirement already satisfied: anyio in /root/.venv/3.12/lib64/python3.12/site-packages (from httpx) (4.12.1)
      Requirement already satisfied: certifi in /root/.venv/3.12/lib64/python3.12/site-packages (from httpx) (2026.1.4)
      Requirement already satisfied: httpcore==1.* in /root/.venv/3.12/lib64/python3.12/site-packages (from httpx) (1.0.9)
      Requirement already satisfied: idna in /root/.venv/3.12/lib64/python3.12/site-packages (from httpx) (3.11)
      Requirement already satisfied: h11>=0.16 in /root/.venv/3.12/lib64/python3.12/site-packages (from httpcore==1.*->httpx) (0.16.0)
      Requirement already satisfied: typing_extensions>=4.5 in /root/.venv/3.12/lib64/python3.12/site-packages (from anyio->httpx) (4.15.0)


Lab Environment Deployment

NAME       Description             CPU  RAM  NIC1       NIC2           Init Script
admin-lb   Runs kubespray, API LB  2    1GB  10.0.2.15  192.168.10.10  admin-lb.sh
k8s-node1  K8S ControlPlane        4    2GB  10.0.2.15  192.168.10.11  init_cfg.sh
k8s-node2  K8S ControlPlane        4    2GB  10.0.2.15  192.168.10.12  init_cfg.sh
k8s-node3  K8S ControlPlane        4    2GB  10.0.2.15  192.168.10.13  init_cfg.sh
k8s-node4  K8S Worker              4    2GB  10.0.2.15  192.168.10.14  init_cfg.sh
k8s-node5  K8S Worker              4    2GB  10.0.2.15  192.168.10.15  init_cfg.sh

1. Deploy the lab environment

 howoo@ttokkang-ui-MacBookAir  ~/Desktop/work/Gasida_series/k8s-ha-kubespary  curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-ha-kubespary/Vagrantfile
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1894  100  1894    0     0   4894      0 --:--:-- --:--:-- --:--:--  4906
 howoo@ttokkang-ui-MacBookAir  ~/Desktop/work/Gasida_series/k8s-ha-kubespary  curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-ha-kubespary/admin-lb.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6930  100  6930    0     0  10127      0 --:--:-- --:--:-- --:--:-- 10116
 howoo@ttokkang-ui-MacBookAir  ~/Desktop/work/Gasida_series/k8s-ha-kubespary  curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-ha-kubespary/init_cfg.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1709  100  1709    0     0   5381      0 --:--:-- --:--:-- --:--:--  5374

howoo@ttokkang-ui-MacBookAir  ~/Desktop/work/Gasida_series/k8s-ha-kubespary  vagrant up
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
Bringing machine 'k8s-node3' up with 'virtualbox' provider...
Bringing machine 'k8s-node4' up with 'virtualbox' provider...
Bringing machine 'k8s-node5' up with 'virtualbox' provider...
Bringing machine 'admin-lb' up with 'virtualbox' provider...
==> k8s-node1: Cloning VM...
==> k8s-node1: Matching MAC address for NAT networking...
==> k8s-node1: Checking if box 'bento/rockylinux-10.0' version '202510.26.0' is up to date...
==> k8s-node1: Setting the name of the VM: k8s-node1
==> k8s-node1: Clearing any previously set network interfaces...
==> k8s-node1: Preparing network interfaces based on configuration...
    k8s-node1: Adapter 1: nat
    k8s-node1: Adapter 2: hostonly
==> k8s-node1: Forwarding ports...
    k8s-node1: 22 (guest) => 60001 (host) (adapter 1)
==> k8s-node1: Running 'pre-boot' VM customizations...
==> k8s-node1: Booting VM...
==> k8s-node1: Waiting for machine to boot. This may take a few minutes...
    k8s-node1: SSH address: 127.0.0.1:60001
    k8s-node1: SSH username: vagrant
    k8s-node1: SSH auth method: private key
                                                                    .
                                                                    .
                                                                    .

    admin-lb: curl: (6) Could not resolve host: raw.githubusercontent.com
    admin-lb: [TASK 16] ETC
    admin-lb: >>>> Initial Config End <<<<

howoo@ttokkang-ui-MacBookAir  ~/Desktop/work/Gasida_series/k8s-ha-kubespary  vagrant status
Current machine states:

k8s-node1                 running (virtualbox)
k8s-node2                 running (virtualbox)
k8s-node3                 running (virtualbox)
k8s-node4                 running (virtualbox)
k8s-node5                 running (virtualbox)
admin-lb                  running (virtualbox)

howoo@ttokkang-ui-MacBookAir ~/ vagrant ssh admin-lb

root@admin-lb:~# sh admin-lb.sh 5
>>>> Initial Config Start <<<<
[TASK 1] Change Timezone and Enable NTP
[TASK 2] Disable firewalld and selinux
[TASK 3] Setting Local DNS Using Hosts file
admin-lb.sh: line 20: ((: i<=: syntax error: operand expected (error token is "<=")
[TASK 4] Delete default routing - enp0s9 NIC
[TASK 5] Install kubectl
                                                                        .
                                                                        .
                                                                        .

+ echo 'sudo su -'
+ echo '>>>> Initial Config End <<<<'
>>>> Initial Config End <<<<

## Deploy init_cfg.sh to the k8s_node servers with Ansible
root@admin-lb:~# cat /etc/ansible/hosts 
[k8s_nodes]
k8s-node1 ansible_host=192.168.10.11
k8s-node2 ansible_host=192.168.10.12
k8s-node3 ansible_host=192.168.10.13
k8s-node4 ansible_host=192.168.10.14
k8s-node5 ansible_host=192.168.10.15

[k8s_nodes:vars]
ansible_user=root

ssh-copy-id root@192.168.10.11
ssh-copy-id root@192.168.10.12
ssh-copy-id root@192.168.10.13
ssh-copy-id root@192.168.10.14
ssh-copy-id root@192.168.10.15

ansible k8s_nodes -m copy -a "src=/root/init_cfg.sh dest=/root/init_cfg.sh mode=0755"
ansible k8s_nodes -m shell -a "/root/init_cfg.sh"

TASK [Copy init script] ***************************************************************************************************************************************
ok: [k8s-node1]
ok: [k8s-node3]
ok: [k8s-node5]
ok: [k8s-node4]
ok: [k8s-node2]

TASK [Run init script] ****************************************************************************************************************************************
changed: [k8s-node4]
changed: [k8s-node2]
changed: [k8s-node5]
changed: [k8s-node1]
changed: [k8s-node3]

PLAY RECAP ****************************************************************************************************************************************************
k8s-node1                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8s-node2                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8s-node3                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8s-node4                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
k8s-node5                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

# Check HAProxy status
systemctl status haproxy.service --no-pager
journalctl -u haproxy.service --no-pager
ss -tnlp | grep haproxy
LISTEN 0      3000         0.0.0.0:6443       0.0.0.0:*    users:(("haproxy",pid=4915,fd=7))  # k8s api loadbalancer
LISTEN 0      3000         0.0.0.0:9000       0.0.0.0:*    users:(("haproxy",pid=4915,fd=8))  # haproxy stats dashboard
LISTEN 0      3000         0.0.0.0:8405       0.0.0.0:*    users:(("haproxy",pid=4915,fd=9))  # metrics exporter

# Access the stats page
open http://192.168.10.10:9000/haproxy_stats

# (Reference) Hit the Prometheus metrics endpoint
curl http://192.168.10.10:8405/metrics
                                                                        .
                                                                        .
                                                                        .
haproxy_resolver_truncated{resolver="default",nameserver="0.192.249.101"} 0
haproxy_resolver_truncated{resolver="default",nameserver="8.8.8.8"} 0
# HELP haproxy_resolver_outdated Outdated
# TYPE haproxy_resolver_outdated gauge
haproxy_resolver_outdated{resolver="default",nameserver="0.192.249.101"} 0
haproxy_resolver_outdated{resolver="default",nameserver="8.8.8.8"} 0
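
The haproxy.cfg itself is generated by admin-lb.sh; a representative fragment consistent with the three listeners above would be (a sketch, not the actual file):

frontend k8s-api
    bind 0.0.0.0:6443
    mode tcp
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    balance roundrobin
    server k8s-node1 192.168.10.11:6443 check
    server k8s-node2 192.168.10.12:6443 check
    server k8s-node3 192.168.10.13:6443 check

listen stats
    bind 0.0.0.0:9000
    mode http
    stats enable
    stats uri /haproxy_stats

frontend prometheus
    bind 0.0.0.0:8405
    mode http
    http-request use-service prometheus-exporter if { path /metrics }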

  2. Deploy k8s with kubespray
                                                                            .
                                                                            .
                                                                            .
        ]
    },
    "kube_node": {
        "hosts": [
            "k8s-node4"
        ]
    }
}
root@admin-lb:~/kubespray# ansible-inventory -i /root/kubespray/inventory/mycluster/inventory.ini --graph
@all:
  |--@ungrouped:
  |--@etcd:
  |  |--@kube_control_plane:
  |  |  |--k8s-node1
  |  |  |--k8s-node2
  |  |  |--k8s-node3
  |--@kube_node:
  |  |--k8s-node4


root@admin-lb:~/kubespray# sed -i 's|kube_owner: kube|kube_owner: root|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@admin-lb:~/kubespray# sed -i 's|kube_network_plugin: calico|kube_network_plugin: flannel|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@admin-lb:~/kubespray# sed -i 's|kube_proxy_mode: ipvs|kube_proxy_mode: iptables|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@admin-lb:~/kubespray# sed -i 's|enable_nodelocaldns: true|enable_nodelocaldns: false|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@admin-lb:~/kubespray# grep -iE 'kube_owner|kube_network_plugin:|kube_proxy_mode|enable_nodelocaldns:' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
kube_owner: root
kube_network_plugin: flannel
kube_proxy_mode: iptables
enable_nodelocaldns: false
root@admin-lb:~/kubespray# echo "enable_dns_autoscaler: false" >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@admin-lb:~/kubespray# echo "flannel_interface: enp0s9" >> inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
root@admin-lb:~/kubespray# grep "^[^#]" inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
flannel_interface: enp0s9
root@admin-lb:~/kubespray# sed -i 's|metrics_server_enabled: false|metrics_server_enabled: true|g' inventory/mycluster/group_vars/k8s_cluster/addons.yml
root@admin-lb:~/kubespray# grep -iE 'metrics_server_enabled:' inventory/mycluster/group_vars/k8s_cluster/addons.yml
metrics_server_enabled: true
root@admin-lb:~/kubespray# echo "metrics_server_requests_cpu: 25m"     >> inventory/mycluster/group_vars/k8s_cluster/addons.yml
root@admin-lb:~/kubespray# echo "metrics_server_requests_memory: 16Mi" >> inventory/mycluster/group_vars/k8s_cluster/addons.yml
root@admin-lb:~/kubespray# cat roles/kubespray_defaults/vars/main/checksums.yml | grep -i kube -A40
kubelet_checksums:
  arm64:
    1.33.7: sha256:3035c44e0d429946d6b4b66c593d371cf5bbbfc85df39d7e2a03c422e4fe404a
    1.33.6: sha256:7d8b7c63309cfe2da2331a1ae13cce070b9ba01e487099e7881a4281667c131d  

root@admin-lb:~/kubespray# ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml -e kube_version="1.32.9" | tee kubespray_install.log
                                                                                .
                                                                                .
                                                                                .
Wednesday 04 February 2026  10:49:23 +0900 (0:00:00.043)       0:08:56.430 **** 
=============================================================================== 
system_packages : Manage packages ------------------------------------- 106.28s
kubernetes/kubeadm : Join to cluster if needed ------------------------- 22.68s
download : Download_file | Download item ------------------------------- 16.14s
download : Download_file | Download item ------------------------------- 15.23s
download : Download_file | Download item ------------------------------- 14.49s
download : Download_file | Download item ------------------------------- 14.25s
container-engine/containerd : Download_file | Download item ------------ 13.63s
download : Download_container | Download image if required ------------- 13.49s
download : Download_container | Download image if required ------------- 13.32s
download : Download_container | Download image if required ------------- 12.83s
download : Download_container | Download image if required ------------- 12.62s
container-engine/crictl : Download_file | Download item ---------------- 11.10s
download : Download_container | Download image if required ------------- 10.18s
container-engine/runc : Download_file | Download item ------------------- 9.95s
download : Download_container | Download image if required -------------- 9.56s
download : Download_container | Download image if required -------------- 9.27s
container-engine/nerdctl : Download_file | Download item ---------------- 9.26s
kubernetes/control-plane : Joining control plane node to the cluster. --- 8.97s
download : Download_file | Download item -------------------------------- 8.25s
download : Download_file | Download item -------------------------------- 7.91s

root@admin-lb:~/kubespray# tree /tmp/
/tmp/
├── k8s-node1
├── k8s-node2
├── k8s-node3
├── k8s-node4
├── k8s-node5
├── k9s_linux_arm64.tar.gz
├── k9s_linux_arm64.tar.gz.1
├── LICENSE
├── README.md
├── systemd-private-0fed2a33a5284513bc7e6d84016170ad-chronyd.service-1oIvH2
│   └── tmp
├── systemd-private-0fed2a33a5284513bc7e6d84016170ad-dbus-broker.service-5V53JE
│   └── tmp
├── systemd-private-0fed2a33a5284513bc7e6d84016170ad-irqbalance.service-Bvj3Gc
│   └── tmp
├── systemd-private-0fed2a33a5284513bc7e6d84016170ad-polkit.service-SBFN4q
│   └── tmp
├── systemd-private-0fed2a33a5284513bc7e6d84016170ad-systemd-logind.service-eYQyW3
│   └── tmp
└── vagrant-shell

11 directories, 10 files
root@admin-lb:~/kubespray# ssh k8s-node1 tree /tmp/releases
/tmp/releases
├── cni-plugins-linux-arm64-1.8.0.tgz
├── containerd-2.1.5-linux-arm64.tar.gz
├── containerd-rootless-setuptool.sh
├── containerd-rootless.sh
├── crictl
├── crictl-1.32.0-linux-arm64.tar.gz
├── etcd-3.5.25-linux-arm64.tar.gz
├── etcd-v3.5.25-linux-arm64
│   ├── Documentation
│   │   ├── dev-guide
│   │   │   └── apispec
│   │   │       └── swagger
│   │   │           ├── rpc.swagger.json
│   │   │           ├── v3election.swagger.json
│   │   │           └── v3lock.swagger.json
│   │   └── README.md
│   ├── etcd
│   ├── etcdctl
│   ├── etcdutl
│   ├── README-etcdctl.md
│   ├── README-etcdutl.md
│   ├── README.md
│   └── READMEv2-etcdctl.md
├── images
├── kubeadm-1.32.9-arm64
├── kubectl-1.32.9-arm64
├── kubelet-1.32.9-arm64
├── nerdctl
├── nerdctl-2.1.6-linux-arm64.tar.gz
└── runc-1.3.4.arm64

7 directories, 24 files

## Check etcd backups
root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i tree /var/backups; echo; done
>> k8s-node1 <<
/var/backups
└── etcd-2026-02-04_10:55:41
    ├── member
    │   ├── snap
    │   │   └── db
    │   └── wal
    │       └── 0000000000000000-0000000000000000.wal
    └── snapshot.db

5 directories, 3 files

>> k8s-node2 <<
/var/backups
└── etcd-2026-02-04_10:55:40
    ├── member
    │   ├── snap
    │   │   └── db
    │   └── wal
    │       └── 0000000000000000-0000000000000000.wal
    └── snapshot.db

5 directories, 3 files

>> k8s-node3 <<
/var/backups
└── etcd-2026-02-04_10:55:41
    ├── member
    │   ├── snap
    │   │   └── db
    │   └── wal
    │       └── 0000000000000000-0000000000000000.wal
    └── snapshot.db

5 directories, 3 files

## Verify k8s API access
root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; curl -sk https://192.168.10.1$i:6443/version | grep Version; echo; done
>> k8s-node1 <<
  "gitVersion": "v1.32.9",
  "goVersion": "go1.23.12",

>> k8s-node2 <<
  "gitVersion": "v1.32.9",
  "goVersion": "go1.23.12",

>> k8s-node3 <<
  "gitVersion": "v1.32.9",
  "goVersion": "go1.23.12",

root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; curl -sk https://k8s-node$i:6443/version | grep Version; echo; done
>> k8s-node1 <<
  "gitVersion": "v1.32.9",
  "goVersion": "go1.23.12",

>> k8s-node2 <<
  "gitVersion": "v1.32.9",
  "goVersion": "go1.23.12",

>> k8s-node3 <<
  "gitVersion": "v1.32.9",
  "goVersion": "go1.23.12",

## Verify k8s credentials
root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i kubectl cluster-info -v=6; echo; done
>> k8s-node1 <<
I0204 11:00:47.227034   29567 loader.go:402] Config loaded from file:  /root/.kube/config
I0204 11:00:47.227368   29567 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0204 11:00:47.227378   29567 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0204 11:00:47.227380   29567 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0204 11:00:47.227381   29567 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0204 11:00:47.235211   29567 round_trippers.go:560] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue 200 OK in 5 milliseconds
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

>> k8s-node2 <<
I0204 11:00:47.479391   28901 loader.go:402] Config loaded from file:  /root/.kube/config
I0204 11:00:47.479691   28901 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0204 11:00:47.479700   28901 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0204 11:00:47.479703   28901 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0204 11:00:47.479704   28901 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0204 11:00:47.483018   28901 round_trippers.go:560] GET https://127.0.0.1:6443/api?timeout=32s 200 OK in 3 milliseconds
I0204 11:00:47.484053   28901 round_trippers.go:560] GET https://127.0.0.1:6443/apis?timeout=32s 200 OK in 0 milliseconds
I0204 11:00:47.490292   28901 round_trippers.go:560] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue 200 OK in 3 milliseconds
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

>> k8s-node3 <<
I0204 11:00:47.714411   28937 loader.go:402] Config loaded from file:  /root/.kube/config
I0204 11:00:47.714828   28937 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0204 11:00:47.714839   28937 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0204 11:00:47.714841   28937 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0204 11:00:47.714843   28937 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0204 11:00:47.721851   28937 round_trippers.go:560] GET https://127.0.0.1:6443/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue 200 OK in 5 milliseconds
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

root@admin-lb:~/kubespray# mkdir /root/.kube
root@admin-lb:~/kubespray# scp k8s-node1:/root/.kube/config /root/.kube/
config                                                                                                                       100% 5661     6.7MB/s   00:00    
root@admin-lb:~/kubespray# cat /root/.kube/config | grep server
    server: https://127.0.0.1:6443

root@admin-lb:~/kubespray# ssh k8s-node1 etcdctl.sh member list -w table
+------------------+---------+-------+----------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
|  8b0ca30665374b0 | started | etcd3 | https://192.168.10.13:2380 | https://192.168.10.13:2379 |      false |
| 2106626b12a4099f | started | etcd2 | https://192.168.10.12:2380 | https://192.168.10.12:2379 |      false |
| c6702130d82d740f | started | etcd1 | https://192.168.10.11:2380 | https://192.168.10.11:2379 |      false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i etcdctl.sh endpoint status -w table; echo; done
>> k8s-node1 <<
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | c6702130d82d740f |  3.5.25 |  6.3 MB |      true |      false |         4 |       3103 |               3103 |        |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

>> k8s-node2 <<
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 2106626b12a4099f |  3.5.25 |  6.3 MB |     false |      false |         4 |       3103 |               3103 |        |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

>> k8s-node3 <<
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|    ENDPOINT    |       ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 8b0ca30665374b0 |  3.5.25 |  6.3 MB |     false |      false |         4 |       3103 |               3103 |        |
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
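
etcdctl.sh is a small wrapper that kubespray drops onto the etcd nodes: it exports the TLS material and endpoint before calling etcdctl, roughly like the sketch below (the cert paths are assumptions based on kubespray's default layout):

#!/bin/bash
# sketch of the etcdctl wrapper; cert paths are assumed
export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://127.0.0.1:2379
export ETCDCTL_CACERT=/etc/ssl/etcd/ssl/ca.pem
export ETCDCTL_CERT=/etc/ssl/etcd/ssl/admin-$(hostname).pem
export ETCDCTL_KEY=/etc/ssl/etcd/ssl/admin-$(hostname)-key.pem
etcdctl "$@"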

  3. Verify environment configuration
root@admin-lb:~/kubespray# kubectl get svc -n kube-system coredns
NAME      TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
coredns   ClusterIP   10.233.0.3   <none>        53/UDP,53/TCP,9153/TCP   65m

root@admin-lb:~/kubespray# kubectl get cm -n kube-system kubelet-config -o yaml | grep clusterDNS -A2
    clusterDNS:
    - 10.233.0.3
    clusterDomain: cluster.local

root@admin-lb:~/kubespray# kubectl get svc,ep -n kube-system
NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/coredns          ClusterIP   10.233.0.3     <none>        53/UDP,53/TCP,9153/TCP   66m
service/metrics-server   ClusterIP   10.233.53.50   <none>        443/TCP                  66m

NAME                       ENDPOINTS                                                  AGE
endpoints/coredns          10.233.65.2:53,10.233.66.2:53,10.233.65.2:53 + 3 more...   66m
endpoints/metrics-server   10.233.66.3:10250                                          66m

root@admin-lb:~/kubespray# ssh k8s-node4 curl -sk https://127.0.0.1:6443/version | grep Version
  "gitVersion": "v1.32.9",
  "goVersion": "go1.23.12",

root@admin-lb:~/kubespray# ssh k8s-node4 ss -tnlp | grep nginx
LISTEN 0      511        127.0.0.1:6443       0.0.0.0:*    users:(("nginx",pid=17935,fd=5),("nginx",pid=17934,fd=5),("nginx",pid=17904,fd=5))
LISTEN 0      511          0.0.0.0:8081       0.0.0.0:*    users:(("nginx",pid=17935,fd=6),("nginx",pid=17934,fd=6),("nginx",pid=17904,fd=6))
  4. Control plane node components → k8s API endpoint analysis
root@admin-lb:~/kubespray# kubectl describe pod -n kube-system kube-apiserver-k8s-node1 | grep -E 'address|secure-port'
Annotations:          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.10.11:6443
      --advertise-address=192.168.10.11
      --bind-address=::
      --kubelet-preferred-address-types=InternalDNS,InternalIP,Hostname,ExternalDNS,ExternalIP
      --secure-port=6443

root@admin-lb:~/kubespray# kubectl get cm -n kube-system kube-proxy -o yaml | grep server
        server: https://127.0.0.1:6443
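
The 127.0.0.1:6443 listener on the worker is kubespray's nginx-proxy static pod, which does client-side load balancing across all control plane endpoints. Its stream config is roughly (a sketch, not the generated file):

stream {
    upstream kube_apiserver {
        server 192.168.10.11:6443;
        server 192.168.10.12:6443;
        server 192.168.10.13:6443;
    }
    server {
        listen 127.0.0.1:6443;
        proxy_pass kube_apiserver;
    }
}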

  5. External LB → HA control plane nodes (3) + worker client-side load balancing
root@admin-lb:~/kubespray# helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 \
  --set service.main.type=NodePort,service.main.ports.http.nodePort=30000 \
  --set env.TZ="Asia/Seoul" --namespace kube-system \
  --set image.repository="abihf/kube-ops-view" --set image.tag="latest"

NAME: kube-ops-view
LAST DEPLOYED: Wed Feb  4 12:13:20 2026
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
  export NODE_PORT=$(kubectl get --namespace kube-system -o jsonpath="{.spec.ports[0].nodePort}" services kube-ops-view)
  export NODE_IP=$(kubectl get nodes --namespace kube-system -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT

 root@admin-lb:~/kubespray# kubectl get deploy,pod,svc,ep -n kube-system -l app.kubernetes.io/instance=kube-ops-view
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-ops-view   1/1     1            1           35s

NAME                                 READY   STATUS    RESTARTS   AGE
pod/kube-ops-view-8484bdc5df-cfb22   1/1     Running   0          35s

NAME                    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kube-ops-view   NodePort   10.233.59.124   <none>        8080:30000/TCP   35s

NAME                      ENDPOINTS          AGE
endpoints/kube-ops-view   10.233.66.4:8080   35s

root@admin-lb:~/kubespray# kubectl get deploy,svc,ep webpod -owide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   2/2     2            2           77m   webpod       traefik/whoami   app=webpod

NAME             TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/webpod   NodePort   10.233.51.23   <none>        80:30003/TCP   77m   app=webpod

NAME               ENDPOINTS                       AGE
endpoints/webpod   10.233.66.5:80,10.233.66.6:80   77m

root@admin-lb:~/kubespray# ssh k8s-node1 curl -s webpod -I
HTTP/1.1 200 OK
Date: Wed, 04 Feb 2026 04:49:40 GMT
Content-Length: 202
Content-Type: text/plain; charset=utf-8

root@admin-lb:~/kubespray# ssh k8s-node1 curl -s webpod.default -I
HTTP/1.1 200 OK
Date: Wed, 04 Feb 2026 04:49:59 GMT
Content-Length: 210
Content-Type: text/plain; charset=utf-8
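
The webpod Deployment/Service used throughout was created earlier, off-capture. A minimal manifest matching the output above (traefik/whoami behind a NodePort on 30003) would be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      containers:
        - name: webpod
          image: traefik/whoami   # serves request/host info on :80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
spec:
  type: NodePort
  selector:
    app: webpod
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30003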
  6. (Failure drill) Impact when control plane node 1 goes down

a. Monitor from four terminals

root@k8s-node1:~# poweroff
root@k8s-node1:~# Connection to k8s-node1 closed by remote host.
Connection to k8s-node1 closed.

root@admin-lb:~/kubespray# while true; do kubectl get node ; echo ; curl -sk https://192.168.10.12:6443/version | grep gitVersion ; sleep 1; echo ; done
Unable to connect to the server: dial tcp 192.168.10.11:6443: connect: no route to host
                                                                        .
                                                                        .
                                                                        .
Unable to connect to the server: dial tcp 192.168.10.11:6443: connect: no route to host

  "gitVersion": "v1.32.9",

root@admin-lb:~/kubespray# ^C
root@admin-lb:~/kubespray# sed -i 's/192.168.10.11/192.168.10.12/g' /root/.kube/config
root@admin-lb:~/kubespray# while true; do kubectl get node ; echo ; curl -sk https://192.168.10.12:6443/version | grep gitVersion ; sleep 1; echo ; done
NAME        STATUS     ROLES           AGE    VERSION
k8s-node1   NotReady   control-plane   3h6m   v1.32.9
k8s-node2   Ready      control-plane   3h6m   v1.32.9
k8s-node3   Ready      control-plane   3h6m   v1.32.9
k8s-node4   Ready      <none>          3h6m   v1.32.9

b. Start the k8s-node1 machine again in VirtualBox

  7. External LB → HA control plane nodes (3): configure k8s apiserver access through the LB
root@admin-lb:~/kubespray# curl -sk https://192.168.10.10:6443/version | grep gitVersion
  "gitVersion": "v1.32.9",
root@admin-lb:~/kubespray# sed -i 's/192.168.10.12/192.168.10.10/g' /root/.kube/config

root@admin-lb:~/kubespray# kubectl get node
E0204 14:08:19.726144   21183 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://192.168.10.10:6443/api?timeout=32s\": tls: failed to verify certificate: x509: certificate is valid for 10.233.0.1, 192.168.10.11, 127.0.0.1, ::1, 192.168.10.12, 192.168.10.13, 10.0.2.15, fd17:625c:f037:2:a00:27ff:fe90:eaeb, not 192.168.10.10"
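
The TLS error is expected: the apiserver certificate was never issued for the LB address 192.168.10.10. The configuration change is elided above; in kubespray the LB address is added to the apiserver cert SANs via supplementary_addresses_in_ssl_keys before re-running the control-plane tasks, e.g.:

# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
supplementary_addresses_in_ssl_keys: [192.168.10.10]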

root@admin-lb:~/kubespray# ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "control-plane" --list-tasks
Using /root/kubespray/ansible.cfg as config file
[WARNING]: Could not match supplied host pattern, ignoring: bastion
[WARNING]: Could not match supplied host pattern, ignoring: k8s_cluster
[WARNING]: Could not match supplied host pattern, ignoring: calico_rr
[WARNING]: Could not match supplied host pattern, ignoring:
_kubespray_needs_etcd
                                                                            .
                                                                            .
                                                                            .
      kubernetes/preinstall : Create kubernetes directories   TAGS: [apps, bootstrap_os, control-plane, kube-apiserver, kube-controller-manager, kubelet, network, node, resolvconf]
      kubernetes/preinstall : Create other directories of root owner   TAGS: [apps, bootstrap_os, control-plane, kube-apiserver, kube-controller-manager, kubelet, network, node, resolvconf]

ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "control-plane" --limit kube_control_plane -e kube_version="1.32.9"
                                                                            .
                                                                            .
                                                                            .
Gather necessary facts (network) ---------------------------------------- 0.23s
kubernetes/control-plane : Kubeadm | aggregate all SANs ----------------- 0.22s
etcd : Gen_certs | Get etcd certificate serials ------------------------- 0.19s

root@admin-lb:~/kubespray# kubectl get node -v=6
I0204 14:11:19.264116   23169 loader.go:402] Config loaded from file:  /root/.kube/config
I0204 14:11:19.264614   23169 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0204 14:11:19.264640   23169 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0204 14:11:19.264684   23169 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0204 14:11:19.264700   23169 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0204 14:11:19.272759   23169 round_trippers.go:560] GET https://192.168.10.10:6443/api/v1/nodes?limit=500 200 OK in 5 milliseconds
NAME        STATUS   ROLES           AGE     VERSION
k8s-node1   Ready    control-plane   3h15m   v1.32.9
k8s-node2   Ready    control-plane   3h15m   v1.32.9
k8s-node3   Ready    control-plane   3h15m   v1.32.9
k8s-node4   Ready    <none>          3h14m   v1.32.9
  8. Node management

a. Adding a node

root@admin-lb:~/kubespray# cat << EOF > /root/kubespray/inventory/mycluster/inventory.ini
[kube_control_plane]
k8s-node1 ansible_host=192.168.10.11 ip=192.168.10.11 etcd_member_name=etcd1
k8s-node2 ansible_host=192.168.10.12 ip=192.168.10.12 etcd_member_name=etcd2
k8s-node3 ansible_host=192.168.10.13 ip=192.168.10.13 etcd_member_name=etcd3

[etcd:children]
kube_control_plane

[kube_node]
k8s-node4 ansible_host=192.168.10.14 ip=192.168.10.14
k8s-node5 ansible_host=192.168.10.15 ip=192.168.10.15
EOF
root@admin-lb:~/kubespray# ansible-inventory -i /root/kubespray/inventory/mycluster/inventory.ini --graph
@all:
  |--@ungrouped:
  |--@etcd:
  |  |--@kube_control_plane:
  |  |  |--k8s-node1
  |  |  |--k8s-node2
  |  |  |--k8s-node3
  |--@kube_node:
  |  |--k8s-node4
  |  |--k8s-node5

root@admin-lb:~/kubespray# ansible -i inventory/mycluster/inventory.ini k8s-node5 -m ping
k8s-node5 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

ansible-playbook -i inventory/mycluster/inventory.ini -v scale.yml --list-tasks
                                                                            .
                                                                            .
                                                                            .
      kubernetes/preinstall : Flush handlers    TAGS: [resolvconf]
      kubernetes/preinstall : Check if we are running inside a Azure VM        TAGS: [bootstrap_os, resolvconf]
      Run calico checks TAGS: [resolvconf]

ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v scale.yml --limit=k8s-node5 -e kube_version="1.32.9" | tee kubespray_add_worker_node.log
                                                                            .
                                                                            .
                                                                            .
download : Extract_file | Unpacking archive ----------------------------- 1.50s
container-engine/nerdctl : Extract_file | Unpacking archive ------------- 1.38s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 1.21s

root@admin-lb:~/kubespray# kubectl get node -owide
NAME        STATUS   ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-node1   Ready    control-plane   3h43m   v1.32.9   192.168.10.11   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node2   Ready    control-plane   3h42m   v1.32.9   192.168.10.12   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node3   Ready    control-plane   3h42m   v1.32.9   192.168.10.13   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node4   Ready    <none>          3h42m   v1.32.9   192.168.10.14   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node5   Ready    <none>          46s     v1.32.9   192.168.10.15   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5

root@admin-lb:~/kubespray# kubectl get pod -n kube-system -owide | grep k8s-node5
kube-flannel-ds-arm64-rsd9r         1/1     Running   2 (63s ago)   68s     192.168.10.15   k8s-node5   <none>           <none>
kube-proxy-q4clq                    1/1     Running   0             68s     192.168.10.15   k8s-node5   <none>           <none>
nginx-proxy-k8s-node5               1/1     Running   0             67s     192.168.10.15   k8s-node5   <none>           <none>

root@admin-lb:~/kubespray# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
webpod-697b545f57-bhhvl   1/1     Running   0          130m   10.233.66.6   k8s-node4   <none>           <none>
webpod-697b545f57-hdq76   1/1     Running   0          130m   10.233.66.5   k8s-node4   <none>           <none>
root@admin-lb:~/kubespray# kubectl scale deployment webpod --replicas 1
deployment.apps/webpod scaled
root@admin-lb:~/kubespray# kubectl get pod -owide
NAME                      READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
webpod-697b545f57-hdq76   1/1     Running   0          130m   10.233.66.5   k8s-node4   <none>           <none>
root@admin-lb:~/kubespray# kubectl scale deployment webpod --replicas 2
deployment.apps/webpod scaled
root@admin-lb:~/kubespray# kubectl get pod -owide
NAME                      READY   STATUS              RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
webpod-697b545f57-2fdql   0/1     ContainerCreating   0          2s     <none>        k8s-node5   <none>           <none>
webpod-697b545f57-hdq76   1/1     Running             0          130m   10.233.66.5   k8s-node4   <none>           <none>

b. Removing a node

root@admin-lb:~/kubespray# cat <<EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: webpod
  namespace: default
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: webpod
EOF
poddisruptionbudget.policy/webpod created
root@admin-lb:~/kubespray# kubectl get pdb
NAME     MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
webpod   N/A             0                 0                     4s
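
With maxUnavailable: 0, the eviction API refuses to evict any webpod pod, so the drain phase of remove-node.yml blocks on this PDB. That is why k8s-node5 below stays in the cluster as Ready,SchedulingDisabled: cordoned, but never fully drained.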

ansible-playbook -i inventory/mycluster/inventory.ini -v remove-node.yml --list-tags
ansible-playbook -i inventory/mycluster/inventory.ini -v remove-node.yml -e node=k8s-node5

cat << EOF > /root/kubespray/inventory/mycluster/inventory.ini
[kube_control_plane]
k8s-node1 ansible_host=192.168.10.11 ip=192.168.10.11 etcd_member_name=etcd1
k8s-node2 ansible_host=192.168.10.12 ip=192.168.10.12 etcd_member_name=etcd2
k8s-node3 ansible_host=192.168.10.13 ip=192.168.10.13 etcd_member_name=etcd3

[etcd:children]
kube_control_plane

[kube_node]
k8s-node4 ansible_host=192.168.10.14 ip=192.168.10.14
k8s-node5 ansible_host=192.168.10.15 ip=192.168.10.15
EOF

ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v scale.yml --limit=k8s-node5 -e kube_version="1.32.9" | tee kubespray_add_worker_node.log

root@admin-lb:~/kubespray# kubectl get node -owide
NAME        STATUS                     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-node1   Ready                      control-plane   4h51m   v1.32.9   192.168.10.11   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node2   Ready                      control-plane   4h51m   v1.32.9   192.168.10.12   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node3   Ready                      control-plane   4h51m   v1.32.9   192.168.10.13   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node4   Ready                      <none>          4h51m   v1.32.9   192.168.10.14   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node5   Ready,SchedulingDisabled   <none>          69m     v1.32.9   192.168.10.15   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5

root@admin-lb:~/kubespray# kubectl scale deployment webpod --replicas 1
deployment.apps/webpod scaled

root@admin-lb:~/kubespray# kubectl get node -owide
NAME        STATUS                     ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-node1   Ready                      control-plane   4h52m   v1.32.9   192.168.10.11   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node2   Ready                      control-plane   4h52m   v1.32.9   192.168.10.12   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node3   Ready                      control-plane   4h52m   v1.32.9   192.168.10.13   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node4   Ready                      <none>          4h52m   v1.32.9   192.168.10.14   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node5   Ready,SchedulingDisabled   <none>          70m     v1.32.9   192.168.10.15   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5

root@admin-lb:~/kubespray# kubectl scale deployment webpod --replicas 2
deployment.apps/webpod scaled
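Note that scale.yml re-registers the node but does not clear the cordon left over from the earlier drain, which is why k8s-node5 stays Ready,SchedulingDisabled above and both webpod replicas pile onto k8s-node4. A manual uncordon makes it schedulable again:

kubectl uncordon k8s-node5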
  9. Monitoring setup
root@admin-lb:~/kubespray# kubectl create ns nfs-provisioner
namespace/nfs-provisioner created
root@admin-lb:~/kubespray# helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
"nfs-subdir-external-provisioner" has been added to your repositories
root@admin-lb:~/kubespray# helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -n nfs-provisioner \
    --set nfs.server=192.168.10.10 \
    --set nfs.path=/srv/nfs/share \
    --set storageClass.defaultClass=true
NAME: nfs-provisioner
LAST DEPLOYED: Wed Feb  4 17:31:58 2026
NAMESPACE: nfs-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@admin-lb:~/kubespray# kubectl get sc
NAME                   PROVISIONER                                                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client (default)   cluster.local/nfs-provisioner-nfs-subdir-external-provisioner   Delete          Immediate           true                   43s
root@admin-lb:~/kubespray# kubectl get pod -n nfs-provisioner -owide
NAME                                                              READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
nfs-provisioner-nfs-subdir-external-provisioner-b549b9dff-l25n2   1/1     Running   0          79s   10.233.66.8   k8s-node4   <none>           <none>
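Before wiring Prometheus to the new default StorageClass, a quick smoke test of dynamic provisioning (test-pvc is just a throwaway name):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF

# with volumeBindingMode Immediate it should go Bound right away
kubectl get pvc test-pvc
kubectl delete pvc test-pvc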

root@admin-lb:~/kubespray# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
root@admin-lb:~/kubespray# cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    scrapeInterval: "20s"
    evaluationInterval: "20s"
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    additionalScrapeConfigs:
      - job_name: 'haproxy-metrics'
        static_configs:
          - targets:
              - '192.168.10.10:8405'
    externalLabels:
      cluster: "myk8s-cluster"
  service:
    type: NodePort
    nodePort: 30001

grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator
  service:
    type: NodePort
    nodePort: 30002

alertmanager:
  enabled: false
defaultRules:
  create: false
kubeProxy:
  enabled: false
prometheus-windows-exporter:
  prometheus:
    monitor:
      enabled: false
EOT

helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 80.13.3 \
-f monitor-values.yaml --create-namespace --namespace monitoring

root@admin-lb:~/kubespray# kubectl exec -it sts/prometheus-kube-prometheus-stack-prometheus -n monitoring -c prometheus -- prometheus --version
prometheus, version 3.9.1 (branch: HEAD, revision: 9ec59baffb547e24f1468a53eb82901e58feabd8)
  build user:       root@61c3a9212c9e
  build date:       20260107-16:11:39
  go version:       go1.25.5
  platform:         linux/arm64
  tags:             netgo,builtinassets

 

root@admin-lb:~/kubespray# sed -i -e 's/${DS_PROMETHEUS}/prometheus/g' 12693_rev12.json
root@admin-lb:~/kubespray# sed -i -e 's/${DS__VICTORIAMETRICS-PROD-ALL}/prometheus/g' 15661_rev2.json
root@admin-lb:~/kubespray# sed -i -e 's/${DS_PROMETHEUS}/prometheus/g' k8s-system-api-server.json

root@admin-lb:~/kubespray# kubectl create configmap my-dashboard  --from-file=12693_rev12.json --from-file=15661_rev2.json --from-file=k8s-system-api-server.json -n monitoring
configmap/my-dashboard created
root@admin-lb:~/kubespray# kubectl label configmap my-dashboard grafana_dashboard="1" -n monitoring
configmap/my-dashboard labeled
root@admin-lb:~/kubespray# kubectl exec -it -n monitoring deploy/kube-prometheus-stack-grafana -- ls -l /tmp/dashboards
error: cannot exec into a container in a completed pod; current phase is Succeeded
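The exec fails because kubectl exec deploy/... resolves to an arbitrary pod behind the Deployment and here picked one that had already completed (e.g. left over from the rollout). Targeting a Running pod explicitly avoids this:

kubectl exec -n monitoring "$(kubectl get pod -n monitoring -l app.kubernetes.io/name=grafana --field-selector=status.phase=Running -oname | head -n1)" -c grafana -- ls -l /tmp/dashboards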

root@admin-lb:~/kubespray# cat << EOF >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
etcd_metrics: true
etcd_listen_metrics_urls: "http://0.0.0.0:2381"
EOF

ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "etcd" --list-tasks
ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "etcd" --limit etcd -e kube_version="1.32.9"
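After the etcd-tagged run above completes, each etcd member should expose plaintext metrics on port 2381; a quick probe from admin-lb:

curl -s http://192.168.10.11:2381/metrics | grep etcd_server_has_leader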

root@admin-lb:~/kubespray# ssh k8s-node1 etcdctl.sh member list -w table
+------------------+---------+-------+----------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
|  8b0ca30665374b0 | started | etcd3 | https://192.168.10.13:2380 | https://192.168.10.13:2379 |      false |
| 2106626b12a4099f | started | etcd2 | https://192.168.10.12:2380 | https://192.168.10.12:2379 |      false |
| c6702130d82d740f | started | etcd1 | https://192.168.10.11:2380 | https://192.168.10.11:2379 |      false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
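monitor-add-values.yaml itself never appears in the log; a plausible minimal version, assuming its job is to add the freshly opened etcd :2381 endpoints as a scrape target (additionalScrapeConfigs is replaced as a whole, so the earlier haproxy job has to be repeated):

prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
      - job_name: 'haproxy-metrics'
        static_configs:
          - targets: ['192.168.10.10:8405']
      - job_name: 'etcd-metrics'
        static_configs:
          - targets:
              - '192.168.10.11:2381'
              - '192.168.10.12:2381'
              - '192.168.10.13:2381'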

root@admin-lb:~/kubespray# helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 80.13.3 \
--reuse-values -f monitor-add-values.yaml --namespace monitoring
Release "kube-prometheus-stack" has been upgraded. Happy Helming!
NAME: kube-prometheus-stack
LAST DEPLOYED: Wed Feb  4 17:45:41 2026
NAMESPACE: monitoring
STATUS: deployed
REVISION: 2
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"

Get Grafana 'admin' user password by running:

  kubectl --namespace monitoring get secrets kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo

Access Grafana local instance:

  export POD_NAME=$(kubectl --namespace monitoring get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus-stack" -oname)
  kubectl --namespace monitoring port-forward $POD_NAME 3000

Get your grafana admin user password by running:

  kubectl get secret --namespace monitoring -l app.kubernetes.io/component=admin-secret -o jsonpath="{.items[0].data.admin-password}" | base64 --decode ; echo

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
  10. Upgrade
# Search for flannel-related variables
grep -Rni "flannel" inventory/mycluster/ playbooks/ roles/ --include="*.yml" -A2 -B1
                                                                            .
                                                                            .
                                                                            .
roles/validate_inventory/tasks/main.yml-169-    that:
roles/validate_inventory/tasks/main.yml:170:      - kube_network_plugin in ['calico', 'flannel', 'cloud', 'cilium', 'cni', 'kube-ovn', 'kube-router', 'macvlan', 'custom_cni', 'none']
roles/validate_inventory/tasks/main.yml-171-      - dns_mode in ['coredns', 'coredns_dual', 'manual', 'none']
roles/validate_inventory/tasks/main.yml-172-      - kube_proxy_mode in ['iptables', 'ipvs', 'nftables']

root@admin-lb:~/kubespray# kubectl get ds -n kube-system -owide
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS     IMAGES                               SELECTOR
kube-flannel              0         0         0       0            0           <none>                   22h   kube-flannel   docker.io/flannel/flannel:v0.27.3    app=flannel
kube-flannel-ds-arm       0         0         0       0            0           <none>                   22h   kube-flannel   docker.io/flannel/flannel:v0.27.3    app=flannel
kube-flannel-ds-arm64     5         5         5       5            5           <none>                   22h   kube-flannel   docker.io/flannel/flannel:v0.27.3    app=flannel
kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   22h   kube-flannel   docker.io/flannel/flannel:v0.27.3    app=flannel
kube-flannel-ds-s390x     0         0         0       0            0           <none>                   22h   kube-flannel   docker.io/flannel/flannel:v0.27.3    app=flannel
kube-proxy                5         5         5       5            5           kubernetes.io/os=linux   22h   kube-proxy     registry.k8s.io/kube-proxy:v1.32.9   k8s-app=kube-proxy

root@admin-lb:~/kubespray# cat << EOF >> inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
flannel_version: 0.27.4
EOF
root@admin-lb:~/kubespray# grep "^[^#]" inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
flannel_interface: enp0s9
flannel_version: 0.27.4

ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml --tags "flannel" --list-tasks
ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml --tags "flannel" --limit k8s-node3 -e kube_version="1.32.9"

ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml --tags "flannel" -e kube_version="1.32.9"

root@admin-lb:~/kubespray# kubectl get ds -n kube-system -owide
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS     IMAGES                               SELECTOR
kube-flannel              0         0         0       0            0           <none>                   28h   kube-flannel   docker.io/flannel/flannel:v0.27.4    app=flannel
kube-flannel-ds-arm       0         0         0       0            0           <none>                   28h   kube-flannel   docker.io/flannel/flannel:v0.27.4    app=flannel
kube-flannel-ds-arm64     5         5         4       4            4           <none>                   28h   kube-flannel   docker.io/flannel/flannel:v0.27.4    app=flannel
kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   28h   kube-flannel   docker.io/flannel/flannel:v0.27.4    app=flannel
kube-flannel-ds-s390x     0         0         0       0            0           <none>                   28h   kube-flannel   docker.io/flannel/flannel:v0.27.4    app=flannel
kube-proxy                5         5         5       5            5           kubernetes.io/os=linux   28h   kube-proxy     registry.k8s.io/kube-proxy:v1.32.9   k8s-app=kube-proxy

root@admin-lb:~/kubespray# kubectl get pod -n kube-system -l app=flannel -owide
NAME                          READY   STATUS    RESTARTS   AGE    IP              NODE        NOMINATED NODE   READINESS GATES
kube-flannel-ds-arm64-5v4wx   1/1     Running   0          71s    192.168.10.11   k8s-node1   <none>           <none>
kube-flannel-ds-arm64-62btm   1/1     Running   0          94s    192.168.10.12   k8s-node2   <none>           <none>
kube-flannel-ds-arm64-6xvwt   1/1     Running   0          61s    192.168.10.13   k8s-node3   <none>           <none>
kube-flannel-ds-arm64-mqpgb   1/1     Running   0          106s   192.168.10.15   k8s-node5   <none>           <none>
kube-flannel-ds-arm64-xlkdc   1/1     Running   0          83s    192.168.10.14   k8s-node4   <none>           <none>

# Refresh the facts cache on all nodes before using --limit
ansible-playbook playbooks/facts.yml -b -i inventory/sample/hosts.ini

# Control plane
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 --limit "kube_control_plane:etcd"

# Worker nodes
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 --limit "node4:node6:node7:node12"
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 --limit "node5*"

# Upgrade worker nodes one node at a time
ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 -e "serial=1"
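serial here is Ansible's play batch size: upgrade-cluster.yml takes it from the serial variable (recent Kubespray releases default it to a 20% batch), so -e "serial=1" walks the inventory strictly one node at a time; a percentage such as -e serial=30% also works.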

10-1. k8s upgrade : 1.32.9 → 1.32.10

ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.32.10" --limit "kube_control_plane:etcd" | tee kubespray_upgrade.log
                                                            .
                                                            .
                                                            .
download : Download_file | Download item -------------------------------- 7.85s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 7.61s
etcd : Gen_certs | Write etcd member/admin and kube_control_plane client certs to other etcd nodes --- 6.91s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 6.61s
system_packages : Manage packages --------------------------------------- 6.53s
container-engine/containerd : Containerd | Unpack containerd archive ---- 5.73s
download : Prep_kubeadm_images | Copy kubeadm binary from download dir to system path --- 4.90s

root@admin-lb:~/kubespray# kubectl get node -owide
NAME        STATUS                     ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-node1   Ready                      control-plane   29h   v1.32.10   192.168.10.11   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node2   Ready                      control-plane   29h   v1.32.10   192.168.10.12   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node3   Ready                      control-plane   29h   v1.32.10   192.168.10.13   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node4   Ready                      <none>          29h   v1.32.9    192.168.10.14   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5
k8s-node5   Ready,SchedulingDisabled   <none>          25h   v1.32.9    192.168.10.15   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.aarch64   containerd://2.1.5

ansible-playbook upgrade-cluster.yml -b -i inventory/sample/hosts.ini -e kube_version=1.20.7 -e "serial=1"
# Worker upgrade took about 2 + 2 minutes; on the (first-ever) run the kube-proxy DaemonSet seemed to restart on all nodes (?) : 1.32.9 → 1.32.10
ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.32.10" --limit "k8s-node5"
# After verifying, run the remaining node
ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.32.10" --limit "k8s-node4"

TASK [upgrade/pre-upgrade : Cordon node] ***************************************
changed: [k8s-node4 -> k8s-node1(192.168.10.11)] => {"changed": true, "cmd": ["/usr/local/bin/kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf", "cordon", "k8s-node4"], "delta": "0:00:00.073049", "end": "2026-02-05 16:14:18.040359", "msg": "", "rc": 0, "start": "2026-02-05 16:14:17.967310", "stderr": "", "stderr_lines": [], "stdout": "node/k8s-node4 cordoned", "stdout_lines": ["node/k8s-node4 cordoned"]}
Thursday 05 February 2026  15:55:32 +0900 (0:00:00.245)       0:01:00.392 ***** 
                                                        .
                                                        .
                                                        .


a. What is Kubespray?

  • Kubespray is an Ansible-based open-source deployment tool for automatically installing, upgrading, and managing Kubernetes clusters (like kubeadm, one of the Kubernetes deployment tools)
    • Roles and features (covers cluster operations end to end)
      • Creating new clusters
      • Cluster (control plane) upgrades
      • Cluster scaling
      • Node management - adding and removing nodes
      • Cluster reset
      • Configuration management
      • Backup/restore, including an etcd snapshot during upgrades
    • Each Kubespray release supports three Kubernetes minor versions
      • New minors are picked up one or two versions behind, once they have stabilized
      • Recommended pairings in practice
        • Dev: latest Kubespray + K8s N-1
        • Prod: latest Kubespray minus one + K8s N-2

  • Why Kubespray: because it is Ansible-based, any host reachable over SSH is easy to manage,
    and Kubernetes can be deployed on both public and air-gapped server environments.
  • Supports HA for the control plane and etcd.
  • Supports a client-side LB (the nginx-proxy pods seen earlier) that spreads connections across the API servers (with kubeadm you have to set this up yourself).
  • Provides automatic certificate renewal.
  • Ships best-practice settings in playbook form.
  • Supports a wide range of Linux distributions.

b. Lab environment

  1. Pre-flight setup
# Download the files
wget https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-kubespary/Vagrantfile
wget https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/k8s-kubespary/init_cfg.sh

## Check the files
ll
total 16
-rw-r--r--@ 1 howoo  staff   982B Jan 28 15:42 Vagrantfile
-rw-r--r--@ 1 howoo  staff   1.3K Jan 28 15:43 init_cfg.sh

## Deploy the lab environment
vagrant up
                                                                        .
                                                                        .
                                                                        .
    k8s-ctr: Running: /var/folders/s_/d0ls80f161x0q83j7lx_k5wh0000gn/T/vagrant-shell20260128-6889-zu7q2i.sh
    k8s-ctr: >>>> Initial Config Start <<<<
    k8s-ctr: [TASK 1] Change Timezone and Enable NTP
    k8s-ctr: [TASK 2] Disable firewalld and selinux
    k8s-ctr: [TASK 3] Disable and turn off SWAP & Delete swap partitions
    k8s-ctr: [TASK 4] Config kernel & module
    k8s-ctr: [TASK 5] Setting Local DNS Using Hosts file
    k8s-ctr: [TASK 6] Delete default routing - enp0s9 NIC
    k8s-ctr: >>>> Initial Config End <<<<
 howoo@ttokkang-ui-MacBookAir  ~/Desktop/work/Gasida_series/idc_k8s/k8s-kubespary  vagrant status
Current machine states:

k8s-ctr                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

## Settings after SSH login
vagrant status
Current machine states:

k8s-ctr                   running (virtualbox)
------------------------------
root@k8s-ctr:~# uname -a
Linux k8s-ctr 6.12.0-55.39.1.el10_0.aarch64 #1 SMP PREEMPT_DYNAMIC Wed Oct 15 11:18:23 EDT 2025 aarch64 GNU/Linux
root@k8s-ctr:~# which python  && python -V
/usr/bin/python
Python 3.12.9
root@k8s-ctr:~# which python3 && python3 -V
/usr/bin/python3
Python 3.12.9

root@k8s-ctr:~# dnf install -y python3-pip git
Rocky Linux 10 - BaseOS                      0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'baseos':
  - Curl error (6): Could not resolve hostname for https://mirrors.rockylinux.org/mirrorlist?arch=aarch64&repo=BaseOS-10 [Could not resolve host: mirrors.rockylinux.org]
Error: Failed to download metadata for repo 'baseos': Cannot prepare internal mirrorlist: Curl error (6): Could not resolve hostname for https://mirrors.rockylinux.org/mirrorlist?arch=aarch64&repo=BaseOS-10 [Could not resolve host: mirrors.rockylinux.org]
root@k8s-ctr:~# vim /etc/resolv.conf 
root@k8s-ctr:~# dnf install -y python3-pip git
Rocky Linux 10 - BaseOS                      531 kB/s |  12 MB     00:23    
Rocky Linux 10 - AppStream                   123 kB/s | 2.1 MB     00:17    
Rocky Linux 10 - Extras                      384  B/s | 6.2 kB     00:16    
Dependencies resolved.
                                                                .
                                                                .
                                                                .
Complete!

root@k8s-ctr:~# which pip  && pip -V
/usr/bin/pip
pip 23.3.2 from /usr/lib/python3.12/site-packages/pip (python 3.12)
root@k8s-ctr:~# which pip3 && pip3 -V
/usr/bin/pip3
pip 23.3.2 from /usr/lib/python3.12/site-packages/pip (python 3.12)

root@k8s-ctr:~# echo "root:qwe123" | chpasswd
root@k8s-ctr:~# cat << EOF >> /etc/ssh/sshd_config
PermitRootLogin yes
PasswordAuthentication yes
EOF
root@k8s-ctr:~# systemctl restart sshd

root@k8s-ctr:~# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:OdD1TkSOOxJl5wHu/WPmZQtqdOE3GS053Kjwxxvb8xU root@k8s-ctr
The key's randomart image is:
+---[RSA 3072]----+
|          =o=    |
|       . = B .   |
|      . o o =. +.|
|       . +.= .*.o|
|        S +o+o.Eo|
|         o oo+++.|
|          . .o*==|
|           ..++=+|
|          ..  ..+|
+----[SHA256]-----+

root@k8s-ctr:~# ls -al ~/.ssh/
total 8
drwx------. 2 root root   38 Jan 28 15:52 .
dr-xr-x---. 3 root root  119 Jan 28 15:49 ..
-rw-------. 1 root root 2602 Jan 28 15:52 id_rsa
-rw-r--r--. 1 root root  566 Jan 28 15:52 id_rsa.pub
root@k8s-ctr:~# ssh-copy-id -o StrictHostKeyChecking=no root@192.168.10.10
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.10.10's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh -o 'StrictHostKeyChecking=no' 'root@192.168.10.10'"
and check to make sure that only the key(s) you wanted were added.

root@k8s-ctr:~# cat /root/.ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfVlv8LgLkkE5XoaKF4C6cPFxHBU2SVTnC20NamU03ITKSdZl/T7TJrIF2UBt/P1lQgCB5LImQYJVY06nSygYgIQd7BBxeXvpZ0kgYA2sXn1FRsuu3feTaJZQ1dAee0ZhMJfL7JEAKSLyvdnynCbvOXwVcgvW8EnOA1U+DFdQBBKLlGlMC89YLVKAz9KRTArAM4XsFKlYYR6OPYTDderiNNITQMEiT6BpJE43P+ai1nnIjc2IOzWItsziSnROzPoedfQcNC9lbqyg/lco+5D+MCT32rcs1mxLdI1tvPSMC9RqNpEUNk5t1FRFl6Fn5PJ7fk7aOOpW3H74uoxNqmmXDcjBOnsnX9f+Igv4VPZkigYk/glMbxsTOfgUwVBSH39UaiW7JWdq+taa2VNf9QVf3Ucdde6mGg4V9HNqHzvP9B7deo4YSaSpAFzJd1Vwle9cQzc3tiMBPUOZRxM0NOjWaAux5k0iu+In++iFVeFcLDRvHN+2JSwiKONRPP1ofgY0= root@k8s-ctr
root@k8s-ctr:~# ssh root@192.168.10.10 hostname
k8s-ctr
root@k8s-ctr:~# ssh -o StrictHostKeyChecking=no root@k8s-ctr hostname
Warning: Permanently added 'k8s-ctr' (ED25519) to the list of known hosts.
k8s-ctr
root@k8s-ctr:~# ssh root@k8s-ctr hostname
k8s-ctr

# Environment setup
pip3 install -r /root/kubespray/requirements.txt
                                                    .
                                                    .
                                                    .
Downloading ansible-10.7.0-py3-none-any.whl (51.6 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 51.6/51.6 MB 6.1 MB/s eta 0:00:00
Downloading cryptography-46.0.2-cp311-abi3-manylinux_2_34_aarch64.whl (4.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.3/4.3 MB 6.7 MB/s eta 0:00:00
Downloading jmespath-1.0.1-py3-none-any.whl (20 kB)
Downloading netaddr-1.3.0-py3-none-any.whl (2.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 6.0 MB/s eta 0:00:00
Downloading ansible_core-2.17.14-py3-none-any.whl (2.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 5.6 MB/s eta 0:00:00
Downloading cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl (220 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 220.1/220.1 kB 6.5 MB/s eta 0:00:00
Downloading jinja2-3.1.6-py3-none-any.whl (134 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 134.9/134.9 kB 9.8 MB/s eta 0:00:00
Downloading resolvelib-1.0.1-py2.py3-none-any.whl (17 kB)
Downloading pycparser-3.0-py3-none-any.whl (48 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.2/48.2 kB 3.6 MB/s eta 0:00:00
Downloading markupsafe-3.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl (24 kB)
Installing collected packages: resolvelib, pycparser, netaddr, MarkupSafe, jmespath, jinja2, cffi, cryptography, ansible-core, ansible
Successfully installed MarkupSafe-3.0.3 ansible-10.7.0 ansible-core-2.17.14 cffi-2.0.0 cryptography-46.0.2 jinja2-3.1.6 jmespath-1.0.1 netaddr-1.3.0 pycparser-3.0 resolvelib-1.0.1
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv

root@k8s-ctr:~/kubespray# which ansible
/usr/local/bin/ansible
root@k8s-ctr:~/kubespray# ansible --version
ansible [core 2.17.14]
  config file = /root/kubespray/ansible.cfg
  configured module search path = ['/root/kubespray/library']
  ansible python module location = /usr/local/lib/python3.12/site-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.12.9 (main, Aug 14 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] (/usr/bin/python3)
  jinja version = 3.1.6
  libyaml = True
root@k8s-ctr:~/kubespray# pip list
Package                   Version
------------------------- -----------
ansible                   10.7.0
ansible-core              2.17.14
attrs                     23.2.0
                                                                .
                                                                .
                                                                .
  2. Deploying K8s with Kubespray
root@k8s-ctr:~/kubespray# cp -rfp /root/kubespray/inventory/sample /root/kubespray/inventory/mycluster
root@k8s-ctr:~/kubespray# tree inventory/mycluster/
inventory/mycluster/
├── group_vars
│   ├── all
│   │   ├── all.yml
                                        .
                                        .
                                        .

root@k8s-ctr:~/kubespray# cat << EOF > /root/kubespray/inventory/mycluster/inventory.ini
k8s-ctr ansible_host=192.168.10.10 ip=192.168.10.10

[kube_control_plane]
k8s-ctr

[etcd:children]
kube_control_plane

[kube_node]
k8s-ctr
EOF
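Before running any playbook, a quick Ansible connectivity check against the new inventory:

ansible -i inventory/mycluster/inventory.ini all -m ping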

# Adjust settings for the features under test
root@k8s-ctr:~/kubespray# sed -i 's|kube_network_plugin: calico|kube_network_plugin: flannel|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@k8s-ctr:~/kubespray# sed -i 's|kube_proxy_mode: ipvs|kube_proxy_mode: iptables|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@k8s-ctr:~/kubespray# sed -i 's|enable_nodelocaldns: true|enable_nodelocaldns: false|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@k8s-ctr:~/kubespray# sed -i 's|auto_renew_certificates: false|auto_renew_certificates: true|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
root@k8s-ctr:~/kubespray# sed -i 's|# auto_renew_certificates_systemd_calendar|auto_renew_certificates_systemd_calendar|g' inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
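The sed patterns above fail silently if a default changes between Kubespray releases, so it is worth confirming the substitutions landed:

grep -E "^(kube_network_plugin|kube_proxy_mode|enable_nodelocaldns|auto_renew_certificates):" inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml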

# Adjust flannel settings
root@k8s-ctr:~/kubespray# echo "flannel_interface: enp0s9" >> inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
root@k8s-ctr:~/kubespray# grep "^[^#]" inventory/mycluster/group_vars/k8s_cluster/k8s-net-flannel.yml
flannel_interface: enp0s9

root@k8s-ctr:~/kubespray# sed -i 's|helm_enabled: false|helm_enabled: true|g' inventory/mycluster/group_vars/k8s_cluster/addons.yml
root@k8s-ctr:~/kubespray# sed -i 's|metrics_server_enabled: false|metrics_server_enabled: true|g' inventory/mycluster/group_vars/k8s_cluster/addons.yml
root@k8s-ctr:~/kubespray# sed -i 's|node_feature_discovery_enabled: false|node_feature_discovery_enabled: true|g' inventory/mycluster/group_vars/k8s_cluster/addons.yml

root@k8s-ctr:~/kubespray# ls -al ./*.txt
-rw-r--r--. 1 root root   631 Jan 28 16:04 ./df-1.txt
-rw-r--r--. 1 root root  3241 Jan 28 16:04 ./findmnt-1.txt
-rw-r--r--. 1 root root  1459 Jan 28 16:04 ./ip_addr-1.txt
-rw-r--r--. 1 root root   181 Jan 28 15:54 ./requirements.txt
-rw-r--r--. 1 root root   696 Jan 28 16:04 ./ss-1.txt
-rw-r--r--. 1 root root 44424 Jan 28 16:04 ./sysctl-1.txt

## Deploy
ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml -e kube_version="1.33.3" --list-tasks # check the task list before deploying
ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml -e kube_version="1.33.3" | tee kubespray_install.log
                                                                .
                                                                .
                                                                .
download : Download_file | Download item -------------------------------- 7.34s
container-engine/nerdctl : Download_file | Download item ---------------- 7.22s
container-engine/runc : Download_file | Download item ------------------- 7.17s
  3. Aliases and shell completion
# Source the completion
source <(kubectl completion bash)
source <(kubeadm completion bash)

# Alias kubectl to k
alias k=kubectl
complete -o default -F __start_kubectl k

# k9s 설치 : https://github.com/derailed/k9s
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
wget https://github.com/derailed/k9s/releases/latest/download/k9s_linux_${CLI_ARCH}.tar.gz
tar -xzf k9s_linux_*.tar.gz
ls -al k9s
chown root:root k9s
mv k9s /usr/local/bin/
chmod +x /usr/local/bin/k9s
k9s

  4. Environment tuning and applying it
root@k8s-ctr:~/kubespray# sysctl fs.file-max
fs.file-max = 9223372036854775807
root@k8s-ctr:~/kubespray# cat /proc/sys/fs/file-max
9223372036854775807
root@k8s-ctr:~/kubespray# ulimit -n
1024
root@k8s-ctr:~/kubespray# systemctl show kubelet | grep LimitNOFILE
LimitNOFILE=524288
LimitNOFILESoft=1024

root@k8s-ctr:~/kubespray# cat << EOF >> inventory/mycluster/group_vars/all/containerd.yml
containerd_default_base_runtime_spec_patch:
  process:
    rlimits: []
EOF
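Setting rlimits: [] under containerd_default_base_runtime_spec_patch empties the rlimit list that Kubespray normally patches into containerd's base OCI runtime spec, so container processes instead inherit the containerd daemon's own file-descriptor limit — which is why the ubuntu pod below reports nofiles 1048576 rather than a 1024-style soft limit. The daemon-side limit can be read the same way as kubelet's above:

systemctl show containerd | grep LimitNOFILE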

ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "container-engine" --limit k8s-ctr -e kube_version="1.33.3"

root@k8s-ctr:~/kubespray# kubectl delete pod ubuntu
pod "ubuntu" deleted
root@k8s-ctr:~/kubespray# cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sh", "-c", "sleep infinity"]
    securityContext:
      privileged: true
EOF
pod/ubuntu created
root@k8s-ctr:~/kubespray# kubectl exec -it ubuntu -- sh -c 'ulimit -a'
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        8192
coredump(blocks)     unlimited
memory(kbytes)       unlimited
locked memory(kbytes) unlimited
process              unlimited
nofiles              1048576
vmemory(kbytes)      unlimited
locks                unlimited
rtprio               0

ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "container-engine" --list-tasks
                                            .
                                            .
                                            .
  play #15 (k8s_cluster): Apply resolv.conf changes now that cluster DNS is up    TAGS: []
    tasks:

## Check the script

                                            .
                                            .
                                            .
/registry/services/specs/node-feature-discovery/node-feature-discovery-master

compact_rev_key

root@k8s-ctr:~# etcdctl.sh member list -w table
+------------------+---------+-------+----------------------------+----------------------------+------------+
|        ID        | STATUS  | NAME  |         PEER ADDRS         |        CLIENT ADDRS        | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| a997582217e26c7f | started | etcd1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 |      false |
+------------------+---------+-------+----------------------------+----------------------------+------------+

# Verify what was installed
root@k8s-ctr:~/kubespray# cat sysctl-1.txt | grep net.ipv4.ip_local_reserved_ports
net.ipv4.ip_local_reserved_ports = 
root@k8s-ctr:~/kubespray# cat sysctl-2.txt | grep net.ipv4.ip_local_reserved_ports
net.ipv4.ip_local_reserved_ports = 30000-32767
root@k8s-ctr:~/kubespray# sysctl net.ipv4.ip_local_reserved_ports
net.ipv4.ip_local_reserved_ports = 30000-32767

# Check the maximum pods per node
root@k8s-ctr:~/kubespray# kubectl describe node
  kube-system                 metrics-server-7cd7f9897-f9ngp                    100m (2%)     100m (2%)   200Mi (6%)       200Mi (6%)     2d3h
  node-feature-discovery      node-feature-discovery-gc-6c9b8f4657-drclc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d3h
  node-feature-discovery      node-feature-discovery-master-6989794b78-gfvcx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d3h
  node-feature-discovery      node-feature-discovery-worker-q44fg               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2d3h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests        Limits
  --------           --------        ------
  cpu                920m (27%)      400m (11%)
  memory             349220Ki (11%)  1024288000 (33%)
  ephemeral-storage  0 (0%)          0 (0%)
  hugepages-1Gi      0 (0%)          0 (0%)
  hugepages-2Mi      0 (0%)          0 (0%)
  hugepages-32Mi     0 (0%)          0 (0%)
  hugepages-64Ki     0 (0%)          0 (0%)
Events:
  Type     Reason                   Age                From             Message
  ----     ------                   ----               ----             -------
  Normal   Starting                 10m                kube-proxy       
  Normal   Starting                 11m                kubelet          Starting kubelet.
  Warning  InvalidDiskCapacity      11m                kubelet          invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node k8s-ctr status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node k8s-ctr status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node k8s-ctr status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
  Warning  Rebooted                 10m                kubelet          Node k8s-ctr has been rebooted, boot id: 1926a28f-f34f-4605-b1bd-98e6b897d174
  Normal   RegisteredNode           10m                node-controller  Node k8s-ctr event: Registered Node k8s-ctr in Controller
root@k8s-ctr:~/kubespray# kubectl describe node | grep pods
  pods:               110
  pods:               110
  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods

  # Verify what was installed
root@k8s-ctr:~/kubespray# ls -al | grep block
root@k8s-ctr:~/kubespray# kubectl get pod -n kube-system -l tier=control-plane
NAME                              READY   STATUS    RESTARTS      AGE
kube-apiserver-k8s-ctr            1/1     Running   5 (14m ago)   2d3h
kube-controller-manager-k8s-ctr   1/1     Running   6 (14m ago)   2d3h
kube-scheduler-k8s-ctr            1/1     Running   5 (14m ago)   2d3h

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      Jan 26, 2036 07:13 UTC   9y              no      
front-proxy-ca          Jan 26, 2036 07:13 UTC   9y              no
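The table above is the certificate-authority section of kubeadm certs check-expiration output; since auto_renew_certificates: true was set in the inventory, Kubespray also installs a systemd renewal timer (named k8s-certs-renew.timer in current releases — treat the exact name as an assumption), which can be spotted with:

systemctl list-timers | grep -i certs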
  5. Invoke kubeadm and install a CNI
root@k8s-ctr:~/kubespray# tree roles/network_plugin/ -L 1
roles/network_plugin/
├── calico
├── calico_defaults
├── cilium
├── cni
├── custom_cni
├── flannel
├── kube-ovn
├── kube-router
├── macvlan
├── meta
├── multus
└── ovn4nfv

13 directories, 0 files

root@k8s-ctr:~/kubespray# tree roles/network_plugin/cni/
roles/network_plugin/cni/
├── defaults
│   └── main.yml
└── tasks
    └── main.yml

3 directories, 2 files
root@k8s-ctr:~/kubespray# tree roles/network_plugin/flannel/
roles/network_plugin/flannel/
├── defaults
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   ├── main.yml
│   └── reset.yml
└── templates
    ├── cni-flannel-rbac.yml.j2
    └── cni-flannel.yml.j2

5 directories, 6 files
  6. CoreDNS & dns-autoscaler
root@k8s-ctr:~/kubespray# kubectl get deployment -n kube-system coredns dns-autoscaler -o wide
NAME             READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                                       SELECTOR
coredns          1/1     1            1           2d4h   coredns      registry.k8s.io/coredns/coredns:v1.12.0                      k8s-app=kube-dns
dns-autoscaler   1/1     1            1           2d4h   autoscaler   registry.k8s.io/cpa/cluster-proportional-autoscaler:v1.8.8   k8s-app=dns-autoscaler

root@k8s-ctr:~/kubespray# kubectl describe cm -n kube-system coredns
Name:         coredns
Namespace:    kube-system
Labels:       addonmanager.kubernetes.io/mode=EnsureExists
Annotations:  <none>

Data
====
Corefile:
----
.:53 {
    errors {
    }
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
      pods insecure
      fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf {
      prefer_udp
      max_concurrent 1000
    }
    cache 30

    loop
    reload
    loadbalance
}

BinaryData
====

Events:  <none>

root@k8s-ctr:~/kubespray# kubectl describe cm -n kube-system dns-autoscaler
Name:         dns-autoscaler
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
linear:
----
{"coresPerReplica":256,"min":1,"nodesPerReplica":16,"preventSinglePointFailure":false}

BinaryData
====

Events:  <none>
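In linear mode the autoscaler computes replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)), clamped below by min (and by max if set). With the values above on this single-node lab, max(ceil(cores/256), ceil(1/16)) = 1, matching the single coredns replica seen earlier; it would take a 17th node (or 257 cores) before a second replica is scheduled.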

root@k8s-ctr:~/kubespray# tree /etc/kubernetes/addons/
/etc/kubernetes/addons/
├── metrics_server
│   ├── auth-delegator.yaml
│   ├── auth-reader.yaml
│   ├── metrics-apiservice.yaml
│   ├── metrics-server-deployment.yaml
│   ├── metrics-server-sa.yaml
│   ├── metrics-server-service.yaml
│   ├── resource-reader-clusterrolebinding.yaml
│   └── resource-reader.yaml
└── node_feature_discovery
    ├── nfd-api-crds.yaml
    ├── nfd-clusterrolebinding.yaml
    ├── nfd-clusterrole.yaml
    ├── nfd-gc.yaml
    ├── nfd-master-conf.yaml
    ├── nfd-master.yaml
    ├── nfd-ns.yaml
    ├── nfd-rolebinding.yaml
    ├── nfd-role.yaml
    ├── nfd-serviceaccount.yaml
    ├── nfd-service.yaml
    ├── nfd-topologyupdater-conf.yaml
    ├── nfd-worker-conf.yaml
    └── nfd-worker.yaml

3 directories, 22 files

root@k8s-ctr:~/kubespray# kubectl get pod -n kube-system -l app.kubernetes.io/name=metrics-server
NAME                             READY   STATUS    RESTARTS      AGE
metrics-server-7cd7f9897-f9ngp   1/1     Running   3 (86m ago)   2d4h

root@k8s-ctr:~/kubespray# kubectl top pod -A
NAMESPACE                NAME                                             CPU(cores)   MEMORY(bytes)   
default                  ubuntu                                           0m           2Mi             
kube-system              coredns-5d784884df-n4g5h                         4m           79Mi            
kube-system              dns-autoscaler-676999957f-r8xx6                  1m           42Mi            
kube-system              kube-apiserver-k8s-ctr                           43m          331Mi           
kube-system              kube-controller-manager-k8s-ctr                  16m          145Mi           
kube-system              kube-flannel-ds-arm64-n288c                      5m           64Mi            
kube-system              kube-proxy-z846p                                 1m           87Mi            
kube-system              kube-scheduler-k8s-ctr                           10m          87Mi            
kube-system              metrics-server-7cd7f9897-f9ngp                   4m           88Mi            
node-feature-discovery   node-feature-discovery-gc-6c9b8f4657-drclc       1m           48Mi            
node-feature-discovery   node-feature-discovery-master-6989794b78-gfvcx   1m           70Mi            
node-feature-discovery   node-feature-discovery-worker-q44fg              2m           55Mi     


Kubeadm deep dive

  • Kubeadm overview

    • kubeadm init

      Bootstraps (initially configures) the first Kubernetes control plane node.

    • kubeadm join

      Bootstraps a Kubernetes worker node or an additional control plane node
      and joins it to an existing Kubernetes cluster.

    • kubeadm upgrade

      Upgrades a Kubernetes cluster to a newer version.

    • kubeadm reset

      Reverts (cleans up) every change applied to this host by
      running kubeadm init or kubeadm join.

  • Kubeadm lab prep (Vagrantfile)

    
      PS C:\Users\bom\Desktop\스터디\week3> vagrant up
      Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
      Bringing machine 'k8s-w1' up with 'virtualbox' provider...
      Bringing machine 'k8s-w2' up with 'virtualbox' provider...
      ==> k8s-ctr: Box 'bento/rockylinux-10.0' could not be found. Attempting to find and install...
          k8s-ctr: Box Provider: virtualbox
          k8s-ctr: Box Version: 202510.26.0
    
        #########omitted#############
    
          k8s-w2: Inserting generated public key within guest...
          k8s-w2: Removing insecure key from the guest if it's present...
          k8s-w2: Key inserted! Disconnecting and reconnecting using new SSH key...
      ==> k8s-w2: Machine booted and ready!
      ==> k8s-w2: Checking for guest additions in VM...
      ==> k8s-w2: Setting hostname...
      ==> k8s-w2: Configuring and enabling network interfaces...
  1. Common pre-flight settings

    • Check basic info

        PS C:\Users\bom\Desktop\스터디\week3> vagrant ssh k8s-ctr
      
        This system is built by the Bento project by Chef Software
        More information can be found at https://github.com/chef/bento
      
        Use of this system is acceptance of the OS vendor EULA and License Agreements.
        vagrant@k8s-ctr:~$
      
        vagrant@k8s-ctr:~$ whoami
        vagrant
        vagrant@k8s-ctr:~$ id
        uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
        vagrant@k8s-ctr:~$ pwd
        /home/vagrant
      
        vagrant@k8s-ctr:~$ rpm -aq | grep release
        rocky-release-10.0-1.6.el10.noarch
      
    • Time/NTP setup: certificate expiry, log timestamps, and the like require synchronized clocks on every node.

        root@k8s-ctr:~# timedatectl status
                       Local time: Wed 2026-01-21 14:11:22 UTC
                   Universal time: Wed 2026-01-21 14:11:22 UTC
                         RTC time: Wed 2026-01-21 14:11:21
                        Time zone: UTC (UTC, +0000)
        System clock synchronized: yes
                      NTP service: active
                  RTC in local TZ: yes
      
        root@k8s-ctr:~# timedatectl set-timezone Asia/Seoul
        root@k8s-ctr:~# date
        Wed Jan 21 11:11:44 PM KST 2026
      
        root@k8s-ctr:~# chronyc sources -v
      
          .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
         / .- Source state '*' = current best, '+' = combined, '-' = not combined,
        | /             'x' = may be in error, '~' = too variable, '?' = unusable.
        ||                                                 .- xxxx [ yyyy ] +/- zzzz
        ||      Reachability register (octal) -.           |  xxxx = adjusted offset,
        ||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
        ||                                \     |          |  zzzz = estimated error.
        ||                                 |    |           \
        MS Name/IP address         Stratum Poll Reach LastRx Last sample
        ===============================================================================
        ^* time.ravnus.com               2   6   377    34    +85us[ +280us] +/- 2917us
        ^+ ec2-3-39-176-65.ap-north>     2   6   377    36   +383us[ +576us] +/- 5118us
        ^- 121.174.142.82                3   6   377    34  +1248us[+1248us] +/-   32ms
        ^- ipv4.ntp3.rbauman.com         2   6   377    56  +1540us[+1726us] +/-   18ms
      
    • Disable SELinux and firewalld

        root@k8s-ctr:~# getenforce
        Enforcing
        root@k8s-ctr:~# sestatus
        SELinux status:                 enabled
        SELinuxfs mount:                /sys/fs/selinux
        SELinux root directory:         /etc/selinux
        Loaded policy name:             targeted
        Current mode:                   enforcing
        Mode from config file:          enforcing
        Policy MLS status:              enabled
        Policy deny_unknown status:     allowed
        Memory protection checking:     actual (secure)
        Max kernel policy version:      33
        root@k8s-ctr:~# sestatus ^C
        root@k8s-ctr:~# setenforce 0
        root@k8s-ctr:~# sealert ^C
        root@k8s-ctr:~# sestatus
        SELinux status:                 enabled
        SELinuxfs mount:                /sys/fs/selinux
        SELinux root directory:         /etc/selinux
        Loaded policy name:             targeted
        Current mode:                   permissive
        Mode from config file:          enforcing
        Policy MLS status:              enabled
        Policy deny_unknown status:     allowed
        Memory protection checking:     actual (secure)
        Max kernel policy version:      33
        root@k8s-ctr:~# cat /etc/selinux/config | grep ^SELINUX
        SELINUX=enforcing
        SELINUXTYPE=targeted
      
        root@k8s-ctr:~# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
        cat /etc/selinux/config | grep ^SELINUX
        SELINUX=permissive
        SELINUXTYPE=targeted
      
        root@k8s-ctr:~# systemctl status firewalld
        ○ firewalld.service - firewalld - dynamic firewall daemon
             Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
             Active: inactive (dead)
               Docs: man:firewalld(1)
      
        Jan 21 23:02:55 localhost systemd[1]: Starting firewalld.service - firewalld - dynamic firewall daemon...
        Jan 21 23:02:56 localhost systemd[1]: Started firewalld.service - firewalld - dynamic firewall daemon.
        Jan 21 23:14:39 k8s-ctr systemd[1]: Stopping firewalld.service - firewalld - dynamic firewall daemon...
        Jan 21 23:14:39 k8s-ctr systemd[1]: firewalld.service: Deactivated successfully.
        Jan 21 23:14:39 k8s-ctr systemd[1]: Stopped firewalld.service - firewalld - dynamic firewall daemon.
        Jan 21 23:14:39 k8s-ctr systemd[1]: firewalld.service: Consumed 799ms CPU time, 69.6M memory peak.
    • Disable swap

        root@k8s-ctr:~# lsblk
        NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
        sda      8:0    0  64G  0 disk
        ├─sda1   8:1    0   1M  0 part
        ├─sda2   8:2    0   3G  0 part [SWAP]
        └─sda3   8:3    0  61G  0 part /
        root@k8s-ctr:~# swapoff -a
        root@k8s-ctr:~# lsblk
        NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
        sda      8:0    0  64G  0 disk
        ├─sda1   8:1    0   1M  0 part
        ├─sda2   8:2    0   3G  0 part
        └─sda3   8:3    0  61G  0 part /
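      Note that swapoff -a does not survive a reboot; for a permanent change the swap entry in /etc/fstab must also be disabled, e.g.:

        sed -i '/swap/ s/^/#/' /etc/fstab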
    • Kernel modules and kernel parameters

        root@k8s-ctr:~# lsmod | grep -iE 'overlay|br_netfilter'
        root@k8s-ctr:~# modprobe overlay
        modprobe br_netfilter
        root@k8s-ctr:~# lsmod | grep -iE 'overlay|br_netfilter'
        br_netfilter           36864  0
        bridge                417792  1 br_netfilter
        overlay               245760  0
      
        root@k8s-ctr:~# cat <<EOF | tee /etc/sysctl.d/k8s.conf
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        EOF
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        root@k8s-ctr:~# tree /etc/sysctl.d/
        /etc/sysctl.d/
        ├── 99-sysctl.conf -> ../sysctl.conf
        └── k8s.conf
      
        1 directory, 2 files
        root@k8s-ctr:~# sysctl --system
        * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
        * Applying /usr/lib/sysctl.d/10-map-count.conf ...
        * Applying /usr/lib/sysctl.d/50-coredump.conf ...
        * Applying /usr/lib/sysctl.d/50-default.conf ...
        * Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
        * Applying /usr/lib/sysctl.d/50-redhat.conf ...
        * Applying /etc/sysctl.d/99-sysctl.conf ...
        * Applying /etc/sysctl.d/k8s.conf ...
    • hosts file setup

        root@k8s-ctr:~# cat /etc/hosts
        # Loopback entries; do not change.
        # For historical reasons, localhost precedes localhost.localdomain:
        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        # See hosts(5) for proper format and other examples:
        # 192.168.1.10 foo.example.org foo
        # 192.168.1.13 bar.example.org bar
        192.168.10.100 k8s-ctr
        192.168.10.101 k8s-w1
        192.168.10.102 k8s-w2
  2. Common CRI setup (containerd)

    • Install containerd (runc) v2.1.5

      For a smooth lab run, we pin containerd to version 2.1.5.

        root@k8s-ctr:~# dnf repolist
        repo id                                             repo name
        appstream                                           Rocky Linux 10 - AppStream
        baseos                                              Rocky Linux 10 - BaseOS
        extras                                              Rocky Linux 10 - Extras
      
        root@k8s-ctr:~# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
        root@k8s-ctr:~# dnf repolist
        repo id                                                repo name
        appstream                                              Rocky Linux 10 - AppStream
        baseos                                                 Rocky Linux 10 - BaseOS
        docker-ce-stable                                       Docker CE Stable - x86_64
        extras                                                 Rocky Linux 10 - Extras
        root@k8s-ctr:~# dnf makecache
        Docker CE Stable - x86_64                                                               188 kB/s |  16 kB     00:00
        Rocky Linux 10 - BaseOS                                                                 8.2 MB/s | 7.6 MB     00:00
        Rocky Linux 10 - AppStream                                                              3.0 MB/s | 2.1 MB     00:00
        Rocky Linux 10 - Extras                                                                  11 kB/s | 5.9 kB     00:00
        Metadata cache created.
      
        root@k8s-ctr:~# dnf list --showduplicates containerd.io
        Last metadata expiration check: 0:00:15 ago on Wed 21 Jan 2026 11:30:36 PM KST.
        Available Packages
        containerd.io.x86_64                                  1.7.23-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.24-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.25-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.26-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.27-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.28-1.el10                                     docker-ce-stable
        containerd.io.x86_64                                  1.7.28-2.el10                                     docker-ce-stable
        containerd.io.x86_64                                  1.7.29-1.el10                                     docker-ce-stable
        containerd.io.x86_64                                  2.1.5-1.el10                                      docker-ce-stable
        containerd.io.x86_64                                  2.2.0-2.el10                                      docker-ce-stable
        containerd.io.x86_64                                  2.2.1-1.el10
      
        root@k8s-ctr:~# dnf install -y containerd.io-2.1.5-1.el10
        Last metadata expiration check: 0:00:28 ago on Wed 21 Jan 2026 11:30:36 PM KST.
        Dependencies resolved.
        ========================================================================================================================
         Package                      Architecture          Version                       Repository                       Size
        ========================================================================================================================
        Installing:
         containerd.io                x86_64                2.1.5-1.el10                  docker-ce-stable                 34 M
      
        Transaction Summary
        ========================================================================================================================
        Install  1 Package
      
        ######################omitted################################
      
        root@k8s-ctr:~# which runc && runc --version
        /usr/bin/runc
        runc version 1.3.3
        commit: v1.3.3-0-gd842d771
        spec: 1.2.1
        go: go1.24.9
        libseccomp: 2.5.3
      
        root@k8s-ctr:~# containerd config default | tee /etc/containerd/config.toml
        version = 3
        root = '/var/lib/containerd'
        state = '/run/containerd'
        temp = ''
        disabled_plugins = []
        required_plugins = []
        oom_score = 0
        imports = []
      
        root@k8s-ctr:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
        root@k8s-ctr:~# cat /etc/containerd/config.toml | grep -i systemdcgroup
                    SystemdCgroup = true
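      kubelet has defaulted to the systemd cgroup driver since v1.22, so containerd must agree; on a cgroup-v2 host like Rocky 10, a driver mismatch typically shows up as pods being killed and restarted. Once kubeadm init has run, the kubelet side can be cross-checked with grep cgroupDriver /var/lib/kubelet/config.yaml.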
      
      
    root@k8s-ctr:~# systemctl daemon-reload
    root@k8s-ctr:~# systemctl enable --now containerd
    Created symlink '/etc/systemd/system/multi-user.target.wants/containerd.service' → '/usr/lib/systemd/system/containerd.service'.
    root@k8s-ctr:~# systemctl status containerd --no-pager
    ● containerd.service - containerd container runtime
         Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: disabled)
         Active: active (running) since Wed 2026-01-21 23:33:28 KST; 5s ago
     Invocation: ef390d0ab56144b19688127d98415f72
           Docs: https://containerd.io

     root@k8s-ctr:~# containerd config dump | grep -n containerd.sock
    11:  address = '/run/containerd/containerd.sock'
    root@k8s-ctr:~# ss -xl | grep containerd
    u_str LISTEN 0      4096        /run/containerd/containerd.sock.ttrpc 20071            * 0
    u_str LISTEN 0      4096              /run/containerd/containerd.sock 20072            * 0
    root@k8s-ctr:~# ss -xnp | grep containerd
    u_str ESTAB 0      0                                                 * 20977            * 20069 users:(("containerd",pid=5439,fd=2),("containerd",pid=5439,fd=1))                                                                                                                                                                                                       
    root@k8s-ctr:~# ctr --address /run/containerd/containerd.sock version
    Client:
      Version:  v2.1.5
      Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
      Go version: go1.24.9

    Server:
      Version:  v2.1.5
      Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
      UUID: c0182a7f-b72c-4269-ba99-f2cf345cdfdc
    ```
  1. Install kubeadm, kubelet, and kubectl v1.32.11 (common to all nodes)

    • kubeadm, kubelet, kubectl

        root@k8s-ctr:~# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        EOF
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
      
        root@k8s-ctr:~# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
        Last metadata expiration check: 0:00:05 ago on Wed 21 Jan 2026 11:44:29 PM KST.
        Dependencies resolved.
        ========================================================================================================================
         Package                       Architecture          Version                            Repository                 Size
        ========================================================================================================================
        Installing:
         kubeadm                       x86_64                1.32.11-150500.1.1                 kubernetes                 12 M
         kubectl                       x86_64                1.32.11-150500.1.1                 kubernetes                 11 M
         kubelet                       x86_64                1.32.11-150500.1.1                 kubernetes                 15 M
        Installing dependencies:
         cri-tools                     x86_64                1.32.0-150500.1.1                  kubernetes                7.1 M
         kubernetes-cni                x86_64                1.6.0-150500.1.1                   kubernetes                8.0 M
      
        Transaction Summary
        ========================================================================================================================
        Install  5 Packages
      
        ################omitted###########################
      
        root@k8s-ctr:~# systemctl enable --now kubelet
        Created symlink '/etc/systemd/system/multi-user.target.wants/kubelet.service' → '/usr/lib/systemd/system/kubelet.service'.
        root@k8s-ctr:~# systemctl status kubelet
        ● kubelet.service - kubelet: The Kubernetes Node Agent
             Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
            Drop-In: /usr/lib/systemd/system/kubelet.service.d
                     └─10-kubeadm.conf
             Active: activating (auto-restart) (Result: exit-code) since Wed 2026-01-21 23:45:05 KST; 4s ago
         Invocation: 9290c9551a364306bdd2d324aca03c40
               Docs: https://kubernetes.io/docs/
            Process: 5703 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBEL>
           Main PID: 5703 (code=exited, status=1/FAILURE)
           Mem peak: 11.7M
                CPU: 79ms
      
        root@k8s-ctr:~# crictl info | jq
        {
          "cniconfig": {
            "Networks": [
              {
                "Config": {
                  "CNIVersion": "0.3.1",
                  "Name": "cni-loopback",
                  "Plugins": [
                    {
                      "Network": {
                        "ipam": {},
                        "type": "loopback"
                      },
                      "Source": "{\"type\":\"loopback\"}"
                    }
                  ],
                  "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
      
        ##############omitted############################
      
        root@k8s-ctr:~# systemctl is-active kubelet
        activating
        root@k8s-ctr:~# systemctl status kubelet --no-pager
        ● kubelet.service - kubelet: The Kubernetes Node Agent
             Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
            Drop-In: /usr/lib/systemd/system/kubelet.service.d
                     └─10-kubeadm.conf
             Active: activating (auto-restart) (Result: exit-code) since Wed 2026-01-21 23:46:17 KST; 3s ago
         Invocation: 7ad519af585240e48db775d8ae3a190d
               Docs: https://kubernetes.io/docs/
            Process: 5778 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
           Main PID: 5778 (code=exited, status=1/FAILURE)
           Mem peak: 13.4M
                CPU: 77ms
        root@k8s-ctr:~# journalctl -u kubelet --no-pager
        Jan 21 23:45:05 k8s-ctr systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
        Jan 21 23:45:05 k8s-ctr (kubelet)[5703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_KUBEADM_ARGS
        Jan 21 23:45:05 k8s-ctr kubelet[5703]: E0121 23:45:05.839407    5703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
      
        root@k8s-ctr:~# ls -l /run/containerd/containerd.sock
        srw-rw----. 1 root root 0 Jan 21 23:33 /run/containerd/containerd.sock
        root@k8s-ctr:~# ss -xl | grep containerd
        u_str LISTEN 0      4096        /run/containerd/containerd.sock.ttrpc 20071            * 0
        u_str LISTEN 0      4096              /run/containerd/containerd.sock 20072            * 0
        root@k8s-ctr:~# ss -xnp | grep containerd
        u_str ESTAB 0      0                                                 * 20977            * 20069 users:(("containerd",pid=5439,fd=2),("containerd",pid=5439,fd=1))  
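     Note that the kubelet crash-loop above is expected at this stage: /var/lib/kubelet/config.yaml is only generated later by kubeadm init/join. Until then, a useful sanity check is to point crictl at the containerd socket, mirroring the /etc/crictl.yaml step shown later for the worker nodes:

        # Point crictl at containerd's socket so it stops probing default endpoints
        cat << EOF > /etc/crictl.yaml
        runtime-endpoint: unix:///run/containerd/containerd.sock
        image-endpoint: unix:///run/containerd/containerd.sock
        EOF
        crictl version   # should report the same containerd v2.1.5 seen via ctr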
  2. Build the k8s cluster with kubeadm (important)

    • Run kubeadm init

      • Preflight checks: verify CRI connectivity, root privileges, and the minimum kubelet version

      • Generate security material: certificates and keys for control-plane communication under /etc/kubernetes/pki

      • Generate kubeconfigs: config files for the kubelet, controller-manager, scheduler, and admin

      • Deploy the control-plane components: kube-apiserver, controller-manager, scheduler, and etcd as Static Pods

      • Start the kubelet and wait until the API Server becomes healthy

      • Persist cluster settings: store the kubeadm ClusterConfiguration in the kubeadm-config ConfigMap

      • Mark the control-plane node: apply the control-plane label and the NoSchedule taint

      • Bootstrap setup: create the bootstrap token and configure TLS/RBAC/cluster-info for node joins

      • Install the essential add-ons: kube-proxy (DaemonSet) and CoreDNS

        ```bash
        root@k8s-ctr:~# cat kubeadm-init.yaml
        apiVersion: kubeadm.k8s.io/v1beta4
        kind: InitConfiguration
        bootstrapTokens:
        - token: "123456.1234567890123456"
          ttl: "0s"
          usages:
          - signing
          - authentication
        nodeRegistration:
          kubeletExtraArgs:
          - name: node-ip
            value: "192.168.10.100" # if unset, maps to 10.0.2.15
          criSocket: "unix:///run/containerd/containerd.sock"
        localAPIEndpoint:
          advertiseAddress: "192.168.10.100"
        ---
        apiVersion: kubeadm.k8s.io/v1beta4
        kind: ClusterConfiguration
        kubernetesVersion: "1.32.11"
        networking:
          podSubnet: "10.244.0.0/16"
          serviceSubnet: "10.96.0.0/16"

        root@k8s-ctr:~# kubeadm init --config="kubeadm-init.yaml"
        [init] Using Kubernetes version: v1.32.11
        [preflight] Running pre-flight checks
        [preflight] Pulling images required for setting up a Kubernetes cluster
        [preflight] This might take a minute or two, depending on the speed of your internet connection
        [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
        W0121 23:56:45.057914 6337 checks.go:843] detected that the sandbox image "" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.

        root@k8s-ctr:~# mkdir -p /root/.kube
        root@k8s-ctr:~# cp -i /etc/kubernetes/admin.conf /root/.kube/config
        root@k8s-ctr:~# chown $(id -u):$(id -g) /root/.kube/config
        root@k8s-ctr:~# kubectl cluster-info
        Kubernetes control plane is running at https://192.168.10.100:6443
        CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

        To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
        root@k8s-ctr:~# kubectl get node -owide
        NAME      STATUS     ROLES           AGE    VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                 CONTAINER-RUNTIME
        k8s-ctr   NotReady   control-plane   3m6s   v1.32.11   192.168.10.100   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5
        root@k8s-ctr:~# kubectl get pod -n kube-system -owide
        NAME                              READY   STATUS    RESTARTS   AGE    IP               NODE      NOMINATED NODE   READINESS GATES
        coredns-668d6bf9bc-n2xwv          0/1     Pending   0          3m3s   <none>           <none>    <none>           <none>
        coredns-668d6bf9bc-xsk2k          0/1     Pending   0          3m3s   <none>           <none>    <none>           <none>
        etcd-k8s-ctr                      1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-apiserver-k8s-ctr            1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-controller-manager-k8s-ctr   1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-proxy-9dpcs                  1/1     Running   0          3m3s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-scheduler-k8s-ctr            1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>

        root@k8s-ctr:~# kubectl -n kube-public get configmap cluster-info
        NAME           DATA   AGE
        cluster-info   2      3m58s
        root@k8s-ctr:~# kubectl -n kube-public get configmap cluster-info -o yaml
        apiVersion: v1
        data:
          jws-kubeconfig-123456: eyJhbGciOiJIUzI1NiIsImtpZCI6IjEyMzQ1NiJ9..h64eqq42z6muTTM3tEU5EEZaBcK8--j1gmg7rtEXyo0
          kubeconfig: |
            apiVersion: v1
            clusters:
            - cluster:
                certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQkRmelpiZXFoRDR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qRXhORFV4TkRSYUZ3MHpOakF4TVRreE5EVTJORFJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURHUFR2QnhTcFpKaDJncGdqeXpzZWhxZTR0VnhRend5Q0VVNXVSbzJ1ZlI5N21GeUpLa2VaRjVnYU0Kd0dsSlFZT2JUb0Z2cWNaZk5ZTzZ4SWlTUHdsTEJlYzFCa3Z3d2FQWDhaV1BSVjJHVGRpODc1U2Q2azA5bjVtNgpHOUgrdFVvbU1xY0d3dlU2RWhrYUxqTHVta3pXb1FXT2F3YjB6Q01NTkpjT2MwQkFScGVKTUxXYUI3RzNjYUJhCnh3d3pBS25GMUtNdlVqb2w4aEpaSUM4bGVMWDNzNnpuSUpRSDREVC9DcUZEYnEyM1BqWlFqaENGTkQxeUdtSTgKMVcyYUttTFJJWUVPYzUrYXV6NlByZUZ3TjVCU1JWNllIY1phMzJlSXE0S1k1TGkxdXAxdURReWtjWGFUN1Bvbwp0azY1OFpvWG5rRUdFL3M3NXo2ZGY2cjRiVVJuQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSb2FrV25YSHBwU0lwL05JMFE4eTJFOEJDQlpUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3p1ZFhjcU5FcApjSVI1YnJ0ajk1YUhCb0RtZ0NNUjYyaGlrMWdqVUtPaTR2L2tJaWVhUjRmN3hWSGhLNi8wK1M1eXpXVDA4WTVYClluenNuTllrbXQyc2lGWTlKMFFjL1U3cVlKemhleXlsdjZuMzBWMUVDN0pHMkF3WmFUekFyMUVVUEI5TmFGOGIKTFB1c1lSbWR5dnJUMjJzTUFadlZFd0dWZTgrYVFmcWNMZ1pmcW14dHh2WFhiTnRoOFozSWs2Qk9LYVBhQWFZKwpkSHltSDJLTWEvajhobEo5VG9sZFcvQURLY083Y1ZWU0NjcUY1SUFRNldLQTNwRVMzSENnTk5aa3MyUEN2R1NLCnQvelJBd2xUV0JxUWwvSmEraCtzZDQvdW1qR2JUcTZnaEx2V0tnQ09naU9kLzRZbUhFWkVqMWJuSzlmVE9EVGwKclNGSytHckVFUXZhCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
                server: https://192.168.10.100:6443
              name: ""
            contexts: null
            current-context: ""
            kind: Config
            preferences: {}
            users: null
        kind: ConfigMap
        metadata:
          creationTimestamp: "2026-01-21T14:57:19Z"
          name: cluster-info
          namespace: kube-public
          resourceVersion: "326"
          uid: 4f8df4c9-5dc4-4734-b1b7-e5a803feab7f
        ```
      
      
      
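     A config file like this can also be statically validated before running init; a minimal sketch, assuming a kubeadm release recent enough to ship the validate subcommand:

        # Checks API group/versions and field names without touching the cluster
        kubeadm config validate --config kubeadm-init.yaml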
- Convenience settings for day-to-day work

    ```bash

    root@k8s-ctr:~# alias k=kubectl
    root@k8s-ctr:~# complete -o default -F __start_kubectl k
    root@k8s-ctr:~# echo 'alias k=kubectl' >> /etc/profile
    root@k8s-ctr:~# echo 'complete -o default -F __start_kubectl k' >> /etc/profile
    root@k8s-ctr:~# k get node
    NAME      STATUS     ROLES           AGE     VERSION
    k8s-ctr   NotReady   control-plane   6m26s   v1.32.11
    root@k8s-ctr:~# dnf install -y 'dnf-command(config-manager)'
    Last metadata expiration check: 0:19:06 ago on Wed 21 Jan 2026 11:44:43 PM KST.
    Package dnf-plugins-core-4.7.0-8.el10.noarch is already installed.
    Dependencies resolved.
    ========================================================================================================================
     Package                                 Architecture          Version                      Repository             Size
    ========================================================================================================================
    Upgrading:
     dnf-plugins-core                        noarch                4.7.0-9.el10                 baseos                 43 k
     python3-dnf-plugins-core                noarch                4.7.0-9.el10                 baseos                315 k
     yum-utils                               noarch                4.7.0-9.el10                 baseos                 34 k

    Transaction Summary
    ========================================================================================================================
    Upgrade  3 Packages

    Total download size: 392 k
    Downloading Packages:
    (1/3): dnf-plugins-core-4.7.0-9.el10.noarch.rpm                                         1.2 MB/s |  43 kB     00:00
    (2/3): yum-utils-4.7.0-9.el10.noarch.rpm                                                869 kB/s |  34 kB     00:00
    (3/3): python3-dnf-plugins-core-4.7.0-9.el10.noarch.rpm                                 6.0 MB/s | 315 kB     00:00
    ------------------------------------------------------------------------------------------------------------------------
    Total                                                                                   729 kB/s | 392 kB     00:00
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                                                                1/1
      Upgrading        : python3-dnf-plugins-core-4.7.0-9.el10.noarch                                                   1/6
      Upgrading        : dnf-plugins-core-4.7.0-9.el10.noarch                                                           2/6
      Upgrading        : yum-utils-4.7.0-9.el10.noarch                                                                  3/6
      Cleanup          : yum-utils-4.7.0-8.el10.noarch                                                                  4/6
      Cleanup          : dnf-plugins-core-4.7.0-8.el10.noarch                                                           5/6
      Cleanup          : python3-dnf-plugins-core-4.7.0-8.el10.noarch                                                   6/6
      Running scriptlet: python3-dnf-plugins-core-4.7.0-8.el10.noarch                                                   6/6

    Upgraded:
      dnf-plugins-core-4.7.0-9.el10.noarch   python3-dnf-plugins-core-4.7.0-9.el10.noarch   yum-utils-4.7.0-9.el10.noarch

    Complete!
    root@k8s-ctr:~# dnf config-manager --add-repo https://kubecolor.github.io/packages/rpm/kubecolor.repo
    Adding repo from: https://kubecolor.github.io/packages/rpm/kubecolor.repo
    root@k8s-ctr:~# dnf repolist
    repo id                                                repo name
    appstream                                              Rocky Linux 10 - AppStream
    baseos                                                 Rocky Linux 10 - BaseOS
    docker-ce-stable                                       Docker CE Stable - x86_64
    extras                                                 Rocky Linux 10 - Extras
    kubecolor                                              packages for kubecolor
    kubernetes                                             Kubernetes
    root@k8s-ctr:~# dnf install -y kubecolor
    packages for kubecolor                                                                   18 kB/s | 949  B     00:00
    Dependencies resolved.
    ========================================================================================================================
     Package                      Architecture              Version                      Repository                    Size
    ========================================================================================================================
    Installing:
     kubecolor                    x86_64                    0.5.3-1                      kubecolor                    2.6 M

    Transaction Summary
    ========================================================================================================================
    Install  1 Package

    Total download size: 2.6 M
    Installed size: 5.9 M
    Downloading Packages:
    kubecolor_0.5.3_linux_amd64.rpm                                                         8.2 MB/s | 2.6 MB     00:00
    ------------------------------------------------------------------------------------------------------------------------
    Total                                                                                   8.1 MB/s | 2.6 MB     00:00
    ##########omitted#######

    root@k8s-ctr:~# kubecolor get node
    NAME      STATUS     ROLES           AGE     VERSION
    k8s-ctr   NotReady   control-plane   6m44s   v1.32.11
    root@k8s-ctr:~# alias kc=kubecolor
    root@k8s-ctr:~# echo 'alias kc=kubecolor' >> /etc/profile
    root@k8s-ctr:~# kc get node
    NAME      STATUS     ROLES           AGE     VERSION
    k8s-ctr   NotReady   control-plane   6m52s   v1.32.11
    root@k8s-ctr:~# kc describe node
    Name:               k8s-ctr
    Roles:              control-plane
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8s-ctr
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/control-plane=
                        node.kubernetes.io/exclude-from-external-load-balancers=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 21 Jan 2026 23:57:17 +0900
    Taints:             node-role.kubernetes.io/control-plane:NoSchedule
                        node.kubernetes.io/not-ready:NoSchedule
    Unschedulable:      false
    Lease:
      HolderIdentity:  k8s-ctr
      AcquireTime:     <unset>
      RenewTime:       Thu, 22 Jan 2026 00:04:11 +0900
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      MemoryPressure   False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
    Addresses:
      InternalIP:  192.168.10.100
      Hostname:    k8s-ctr
    Capacity:
      cpu:                4
      ephemeral-storage:  62374Mi
      hugepages-2Mi:      0
      memory:             3036932Ki
      pods:               110
    Allocatable:
      cpu:                4
      ephemeral-storage:  58863491385
      hugepages-2Mi:      0
      memory:             2934532Ki
      pods:               110
    System Info:
      Machine ID:                 fc9f882274fc4318b555010115a384ff
      System UUID:                f29c335e-dc4d-504d-b371-d8d01bebc7f7
      Boot ID:                    eb7b009f-47d6-4f8d-a944-97839286ddce
      Kernel Version:             6.12.0-55.39.1.el10_0.x86_64
      OS Image:                   Rocky Linux 10.0 (Red Quartz)
      Operating System:           linux
      Architecture:               amd64
      Container Runtime Version:  containerd://2.1.5
      Kubelet Version:            v1.32.11
      Kube-Proxy Version:         v1.32.11
    PodCIDR:                      10.244.0.0/24
    PodCIDRs:                     10.244.0.0/24
    Non-terminated Pods:          (5 in total)
      Namespace                   Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------                   ----                               ------------  ----------  ---------------  -------------  ---
      kube-system                 etcd-k8s-ctr                       100m (2%)     0 (0%)      100Mi (3%)       0 (0%)         6m51s
      kube-system                 kube-apiserver-k8s-ctr             250m (6%)     0 (0%)      0 (0%)           0 (0%)         6m51s
      kube-system                 kube-controller-manager-k8s-ctr    200m (5%)     0 (0%)      0 (0%)           0 (0%)         6m51s
      kube-system                 kube-proxy-9dpcs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
      kube-system                 kube-scheduler-k8s-ctr             100m (2%)     0 (0%)      0 (0%)           0 (0%)         6m51s
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests    Limits
      --------           --------    ------
      cpu                650m (16%)  0 (0%)
      memory             100Mi (3%)  0 (0%)
      ephemeral-storage  0 (0%)      0 (0%)
      hugepages-2Mi      0 (0%)      0 (0%)
    Events:
      Type     Reason                   Age    From             Message
      ----     ------                   ----   ----             -------
      Normal   Starting                 6m44s  kube-proxy
      Normal   Starting                 6m51s  kubelet          Starting kubelet.
      Warning  InvalidDiskCapacity      6m51s  kubelet          invalid capacity 0 on image filesystem
      Normal   NodeAllocatableEnforced  6m51s  kubelet          Updated Node Allocatable limit across pods
      Normal   NodeHasSufficientMemory  6m51s  kubelet          Node k8s-ctr status is now: NodeHasSufficientMemory
      Normal   NodeHasNoDiskPressure    6m51s  kubelet          Node k8s-ctr status is now: NodeHasNoDiskPressure
      Normal   NodeHasSufficientPID     6m51s  kubelet          Node k8s-ctr status is now: NodeHasSufficientPID
      Normal   RegisteredNode           6m46s  node-controller  Node k8s-ctr event: Registered Node k8s-ctr in Controller
    root@k8s-ctr:~# dnf install -y git
    Last metadata expiration check: 0:00:18 ago on Thu 22 Jan 2026 12:03:57 AM KST.
    Dependencies resolved.
    ========================================================================================================================
     Package                         Architecture          Version                           Repository                Size
    ========================================================================================================================
    Installing:
     git                             x86_64                2.47.3-1.el10                     appstream                 50 k
    Installing dependencies:
     git-core                        x86_64                2.47.3-1.el10                     appstream                4.8 M
     git-core-doc                    noarch                2.47.3-1.el10                     appstream                3.1 M
     perl-Error                      noarch                1:0.17029-18.el10                 appstream                 40 k
     perl-File-Find                  noarch                1.44-512.2.el10_0                 appstream                 25 k
     perl-Git                        noarch                2.47.3-1.el10                     appstream                 37 k
     perl-TermReadKey                x86_64                2.38-24.el10                      appstream                 36 k
     perl-lib                        x86_64                0.65-512.2.el10_0                 appstream                 15 k

    Transaction Summary
    ========================================================================================================================
    Install  8 Packages

    ###################omitted#############################

    root@k8s-ctr:~# git clone https://github.com/ahmetb/kubectx /opt/kubectx
    Cloning into '/opt/kubectx'...
    remote: Enumerating objects: 1540, done.
    remote: Counting objects: 100% (469/469), done.
    remote: Compressing objects: 100% (110/110), done.
    remote: Total 1540 (delta 407), reused 360 (delta 359), pack-reused 1071 (from 2)
    #######################omitted####################

    root@k8s-ctr:~# cat << "EOT" >> /root/.bash_profile
    source /root/kube-ps1/kube-ps1.sh
    KUBE_PS1_SYMBOL_ENABLE=true
    function get_cluster_short() {
      echo "$1" | cut -d . -f1
    }
    KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
    KUBE_PS1_SUFFIX=') '
    PS1='$(kube_ps1)'$PS1
    EOT
    ```
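    Note: the profile snippet above sources /root/kube-ps1/kube-ps1.sh, but only the kubectx clone is shown; a sketch of the omitted clone step, assuming the standard upstream repository:

    ```bash
    git clone https://github.com/jonmosco/kube-ps1 /root/kube-ps1
    source /root/.bash_profile   # prompt should now show (⎈|cluster:namespace)
    ```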

    ![image.png](attachment:f67d0bbe-b15c-4937-8fc0-b7a7bc377d9e:image.png)

- Install the Flannel CNI (be sure to verify the flannel interface)

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe pod -n kube-system kube-controller-manager-k8s-ctr
    Name:                 kube-controller-manager-k8s-ctr
    Namespace:            kube-system
    Priority:             2000001000
    Priority Class Name:  system-node-critical
    Node:                 k8s-ctr/192.168.10.100
    Start Time:           Wed, 21 Jan 2026 23:57:20 +0900
    Labels:               component=kube-controller-manager
                          tier=control-plane
    Annotations:          kubernetes.io/config.hash: 7314ab3f0ec6401c196ca943fad44a05
                          kubernetes.io/config.mirror: 7314ab3f0ec6401c196ca943fad44a05
                          kubernetes.io/config.seen: 2026-01-21T23:57:20.682508931+09:00
                          kubernetes.io/config.source: file
    Status:               Running
    SeccompProfile:       RuntimeDefault
    IP:                   192.168.10.100
    IPs:
      IP:           192.168.10.100
    Controlled By:  Node/k8s-ctr

    (⎈|HomeLab:default) root@k8s-ctr:~# helm repo add flannel https://flannel-io.github.io/flannel
    "flannel" has been added to your repositories
    (⎈|HomeLab:default) root@k8s-ctr:~#
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl create namespace kube-flannel
    namespace/kube-flannel created
    (⎈|HomeLab:default) root@k8s-ctr:~# cat << EOF > flannel.yaml
    podCidr: "10.244.0.0/16"
    flannel:
      cniBinDir: "/opt/cni/bin"
      cniConfDir: "/etc/cni/net.d"
      args:
      - "--ip-masq"
      - "--kube-subnet-mgr"
      **- "--iface=enp0s9"**
      backend: "vxlan"
    EOF
    (⎈|HomeLab:default) root@k8s-ctr:~# helm install flannel flannel/flannel --namespace kube-flannel --version 0.27.3 -f flannel.yaml
    NAME: flannel
    LAST DEPLOYED: Thu Jan 22 00:14:31 2026
    NAMESPACE: kube-flannel
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None

    ########### The running flannel was bound to enp0s8 instead of "--iface=enp0s9", so re-deploying ########
    (⎈|HomeLab:default) root@k8s-ctr:~# helm upgrade flannel flannel/flannel   -n kube-flannel   -f flannel.yaml
    Release "flannel" has been upgraded. Happy Helming!
    NAME: flannel
    LAST DEPLOYED: Thu Jan 22 00:24:14 2026
    NAMESPACE: kube-flannel
    STATUS: deployed
    REVISION: 2
    TEST SUITE: None

    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -n kube-system -owide
    NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
    coredns-668d6bf9bc-n2xwv          1/1     Running   0          28m   10.244.0.3       k8s-ctr   <none>           <none>
    coredns-668d6bf9bc-xsk2k          1/1     Running   0          28m   10.244.0.2       k8s-ctr   <none>           <none>
    ```
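    To confirm flannel really bound enp0s9 after the upgrade, check the VXLAN device and the daemon logs; a quick sketch (the app=flannel pod label follows the upstream chart, so treat it as an assumption):

    ```bash
    ip -d link show flannel.1                     # VXLAN device created by flannel
    kubectl -n kube-flannel logs -l app=flannel --tail=100 | grep -i "using interface"
    ```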

- Check node info and dump the baseline environment

    Installing kubelet/kubeadm changes some kernel parameters.

    e.g. kernel.panic changed from 0 to 10

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# systemctl is-active kubelet
    active
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe node
    Name:               k8s-ctr
    Roles:              control-plane
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8s-ctr
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/control-plane=
                        node.kubernetes.io/exclude-from-external-load-balancers=
    Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"3a:76:87:a6:2d:bf"}
                        flannel.alpha.coreos.com/backend-type: vxlan
                        flannel.alpha.coreos.com/kube-subnet-manager: true
                        flannel.alpha.coreos.com/public-ip: 192.168.10.100
                        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 21 Jan 2026 23:57:17 +0900
    Taints:             node-role.kubernetes.io/control-plane:NoSchedule

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/sysconfig/kubelet
    tree /etc/kubernetes  | tee -a etc_kubernetes-2.txt
    tree /var/lib/kubelet | tee -a var_lib_kubelet-2.txt
    tree /run/containerd/ -L 3 | tee -a run_containerd-2.txt
    pstree -alnp | tee -a pstree-2.txt
    systemd-cgls --no-pager | tee -a systemd-cgls-2.txt
    lsns | tee -a lsns-2.txt
    ip addr | tee -a ip_addr-2.txt
    ss -tnlp | tee -a ss-2.txt
    df -hT | tee -a df-2.txt
    findmnt | tee -a findmnt-2.txt
    sysctl -a | tee -a sysctl-2.txt
    ```
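    To see the kernel-parameter change mentioned above directly, check the live values; the kubelet enforces these defaults at startup (kernel.panic=10, kernel.panic_on_oops=1, vm.overcommit_memory=1):

    ```bash
    # Before the kubelet ran, kernel.panic was 0 on this box
    sysctl kernel.panic kernel.panic_on_oops vm.overcommit_memory
    ```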

- Check certificates

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system kubeadm-config
    Name:         kubeadm-config
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  <none>

    Data
    ====
    ClusterConfiguration:
    ----
    apiServer: {}
    apiVersion: kubeadm.k8s.io/v1beta4
    caCertificateValidityPeriod: 87600h0m0s
    certificateValidityPeriod: 8760h0m0s
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    encryptionAlgorithm: RSA-2048
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.32.11
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/16
    proxy: {}
    scheduler: {}

    BinaryData
    ====

    (⎈|HomeLab:default) root@k8s-ctr:~# kubeadm certs check-expiration
    [check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
    [check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

    CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
    admin.conf                 Jan 21, 2027 14:56 UTC   364d            ca                      no
    apiserver                  Jan 21, 2027 14:56 UTC   364d            ca                      no
    apiserver-etcd-client      Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    apiserver-kubelet-client   Jan 21, 2027 14:56 UTC   364d            ca                      no
    controller-manager.conf    Jan 21, 2027 14:56 UTC   364d            ca                      no
    etcd-healthcheck-client    Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    etcd-peer                  Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    etcd-server                Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    front-proxy-client         Jan 21, 2027 14:56 UTC   364d            front-proxy-ca          no
    scheduler.conf             Jan 21, 2027 14:56 UTC   364d            ca                      no
    super-admin.conf           Jan 21, 2027 14:56 UTC   364d            ca                      no

    CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
    ca                      Jan 19, 2036 14:56 UTC   9y              no
    etcd-ca                 Jan 19, 2036 14:56 UTC   9y              no
    front-proxy-ca          Jan 19, 2036 14:56 UTC   9y              no
    ```
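    If the one-year leaf certificates above ever approach expiry, kubeadm can rotate them in place; the control-plane static pods then need a restart to pick up the new certificates:

    ```bash
    kubeadm certs renew all
    kubeadm certs check-expiration   # residual time should reset to ~364d
    ```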

- Check kubeconfigs

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/admin.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data:
    ##############omitted###################
    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/super-admin.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data:
    ##############omitted###################
    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/controller-manager.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data:
    ##############omitted###################

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/pki/kubelet.crt | openssl x509 -text -noout
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 4215227672604660729 (0x3a7f7f3c28ff53f9)
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: CN=k8s-ctr-ca@1769007433
            Validity
                Not Before: Jan 21 13:57:13 2026 GMT
                Not After : Jan 21 13:57:13 2027 GMT
            Subject: CN=k8s-ctr@1769007433
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:

     (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/pki/kubelet-client-current.pem | openssl x509 -text -noout
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 5381870817680008066 (0x4ab03efe8adba382)
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: CN=kubernetes
            Validity
                Not Before: Jan 21 14:51:44 2026 GMT
                Not After : Jan 21 14:56:44 2027 GMT
            Subject: O=system:nodes, CN=system:node:k8s-ctr
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
    ```
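    The client certificates embedded in these kubeconfig files can be decoded the same way as the kubelet PKI files, e.g. for admin.conf:

    ```bash
    # Extract, base64-decode, and print subject and expiry of the admin client cert
    grep 'client-certificate-data' /etc/kubernetes/admin.conf \
      | awk '{print $2}' | base64 -d \
      | openssl x509 -noout -subject -enddate
    ```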

- Check the static pods: etcd, kube-apiserver, kube-scheduler, kube-controller-manager

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# tree /etc/kubernetes/manifests/
    /etc/kubernetes/manifests/
    ├── etcd.yaml
    ├── kube-apiserver.yaml
    ├── kube-controller-manager.yaml
    └── kube-scheduler.yaml

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.100 --pod-infra-container-image=registry.k8s.io/pause:3.10"

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/manifests/etcd.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.10.100:2379
      creationTimestamp: null
      labels:
        component: etcd
        tier: control-plane
      name: etcd
      namespace: kube-system
    spec:
      containers:
      - command:
        - etcd
        - --advertise-client-urls=https://192.168.10.100:2379
        - --cert-file=/etc/kubernetes/pki/etcd/server.crt

     (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get svc,ep
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   41m

    NAME                   ENDPOINTS             AGE
    endpoints/kubernetes   192.168.10.100:6443   41m

    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
    k8s-ctr 10.244.0.0/24
    ```
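    Because these are static pods, the kubelet watches the manifest directory itself; moving a manifest out stops the pod, and restoring it recreates it, which is an easy way to observe the mechanism:

    ```bash
    mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
    kubectl get pod -n kube-system | grep scheduler   # disappears after a few seconds
    mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/
    kubectl get pod -n kube-system | grep scheduler   # comes back
    ```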

- Verify the essential add-ons

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get deploy -n kube-system coredns -owide
    NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                    SELECTOR
    coredns   2/2     2            2           41m   coredns      registry.k8s.io/coredns/coredns:v1.11.3   k8s-app=kube-dns
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
    NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
    coredns-668d6bf9bc-n2xwv   1/1     Running   0          41m   10.244.0.3   k8s-ctr   <none>           <none>
    coredns-668d6bf9bc-xsk2k   1/1     Running   0          41m   10.244.0.2   k8s-ctr   <none>           <none>
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get svc,ep -n kube-system
    NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   41m

    NAME                 ENDPOINTS                                               AGE
    endpoints/kube-dns   10.244.0.2:53,10.244.0.3:53,10.244.0.2:53 + 3 more...   41m
    (⎈|HomeLab:default) root@k8s-ctr:~# curl -s http://10.96.0.10:9153/metrics | head
    # HELP coredns_build_info A metric with a constant '1' value labeled by version, revision, and goversion from which CoreDNS was built.
    # TYPE coredns_build_info gauge
    coredns_build_info{goversion="go1.21.11",revision="a6338e9",version="1.11.3"} 1
    # HELP coredns_cache_entries The number of elements in the cache.
    # TYPE coredns_cache_entries gauge
    coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 1
    coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 0
    # HELP coredns_cache_misses_total The count of cache misses. Deprecated, derive misses from cache hits/requests counters.
    # TYPE coredns_cache_misses_total counter
    coredns_cache_misses_total{server="dns://:53",view="",zones="."} 1

    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system coredns
    Name:         coredns
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  <none>

    Events:  <none>

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/resolv.conf
    # Generated by NetworkManager
    nameserver 8.8.8.8
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get ds -n kube-system -owide
    NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                                SELECTOR
    kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   43m   kube-proxy   registry.k8s.io/kube-proxy:v1.32.11   k8s-app=kube-proxy
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-proxy -owide
    NAME               READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
    kube-proxy-9dpcs   1/1     Running   0          42m   192.168.10.100   k8s-ctr   <none>           <none>
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system kube-proxy
    Name:         kube-proxy
    Namespace:    kube-system
    Labels:       app=kube-proxy
    Annotations:  kubeadm.kubernetes.io/component-config.hash: sha256:cdf765c8ace05d9c91a233c33ad96de755530f97919a928be185843e99db7bd7

    Data
    ====
    config.conf:
    ----
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0

    ====

    Events:  <none>
    (⎈|HomeLab:default) root@k8s-ctr:~# curl 127.0.0.1:10249/healthz ; echo
    ok

    (⎈|HomeLab:default) root@k8s-ctr:~# dnf install -y conntrack-tools
    Last metadata expiration check: 0:37:12 ago on Thu 22 Jan 2026 12:03:57 AM KST.
    Dependencies resolved.
    ========================================================================================================================
     Package                              Architecture         Version                        Repository               Size
    ========================================================================================================================
    Installing:
     conntrack-tools                      x86_64               1.4.8-3.el10                   appstream               235 k
    Installing dependencies:
     libnetfilter_cthelper                x86_64               1.0.1-1.el10                   appstream                23 k
     libnetfilter_cttimeout               x86_64               1.0.0-27.el10                  appstream                23 k
     libnetfilter_queue                   x86_64               1.0.5-9.el10                   appstream                28 k

    Transaction Summary
    ========================================================================================================================
    Install  4 Packages

    ######omitted#######
    ```
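    A simple end-to-end check that CoreDNS and kube-proxy are wired up is to resolve the cluster domain from a throwaway pod; it needs a schedulable node, so run it after the workers join (the busybox image choice is arbitrary):

    ```bash
    kubectl run dnstest --rm -it --restart=Never --image=busybox:1.36 \
      -- nslookup kubernetes.default.svc.cluster.local
    ```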
  1. Configure k8s-w1 and k8s-w2

    • Pre-configuration

        Connect to each worker and apply the same settings as on the control plane
        PS C:\Users\bom\Desktop\스터디\week3> vagrant ssh k8s-w1
      
        This system is built by the Bento project by Chef Software
        More information can be found at https://github.com/chef/bento
      
        Use of this system is acceptance of the OS vendor EULA and License Agreements.
        vagrant@k8s-w1:~$ echo "sudo su -" >> /home/vagrant/.bashrc
        vagrant@k8s-w1:~$ sudo su -
        root@k8s-w1:~# timedatectl set-local-rtc 0
        root@k8s-w1:~# timedatectl set-timezone Asia/Seoul
        root@k8s-w1:~# setenforce 0
        root@k8s-w1:~# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
        root@k8s-w1:~# systemctl disable --now firewalld
        Removed '/etc/systemd/system/multi-user.target.wants/firewalld.service'.
        Removed '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'.
        root@k8s-w1:~# swapoff -a
        root@k8s-w1:~# sed -i '/swap/d' /etc/fstab
        root@k8s-w1:~# modprobe overlay
        root@k8s-w1:~# modprobe br_netfilter
        root@k8s-w1:~# cat <<EOF | tee /etc/modules-load.d/k8s.conf
        overlay
        br_netfilter
        EOF
        overlay
        br_netfilter
        root@k8s-w1:~# cat <<EOF | tee /etc/sysctl.d/k8s.conf
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        EOF
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        root@k8s-w1:~# sysctl --system >/dev/null 2>&1
        root@k8s-w1:~# sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts
        cat << EOF >> /etc/hosts
        192.168.10.100 k8s-ctr
        192.168.10.101 k8s-w1
        192.168.10.102 k8s-w2
        EOF
    • Install the CRI (containerd)

        root@k8s-w2:~# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
        root@k8s-w2:~# dnf install -y containerd.io-2.1.5-1.el10
        Docker CE Stable - x86_64                                                               282 kB/s |  16 kB     00:00
        Dependencies resolved.
        ========================================================================================================================
         Package                      Architecture          Version                       Repository                       Size
        ========================================================================================================================
        Installing:
         containerd.io                x86_64                2.1.5-1.el10                  docker-ce-stable                 34 M
      
        Transaction Summary
        ========================================================================================================================
        Install  1 Package
      
        root@k8s-w2:~# containerd config default | tee /etc/containerd/config.toml
        version = 3
        root = '/var/lib/containerd'
        state = '/run/containerd'
        temp = ''
        disabled_plugins = []
        required_plugins = []
        oom_score = 0
        imports = []
        ###########omitted###########
      
        root@k8s-w2:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
        root@k8s-w2:~# systemctl daemon-reload
        root@k8s-w2:~# systemctl enable --now containerd
        Created symlink '/etc/systemd/system/multi-user.target.wants/containerd.service' → '/usr/lib/systemd/system/containerd.service'.
      


    • Install kubeadm, kubelet, and kubectl

        root@k8s-w2:~# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        EOF
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        root@k8s-w2:~# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
        Kubernetes                                                                               38 kB/s |  19 kB     00:00
        Dependencies resolved.
        ========================================================================================================================
         Package                       Architecture          Version                            Repository                 Size
        ========================================================================================================================
        Installing:
         kubeadm                       x86_64                1.32.11-150500.1.1                 kubernetes                 12 M
         kubectl                       x86_64                1.32.11-150500.1.1                 kubernetes                 11 M
         kubelet                       x86_64                1.32.11-150500.1.1                 kubernetes                 15 M
        Installing dependencies:
         cri-tools                     x86_64                1.32.0-150500.1.1                  kubernetes                7.1 M
         kubernetes-cni                x86_64                1.6.0-150500.1.1                   kubernetes                8.0 M
      
        Transaction Summary
        ========================================================================================================================
        Install  5 Packages
      
        root@k8s-w2:~# systemctl enable --now kubelet
        Created symlink '/etc/systemd/system/multi-user.target.wants/kubelet.service' → '/usr/lib/systemd/system/kubelet.service'.
        root@k8s-w2:~# cat << EOF > /etc/crictl.yaml
        runtime-endpoint: unix:///run/containerd/containerd.sock
        image-endpoint: unix:///run/containerd/containerd.sock
        EOF
    • Join the cluster with kubeadm

        root@k8s-w2:~# crictl images
        crictl ps
        cat /etc/sysconfig/kubelet
        tree /etc/kubernetes  | tee -a etc_kubernetes-1.txt
        tree /var/lib/kubelet | tee -a var_lib_kubelet-1.txt
        tree /run/containerd/ -L 3 | tee -a run_containerd-1.txt
        pstree -alnp | tee -a pstree-1.txt
        systemd-cgls --no-pager | tee -a systemd-cgls-1.txt
        lsns | tee -a lsns-1.txt
        ip addr | tee -a ip_addr-1.txt
        ss -tnlp | tee -a ss-1.txt
        df -hT | tee -a df-1.txt
        findmnt | tee -a findmnt-1.txt
        sysctl -a | tee -a sysctl-1.txt
      
        root@k8s-w2:~# NODEIP=$(ip -4 addr show enp0s8 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
        root@k8s-w2:~# cat << EOF > kubeadm-join.yaml
        apiVersion: kubeadm.k8s.io/v1beta4
        kind: JoinConfiguration
        discovery:
          bootstrapToken:
            token: "123456.1234567890123456"
            apiServerEndpoint: "192.168.10.100:6443"
            unsafeSkipCAVerification: true
        nodeRegistration:
          criSocket: "unix:///run/containerd/containerd.sock"
          kubeletExtraArgs:
            - name: node-ip
              value: "$NODEIP"
        EOF
      
        root@k8s-w2:~# kubeadm join --config="kubeadm-join.yaml"
        [preflight] Running pre-flight checks
        [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
        [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
        [kubelet-start] Starting the kubelet
        [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
        [kubelet-check] The kubelet is healthy after 505.002436ms
        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
      
        This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
      
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
      
        root@k8s-w2:~# curl -s -k https://192.168.10.100:6443/api/v1/namespaces/kube-public/configmaps/cluster-info | jq
        {
          "kind": "ConfigMap",
          "apiVersion": "v1",
          "metadata": {
            "name": "cluster-info",
            "namespace": "kube-public",
            "uid": "4f8df4c9-5dc4-4734-b1b7-e5a803feab7f",
    • Check k8s-w1/w2 status

        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get node -owide
      
        NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                 CONTAINER-RUNTIME
        k8s-ctr   Ready    control-plane   2d    v1.32.11   192.168.10.100   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5
        k8s-w1    Ready    <none>          58s   v1.32.11   192.168.10.101   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5
        k8s-w2    Ready    <none>          53s   v1.32.11   192.168.10.102   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5
      
        (⎈|HomeLab:default) root@k8s-ctr:~# kc describe node k8s-w2
        Name:               k8s-w2
        Roles:              <none>
        Labels:             beta.kubernetes.io/arch=amd64
                            beta.kubernetes.io/os=linux
                            kubernetes.io/arch=amd64
                            kubernetes.io/hostname=k8s-w2
                            kubernetes.io/os=linux
        Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"d2:83:ae:e6:6e:a0"}
                            flannel.alpha.coreos.com/backend-type: vxlan
                            flannel.alpha.coreos.com/kube-subnet-manager: true
                            flannel.alpha.coreos.com/public-ip: 192.168.10.102
                            kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                            node.alpha.kubernetes.io/ttl: 0
                            volumes.kubernetes.io/controller-managed-attach-detach: true
        CreationTimestamp:  Fri, 23 Jan 2026 23:56:42 +0900
        Taints:             <none>
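
      The workers join with ROLES <none> and no taints; optionally label them so kubectl get node shows a worker role (purely cosmetic):

        kubectl label node k8s-w1 node-role.kubernetes.io/worker=
        kubectl label node k8s-w2 node-role.kubernetes.io/worker=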
  2. Install monitoring tools

    • Install metrics-server

        (⎈|HomeLab:default) root@k8s-ctr:~# helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
        "metrics-server" has been added to your repositories
        (⎈|HomeLab:default) root@k8s-ctr:~# helm upgrade --install metrics-server metrics-server/metrics-server --set 'args[0]=--kubelet-insecure-tls' -n kube-system
        Release "metrics-server" does not exist. Installing it now.
        NAME: metrics-server
        LAST DEPLOYED: Sat Jan 24 00:14:07 2026
        NAMESPACE: kube-system
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        NOTES:
        ***********************************************************************
        * Metrics Server                                                      *
        ***********************************************************************
          Chart version: 3.13.0
          App version:   0.8.0
          Image tag:     registry.k8s.io/metrics-server/metrics-server:v0.8.0
        ***********************************************************************
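
      Once the metrics-server pod is Ready, the aggregated metrics API should become available; a quick check:

        kubectl get apiservices v1beta1.metrics.k8s.io   # Available should be True
        kubectl top node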
    • Install kube-prometheus-stack

        (⎈|HomeLab:default) root@k8s-ctr:~# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      
        (⎈|HomeLab:default) root@k8s-ctr:~# helm list -n monitoring
        NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
        kube-prometheus-stack   monitoring      1               2026-01-24 00:14:59.530116612 +0900 KST deployed        kube-prometheus-stack-80.13.3   v0.87.1
        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod,svc,ingress,pvc -n monitoring
        NAME                                                            READY   STATUS              RESTARTS   AGE
        pod/kube-prometheus-stack-grafana-5cb7c586f9-7ntdf              0/3     ContainerCreating   0          18s
        pod/kube-prometheus-stack-kube-state-metrics-7846957b5b-gjccp   0/1     Running             0          18s
        pod/kube-prometheus-stack-operator-584f446c98-nsm8c             0/1     ContainerCreating   0          18s
        pod/kube-prometheus-stack-prometheus-node-exporter-p7j45        1/1     Running             0          18s
        pod/kube-prometheus-stack-prometheus-node-exporter-slqhj        1/1     Running             0          18s
      
        NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
        service/kube-prometheus-stack-alertmanager               ClusterIP   10.96.42.69     <none>        9093/TCP,8080/TCP               18s
        service/kube-prometheus-stack-grafana                    NodePort    10.96.172.144   <none>        80:30002/TCP                    18s
        service/kube-prometheus-stack-kube-state-metrics         ClusterIP   10.96.34.132    <none>        8080/TCP                        18s
        service/kube-prometheus-stack-operator                   ClusterIP   10.96.116.217   <none>        443/TCP                         18s
        service/kube-prometheus-stack-prometheus                 NodePort    10.96.30.242    <none>        9090:30001/TCP,8080:30485/TCP   18s
        service/kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.96.83.43     <none>        9100/TCP  
      
        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl exec -it -n monitoring deploy/kube-prometheus-stack-grafana -- grafana --version
        grafana version 12.3.1
        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl exec -it sts/prometheus-kube-prometheus-stack-prometheus -n monitoring -c prometheus -- prometheus --version
        prometheus, version 3.9.1 (branch: HEAD, revision: 9ec59baffb547e24f1468a53eb82901e58feabd8)
          build user:       root@61c3a9212c9e
          build date:       20260107-16:08:09
          go version:       go1.25.5
          platform:         linux/amd64
          tags:             netgo,builtinassets
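
      Both UIs are exposed as NodePort services above (Grafana on 30002, Prometheus on 30001), so from the host they should be reachable on any node IP. A quick reachability check, assuming the control-plane IP from the earlier node listing (the chart's default Grafana login is admin / prom-operator unless overridden):

        curl -sI http://192.168.10.100:30002 | head -n 1   # Grafana
        curl -sI http://192.168.10.100:30001 | head -n 1   # Prometheus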
      


    • Check the Kubernetes dashboards in Grafana


    • Install the certificate exporter and set up its dashboard

        (⎈|HomeLab:default) root@k8s-ctr:~# cat << EOF > cert-export-values.yaml
        # -- hostPaths Exporter
        hostPathsExporter:
          hostPathVolumeType: Directory
      
          daemonSets:
            cp:
              nodeSelector:
                node-role.kubernetes.io/control-plane: ""
              tolerations:
              - effect: NoSchedule
                key: node-role.kubernetes.io/control-plane
                operator: Exists
        EOF
      
        (⎈|HomeLab:default) root@k8s-ctr:~# helm install x509-certificate-exporter enix/x509-certificate-exporter -n monitoring --values cert-export-values.yaml
        NAME: x509-certificate-exporter
        LAST DEPLOYED: Sat Jan 24 00:34:37 2026
        NAMESPACE: monitoring
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        (⎈|HomeLab:default) root@k8s-ctr:~# helm list -n monitoring
        NAME                            NAMESPACE       REVISION        UPDATED                                 STATUS         CHART                            APP VERSION
        kube-prometheus-stack           monitoring      1               2026-01-24 00:14:59.530116612 +0900 KST deployed       kube-prometheus-stack-80.13.3    v0.87.1
        x509-certificate-exporter       monitoring      1               2026-01-24 00:34:37.079222386 +0900 KST deployed       x509-certificate-exporter-3.19.1 3.19.1
      
        (⎈|HomeLab:default) root@k8s-ctr:~# curl -s 10.244.0.4:9793/metrics | grep '^x509' | head -n 3
        x509_cert_expired{filename="apiserver-etcd-client.crt",filepath="/etc/kubernetes/pki/apiserver-etcd-client.crt",issuer_CN="etcd-ca",serial_number="5085519134918927718",subject_CN="kube-apiserver-etcd-client"} 0
        x509_cert_expired{filename="apiserver.crt",filepath="/etc/kubernetes/pki/apiserver.crt",issuer_CN="kubernetes",serial_number="8664196532623716359",subject_CN="kube-apiserver"} 0
        x509_cert_expired{filename="ca.crt",filepath="/etc/kubernetes/pki/ca.crt",issuer_CN="kubernetes",serial_number="303979118069449790",subject_CN="kubernetes"} 0
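
        The exporter also publishes x509_cert_not_after, which makes a days-to-expiry view a single query away. A sketch against the Prometheus NodePort, assuming the exporter's targets have already been scraped:

        # Days until each certificate expires (negative = already expired)
        curl -s 'http://192.168.10.100:30001/api/v1/query' \
          --data-urlencode 'query=(x509_cert_not_after - time()) / 86400'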
      


  3. Certificate renewal

     (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system kubeadm-config | grep -i cert
     caCertificateValidityPeriod: 87600h0m0s
     certificateValidityPeriod: 8760h0m0s
    
     (⎈|HomeLab:default) root@k8s-ctr:~# kubeadm certs check-expiration -v 6
    
     (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
     Certificate:
         Data:
             Version: 3 (0x2)
             Serial Number: 9019049356910942135 (0x7d2a199aea6457b7)
             Signature Algorithm: sha256WithRSAEncryption
             Issuer: CN=kubernetes
             Validity
                 Not Before: Jan 24 00:18:08 2026 GMT
                 Not After : Jan 24 00:23:08 2027 GMT
             Subject: CN=kube-apiserver
     ==================== (output truncated) ====================
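
     Checking expiry is only half the job. To actually renew, kubeadm can regenerate every certificate it manages in place; the control-plane static pods then need to be restarted to pick up the new files. A sketch of the standard procedure:

     # Renew all kubeadm-managed certificates on this control-plane node
     kubeadm certs renew all

     # Force kubelet to recreate the control-plane static pods
     # (moving the manifests out and back triggers the restart)
     mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
     sleep 20
     mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests

     # Confirm the new expiry dates
     kubeadm certs check-expiration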

K8S Upgrade by kubeadm

  • Kubernetes releases three minor versions per year, and the three most recent minor versions receive patch support.

Version skew policy

  1. In an HA cluster, the lowest running kube-apiserver version is the baseline for every other component.
  2. kube-apiserver instances (HA) may differ by at most one minor version (N / N-1), and an upgrade always starts with the apiserver.
  3. kubelet / kube-proxy must never be newer than the apiserver, and may be at most 3 minor versions older.
  4. kube-controller-manager, kube-scheduler, and cloud-controller-manager must never be newer than the apiserver, and may be at most 1 minor version older.
  5. kubectl is supported within ±1 minor version of the apiserver (a quick way to check the current skew is shown below).
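
  Before planning an upgrade, a minimal check shows where each component currently sits relative to this policy:

    kubectl version                  # client vs API server skew
    kubeadm version -o short         # kubeadm itself
    kubectl get nodes -o custom-columns=NAME:.metadata.name,KUBELET:.status.nodeInfo.kubeletVersion
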
Deploying the lab environment
C:\Users\bom\Desktop\스터디\upgrade_week3>vagrant up
Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
Bringing machine 'k8s-w1' up with 'virtualbox' provider...
Bringing machine 'k8s-w2' up with 'virtualbox' provider...
  1. Preparation

    • Install kube-prometheus-stack

        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it sts/prometheus-kube-prometheus-stack-prometheus -n monitoring -c prometheus -- prometheus --version
        prometheus, version 3.9.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n monitoring deploy/kube-prometheus-stack-grafana -- grafana --version
        grafana version 12.3.1
    • etcd backup

        ## etcd backup
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl images | grep etcd
        registry.k8s.io/etcd                      3.5.24-0            8cb12dd0c3e42       23.7MB
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system etcd-k8s-ctr -- etcdctl version
        etcdctl version: 3.5.24
        API version: 3.5
        (⎈|HomeLab:N/A) root@k8s-ctr:~# ETCD_VER=3.5.24
        (⎈|HomeLab:N/A) root@k8s-ctr:~# ARCH=amd64
        (⎈|HomeLab:N/A) root@k8s-ctr:~#
        (⎈|HomeLab:N/A) root@k8s-ctr:~# curl -L https://github.com/etcd-io/etcd/releases/download/v${ETCD_VER}/etcd-v${ETCD_VER}-linux-${ARCH}.tar.gz -o /tmp/etcd-v${ETCD_VER}.tar.gz
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
        100 21.3M  100 21.3M    0     0  17.4M      0  0:00:01  0:00:01 --:--:-- 17.4M
        (⎈|HomeLab:N/A) root@k8s-ctr:~# mkdir -p /tmp/etcd-download
        (⎈|HomeLab:N/A) root@k8s-ctr:~# tar xzvf /tmp/etcd-v${ETCD_VER}.tar.gz -C /tmp/etcd-download --strip-components=1
        etcd-v3.5.24-linux-amd64/Documentation/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/v3lock.swagger.json
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/v3election.swagger.json
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/rpc.swagger.json
        etcd-v3.5.24-linux-amd64/Documentation/README.md
        etcd-v3.5.24-linux-amd64/README-etcdutl.md
        etcd-v3.5.24-linux-amd64/READMEv2-etcdctl.md
        etcd-v3.5.24-linux-amd64/README-etcdctl.md
        etcd-v3.5.24-linux-amd64/README.md
        etcd-v3.5.24-linux-amd64/etcdutl
        etcd-v3.5.24-linux-amd64/etcdctl
        etcd-v3.5.24-linux-amd64/etcd
        (⎈|HomeLab:N/A) root@k8s-ctr:~# mv /tmp/etcd-download/etcdctl /usr/local/bin/
        mv /tmp/etcd-download/etcdutl /usr/local/bin/
        chown root:root /usr/local/bin/etcdctl
        chown root:root /usr/local/bin/etcdutl
        (⎈|HomeLab:N/A) root@k8s-ctr:~# etcdctl version
        etcdctl version: 3.5.24
        API version: 3.5
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# etcdctl snapshot save /backup/etcd-snapshot-$(date +%F).db
        {"level":"info","ts":"2026-01-24T01:04:27.603103+0900","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/backup/etcd-snapshot-2026-01-24.db.part"}
  2. Flannel CNI upgrade

    • Download images

        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl images | grep flannel
        ghcr.io/flannel-io/flannel-cni-plugin     v1.7.1-flannel1     cca2af40a4a9e       4.88MB
        ghcr.io/flannel-io/flannel                v0.27.3             5de71980e553f       34MB
        ghcr.io/flannel-io/flannel                v0.27.4             e83704a177312       34.1MB
    • Helm upgrade

      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF > flannel.yaml
        podCidr: "10.244.0.0/16"
        flannel:
          cniBinDir: "/opt/cni/bin"
          cniConfDir: "/etc/cni/net.d"
          args:
          - "--ip-masq"
          - "--kube-subnet-mgr"
          - "--iface=enp0s9"
          backend: "vxlan"
        image:
          tag: v0.27.4
        EOF
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# helm upgrade flannel flannel/flannel -n kube-flannel -f flannel.yaml --version 0.27.4
        Release "flannel" has been upgraded. Happy Helming!
        NAME: flannel
        LAST DEPLOYED: Sat Jan 24 01:07:43 2026
        NAMESPACE: kube-flannel
        STATUS: deployed
        REVISION: 2
        TEST SUITE: None
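
        After the release upgrades, it is worth confirming the DaemonSet actually rolled out the new image (the DaemonSet name is inferred from the pod listing later in this post):

        kubectl -n kube-flannel rollout status ds/kube-flannel-ds
        kubectl -n kube-flannel get pods -o wide
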
  3. Rocky Linux OS minor version upgrade

    • Upgrade

        (⎈|HomeLab:N/A) root@k8s-ctr:~# rpm -q containerd.io
        containerd.io-2.1.5-1.el10.aarch64
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y 'dnf-command(versionlock)'
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf versionlock add containerd.io
        Adding versionlock on: containerd.io-0:2.1.5-1.el10.*
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf versionlock list
        containerd.io-0:2.1.5-1.el10.*
        -------------------------------------------------------------
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf -y update
        Running scriptlet: kernel-modules-core-6.12.0-124.27.1.el10_1.aarch64                                           584/584 
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# reboot
        (⎈|HomeLab:N/A) root@k8s-ctr:~# ping 192.168.10.100
        64 bytes from 192.168.10.100: icmp_seq=47 ttl=64 time=0.454 ms
        64 bytes from 192.168.10.100: icmp_seq=48 ttl=64 time=0.532 ms
        Request timeout for icmp_seq 49                                 # ping fails while the node is rebooting
        Request timeout for icmp_seq 50
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide | grep k8s-ctr
        default        curl-pod                                                    1/1     Running   1 (2m25s ago)   61m    10.244.0.3       k8s-ctr   <none>           <none>
        kube-flannel   kube-flannel-ds-f2572                                       1/1     Running   1 (2m25s ago)   15m    192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    coredns-668d6bf9bc-ctkmb                                    1/1     Running   1 (2m25s ago)   142m   10.244.0.2       k8s-ctr   <none>           <none>
        kube-system    etcd-k8s-ctr                                                1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-apiserver-k8s-ctr                                      1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-controller-manager-k8s-ctr                             1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-proxy-wwd9c                                            1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-scheduler-k8s-ctr                                      1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        monitoring     kube-prometheus-stack-prometheus-node-exporter-6k5rk        1/1     Running   1 (2m25s ago)   36m    192.168.10.100   k8s-ctr   <none>           <none>
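
        Once the node is back, a quick check confirms the new kernel is running and the versionlocked containerd survived the update:

        cat /etc/rocky-release    # OS minor version after the update
        uname -r                  # should match the kernel pulled in by dnf update
        rpm -q containerd.io      # should still be the pinned 2.1.5 build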
      
  4. Upgrade kubeadm, kubelet, and kubectl

    • Considerations (upgrade order)

      • Upgrade kubeadm
      • Upgrade kubelet / kubectl
      • Restart kubelet
    • Note on containerd

      • containerd does not need to be restarted
    • Perform the upgrade

        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf list --showduplicates kubeadm --disableexcludes=kubernetes
        Installed Packages
        kubeadm.aarch64                                       1.32.11-150500.1.1                                       @kubernetes
        Available Packages
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y --disableexcludes=kubernetes kubeadm-1.33.7-150500.1.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade plan
        [upgrade/versions] Target version: v1.33.7
        [upgrade/versions] Latest version in the v1.32 series: v1.32.11
      
        kubeadm upgrade apply performs, among other things:
        - an etcd schema check
        - replacement of the kube-apiserver / controller-manager / scheduler static pods
        - a CoreDNS upgrade
        - a kube-proxy upgrade
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm config images pull
        [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.7
        [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.7
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/kube-proxy:v1.33.7
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/coredns/coredns:v1.12.0
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade apply v1.33.7
        [upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
        [upgrade] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
        [upgrade/preflight] Running preflight checks
        [upgrade] Running cluster health checks
        [upgrade/preflight] You have chosen to upgrade the cluster version to "v1.33.7"
        [upgrade/versions] Cluster version: v1.32.11
        [upgrade/versions] kubeadm version: v1.33.7
        [upgrade] Are you sure you want to proceed? [y/N]: y
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y --disableexcludes=kubernetes kubeadm-1.34.3-150500.1.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade plan
        [upgrade/versions] Target version: v1.34.3
        [upgrade/versions] Latest version in the v1.33 series: v1.33.7
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/kube-proxy:v1.34.3
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/coredns/coredns:v1.12.1
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/pause:3.10.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade apply v1.34.3 --yes
        [upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
        [upgrade] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
        [upgrade/preflight] Running preflight checks
        [upgrade] Running cluster health checks
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y --disableexcludes=kubernetes kubelet-1.34.3-150500.1.1 kubectl-1.34.3-150500.1.1
        Upgrading:
         kubectl                  aarch64                  1.34.3-150500.1.1                    kubernetes                  9.7 M
         kubelet                  aarch64                  1.34.3-150500.1.1
      
         (⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl daemon-reload
         (⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl restart kubelet
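
        The log above covers only the control-plane node. For the workers, the standard kubeadm procedure is sketched below (same package versions assumed; run the kubectl commands from the control plane):

        # Control plane: move workloads off the worker
        kubectl drain k8s-w1 --ignore-daemonsets --delete-emptydir-data

        # Worker (k8s-w1): upgrade kubeadm, refresh the node config, then kubelet/kubectl
        dnf install -y --disableexcludes=kubernetes kubeadm-1.34.3-150500.1.1
        kubeadm upgrade node
        dnf install -y --disableexcludes=kubernetes kubelet-1.34.3-150500.1.1 kubectl-1.34.3-150500.1.1
        systemctl daemon-reload && systemctl restart kubelet

        # Control plane: return the worker to service
        kubectl uncordon k8s-w1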
