Kubeadm deep dive

  • Introduction to kubeadm

    • kubeadm init

      Bootstraps (initially provisions) the first Kubernetes control-plane node.

    • kubeadm join

      Bootstraps a Kubernetes worker node or an additional control-plane node

      and joins it to an existing Kubernetes cluster.

    • kubeadm upgrade

      Upgrades a Kubernetes cluster to a newer version.

    • kubeadm reset

      Reverts all changes that a kubeadm init or kubeadm join run

      applied to this host.

  • Kubeadm lab preparation (Vagrantfile)

    
      PS C:\Users\bom\Desktop\스터디\week3> vagrant up
      Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
      Bringing machine 'k8s-w1' up with 'virtualbox' provider...
      Bringing machine 'k8s-w2' up with 'virtualbox' provider...
      ==> k8s-ctr: Box 'bento/rockylinux-10.0' could not be found. Attempting to find and install...
          k8s-ctr: Box Provider: virtualbox
          k8s-ctr: Box Version: 202510.26.0
    
          ######### (output truncated) #############
    
          k8s-w2: Inserting generated public key within guest...
          k8s-w2: Removing insecure key from the guest if it's present...
          k8s-w2: Key inserted! Disconnecting and reconnecting using new SSH key...
      ==> k8s-w2: Machine booted and ready!
      ==> k8s-w2: Checking for guest additions in VM...
      ==> k8s-w2: Setting hostname...
      ==> k8s-w2: Configuring and enabling network interfaces...
  1. Common pre-setup

    • Check basic information

        PS C:\Users\bom\Desktop\스터디\week3> vagrant ssh k8s-ctr
      
        This system is built by the Bento project by Chef Software
        More information can be found at https://github.com/chef/bento
      
        Use of this system is acceptance of the OS vendor EULA and License Agreements.
        vagrant@k8s-ctr:~$
      
        vagrant@k8s-ctr:~$ whoami
        vagrant
        vagrant@k8s-ctr:~$ id
        uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
        vagrant@k8s-ctr:~$ pwd
        /home/vagrant
      
        vagrant@k8s-ctr:~$ rpm -aq | grep release
        rocky-release-10.0-1.6.el10.noarch
      
    • Time/NTP configuration: every node needs a synchronized clock for certificate expiry checks, log timestamps, and so on.

        root@k8s-ctr:~# timedatectl status
                       Local time: Wed 2026-01-21 14:11:22 UTC
                   Universal time: Wed 2026-01-21 14:11:22 UTC
                         RTC time: Wed 2026-01-21 14:11:21
                        Time zone: UTC (UTC, +0000)
        System clock synchronized: yes
                      NTP service: active
                  RTC in local TZ: yes
      
        root@k8s-ctr:~# timedatectl set-timezone Asia/Seoul
        root@k8s-ctr:~# date
        Wed Jan 21 11:11:44 PM KST 2026
      
        root@k8s-ctr:~# chronyc sources -v
      
          .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
         / .- Source state '*' = current best, '+' = combined, '-' = not combined,
        | /             'x' = may be in error, '~' = too variable, '?' = unusable.
        ||                                                 .- xxxx [ yyyy ] +/- zzzz
        ||      Reachability register (octal) -.           |  xxxx = adjusted offset,
        ||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
        ||                                \     |          |  zzzz = estimated error.
        ||                                 |    |           \
        MS Name/IP address         Stratum Poll Reach LastRx Last sample
        ===============================================================================
        ^* time.ravnus.com               2   6   377    34    +85us[ +280us] +/- 2917us
        ^+ ec2-3-39-176-65.ap-north>     2   6   377    36   +383us[ +576us] +/- 5118us
        ^- 121.174.142.82                3   6   377    34  +1248us[+1248us] +/-   32ms
        ^- ipv4.ntp3.rbauman.com         2   6   377    56  +1540us[+1726us] +/-   18ms
      
    • Disable SELinux and firewalld

        root@k8s-ctr:~# getenforce
        Enforcing
        root@k8s-ctr:~# sestatus
        SELinux status:                 enabled
        SELinuxfs mount:                /sys/fs/selinux
        SELinux root directory:         /etc/selinux
        Loaded policy name:             targeted
        Current mode:                   enforcing
        Mode from config file:          enforcing
        Policy MLS status:              enabled
        Policy deny_unknown status:     allowed
        Memory protection checking:     actual (secure)
        Max kernel policy version:      33
        root@k8s-ctr:~# sestatus ^C
        root@k8s-ctr:~# setenforce 0
        root@k8s-ctr:~# sealert ^C
        root@k8s-ctr:~# sestatus
        SELinux status:                 enabled
        SELinuxfs mount:                /sys/fs/selinux
        SELinux root directory:         /etc/selinux
        Loaded policy name:             targeted
        Current mode:                   permissive
        Mode from config file:          enforcing
        Policy MLS status:              enabled
        Policy deny_unknown status:     allowed
        Memory protection checking:     actual (secure)
        Max kernel policy version:      33
        root@k8s-ctr:~# cat /etc/selinux/config | grep ^SELINUX
        SELINUX=enforcing
        SELINUXTYPE=targeted
      
        root@k8s-ctr:~# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
        cat /etc/selinux/config | grep ^SELINUX
        SELINUX=permissive
        SELINUXTYPE=targeted
      
        root@k8s-ctr:~# systemctl status firewalld
        ○ firewalld.service - firewalld - dynamic firewall daemon
             Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; preset: enabled)
             Active: inactive (dead)
               Docs: man:firewalld(1)
      
        Jan 21 23:02:55 localhost systemd[1]: Starting firewalld.service - firewalld - dynamic firewall daemon...
        Jan 21 23:02:56 localhost systemd[1]: Started firewalld.service - firewalld - dynamic firewall daemon.
        Jan 21 23:14:39 k8s-ctr systemd[1]: Stopping firewalld.service - firewalld - dynamic firewall daemon...
        Jan 21 23:14:39 k8s-ctr systemd[1]: firewalld.service: Deactivated successfully.
        Jan 21 23:14:39 k8s-ctr systemd[1]: Stopped firewalld.service - firewalld - dynamic firewall daemon.
        Jan 21 23:14:39 k8s-ctr systemd[1]: firewalld.service: Consumed 799ms CPU time, 69.6M memory peak.
    • Disable swap

        root@k8s-ctr:~# lsblk
        NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
        sda      8:0    0  64G  0 disk
        ├─sda1   8:1    0   1M  0 part
        ├─sda2   8:2    0   3G  0 part [SWAP]
        └─sda3   8:3    0  61G  0 part /
        root@k8s-ctr:~# swapoff -a
        root@k8s-ctr:~# lsblk
        NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
        sda      8:0    0  64G  0 disk
        ├─sda1   8:1    0   1M  0 part
        ├─sda2   8:2    0   3G  0 part
        └─sda3   8:3    0  61G  0 part /
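      Note that `swapoff -a` only lasts until the next reboot; for the kubelet's no-swap requirement to survive reboots, the swap entry in /etc/fstab must be disabled as well. A minimal sketch, demonstrated on a temporary copy of an fstab (the UUIDs below are placeholders, not values from this lab):

      ```shell
      # Persist swap-off by commenting out the swap line in /etc/fstab.
      # Demonstrated on a temporary file; UUIDs are placeholders.
      f=$(mktemp)
      printf 'UUID=aaaa-1111 /    xfs  defaults 0 0\nUUID=bbbb-2222 none swap defaults 0 0\n' > "$f"
      sed -i '/\bswap\b/ s/^/#/' "$f"   # prefix any swap entry with '#'
      grep swap "$f"                     # the swap line is now commented out
      rm -f "$f"
      ```

      On the real node, run the same `sed` against /etc/fstab itself (after checking what the swap line actually looks like there).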
    • Load kernel modules and set kernel parameters

        root@k8s-ctr:~# lsmod | grep -iE 'overlay|br_netfilter'
        root@k8s-ctr:~# modprobe overlay
        modprobe br_netfilter
        root@k8s-ctr:~# lsmod | grep -iE 'overlay|br_netfilter'
        br_netfilter           36864  0
        bridge                417792  1 br_netfilter
        overlay               245760  0
      
        root@k8s-ctr:~# cat <<EOF | tee /etc/sysctl.d/k8s.conf
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        EOF
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        root@k8s-ctr:~# tree /etc/sysctl.d/
        /etc/sysctl.d/
        ├── 99-sysctl.conf -> ../sysctl.conf
        └── k8s.conf
      
        1 directory, 2 files
        root@k8s-ctr:~# sysctl --system
        * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
        * Applying /usr/lib/sysctl.d/10-map-count.conf ...
        * Applying /usr/lib/sysctl.d/50-coredump.conf ...
        * Applying /usr/lib/sysctl.d/50-default.conf ...
        * Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
        * Applying /usr/lib/sysctl.d/50-pid-max.conf ...
        * Applying /usr/lib/sysctl.d/50-redhat.conf ...
        * Applying /etc/sysctl.d/99-sysctl.conf ...
        * Applying /etc/sysctl.d/k8s.conf ...
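      After `sysctl --system`, it is worth confirming the values actually took effect; on the node itself that is simply `sysctl net.ipv4.ip_forward` and friends. As a reproducible sketch, the same key=value check run against a temporary copy of k8s.conf:

      ```shell
      # Verify every key in k8s.conf is set to 1 (temporary copy so the check
      # itself is reproducible; on a live node, query sysctl directly instead).
      f=$(mktemp)
      printf '%s\n' \
        'net.bridge.bridge-nf-call-iptables  = 1' \
        'net.bridge.bridge-nf-call-ip6tables = 1' \
        'net.ipv4.ip_forward                 = 1' > "$f"
      awk -F= '{gsub(/[[:space:]]/,"")} $2 != "1" {bad=1} END {exit bad}' "$f" \
        && echo "all k8s sysctls = 1"
      rm -f "$f"
      ```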
    • Configure /etc/hosts

        root@k8s-ctr:~# cat /etc/hosts
        # Loopback entries; do not change.
        # For historical reasons, localhost precedes localhost.localdomain:
        127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
        # See hosts(5) for proper format and other examples:
        # 192.168.1.10 foo.example.org foo
        # 192.168.1.13 bar.example.org bar
        192.168.10.100 k8s-ctr
        192.168.10.101 k8s-w1
        192.168.10.102 k8s-w2
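      Before clustering, a quick loop over the expected hostnames confirms all three nodes are present. Sketched here against a temporary hosts file so the check is reproducible; on the real nodes you would point it at /etc/hosts or use `getent hosts`:

      ```shell
      # Check that each expected node entry exists in a hosts file.
      hosts=$(mktemp)
      printf '192.168.10.100 k8s-ctr\n192.168.10.101 k8s-w1\n192.168.10.102 k8s-w2\n' > "$hosts"
      for h in k8s-ctr k8s-w1 k8s-w2; do
        grep -qw "$h" "$hosts" && echo "ok: $h" || echo "MISSING: $h"
      done
      rm -f "$hosts"
      ```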
  2. Common CRI setup (containerd)

    • Install containerd (runc) v2.1.5

      For a smooth lab run, pin containerd to version 2.1.5.


        root@k8s-ctr:~# dnf repolist
        repo id                                             repo name
        appstream                                           Rocky Linux 10 - AppStream
        baseos                                              Rocky Linux 10 - BaseOS
        extras                                              Rocky Linux 10 - Extras
      
        root@k8s-ctr:~# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
        root@k8s-ctr:~# dnf repolist
        repo id                                                repo name
        appstream                                              Rocky Linux 10 - AppStream
        baseos                                                 Rocky Linux 10 - BaseOS
        docker-ce-stable                                       Docker CE Stable - x86_64
        extras                                                 Rocky Linux 10 - Extras
        root@k8s-ctr:~# dnf makecache
        Docker CE Stable - x86_64                                                               188 kB/s |  16 kB     00:00
        Rocky Linux 10 - BaseOS                                                                 8.2 MB/s | 7.6 MB     00:00
        Rocky Linux 10 - AppStream                                                              3.0 MB/s | 2.1 MB     00:00
        Rocky Linux 10 - Extras                                                                  11 kB/s | 5.9 kB     00:00
        Metadata cache created.
      
        root@k8s-ctr:~# dnf list --showduplicates containerd.io
        Last metadata expiration check: 0:00:15 ago on Wed 21 Jan 2026 11:30:36 PM KST.
        Available Packages
        containerd.io.x86_64                                  1.7.23-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.24-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.25-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.26-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.27-3.1.el10                                   docker-ce-stable
        containerd.io.x86_64                                  1.7.28-1.el10                                     docker-ce-stable
        containerd.io.x86_64                                  1.7.28-2.el10                                     docker-ce-stable
        containerd.io.x86_64                                  1.7.29-1.el10                                     docker-ce-stable
        containerd.io.x86_64                                  2.1.5-1.el10                                      docker-ce-stable
        containerd.io.x86_64                                  2.2.0-2.el10                                      docker-ce-stable
        containerd.io.x86_64                                  2.2.1-1.el10
      
        root@k8s-ctr:~# dnf install -y containerd.io-2.1.5-1.el10
        Last metadata expiration check: 0:00:28 ago on Wed 21 Jan 2026 11:30:36 PM KST.
        Dependencies resolved.
        ========================================================================================================================
         Package                      Architecture          Version                       Repository                       Size
        ========================================================================================================================
        Installing:
         containerd.io                x86_64                2.1.5-1.el10                  docker-ce-stable                 34 M
      
        Transaction Summary
        ========================================================================================================================
        Install  1 Package
      
        ###################### (output truncated) ################################
      
        root@k8s-ctr:~# which runc && runc --version
        /usr/bin/runc
        runc version 1.3.3
        commit: v1.3.3-0-gd842d771
        spec: 1.2.1
        go: go1.24.9
        libseccomp: 2.5.3
      
        root@k8s-ctr:~# containerd config default | tee /etc/containerd/config.toml
        version = 3
        root = '/var/lib/containerd'
        state = '/run/containerd'
        temp = ''
        disabled_plugins = []
        required_plugins = []
        oom_score = 0
        imports = []
      
        root@k8s-ctr:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
        root@k8s-ctr:~# cat /etc/containerd/config.toml | grep -i systemdcgroup
                    SystemdCgroup = true
      
      
    root@k8s-ctr:~# systemctl daemon-reload
    root@k8s-ctr:~# systemctl enable --now containerd
    Created symlink '/etc/systemd/system/multi-user.target.wants/containerd.service' → '/usr/lib/systemd/system/containerd.service'.
    root@k8s-ctr:~# systemctl status containerd --no-pager
    ● containerd.service - containerd container runtime
         Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; preset: disabled)
         Active: active (running) since Wed 2026-01-21 23:33:28 KST; 5s ago
     Invocation: ef390d0ab56144b19688127d98415f72
           Docs: https://containerd.io

     root@k8s-ctr:~# containerd config dump | grep -n containerd.sock
    11:  address = '/run/containerd/containerd.sock'
    root@k8s-ctr:~# ss -xl | grep containerd
    u_str LISTEN 0      4096        /run/containerd/containerd.sock.ttrpc 20071            * 0
    u_str LISTEN 0      4096              /run/containerd/containerd.sock 20072            * 0
    root@k8s-ctr:~# ss -xnp | grep containerd
    u_str ESTAB 0      0                                                 * 20977            * 20069 users:(("containerd",pid=5439,fd=2),("containerd",pid=5439,fd=1))                                                                                                                                                                                                       
    root@k8s-ctr:~# ctr --address /run/containerd/containerd.sock version
    Client:
      Version:  v2.1.5
      Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
      Go version: go1.24.9

    Server:
      Version:  v2.1.5
      Revision: fcd43222d6b07379a4be9786bda52438f0dd16a1
      UUID: c0182a7f-b72c-4269-ba99-f2cf345cdfdc
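    One loose end worth closing here: later, `kubeadm init` warns that the runtime's sandbox image is inconsistent and recommends registry.k8s.io/pause:3.10. One way to silence that is to pin the sandbox image in config.toml. The key path below assumes containerd 2.x config v3 ('io.containerd.cri.v1.images' / pinned_images); verify it against your own `containerd config default` output. Sketched on a temporary copy:

    ```shell
    # Pin the CRI sandbox (pause) image in containerd's config.toml.
    # Key path is an assumption for containerd 2.x config v3; confirm against
    # the generated /etc/containerd/config.toml before applying for real.
    f=$(mktemp)
    printf "[plugins.'io.containerd.cri.v1.images'.pinned_images]\n  sandbox = ''\n" > "$f"
    sed -i "s|sandbox = ''|sandbox = 'registry.k8s.io/pause:3.10'|" "$f"
    grep sandbox "$f"   # prints the pinned sandbox line
    rm -f "$f"
    ```

    Remember to `systemctl restart containerd` after editing the real file.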
  3. Common installation of kubeadm, kubelet, kubectl (v1.32.11)

    • Install kubeadm, kubelet, kubectl

      Note that the kubelet keeps crash-looping (activating / auto-restart) until kubeadm init generates /var/lib/kubelet/config.yaml; the failures shown below are expected at this stage.

        root@k8s-ctr:~# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        EOF
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
      
        root@k8s-ctr:~# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
        Last metadata expiration check: 0:00:05 ago on Wed 21 Jan 2026 11:44:29 PM KST.
        Dependencies resolved.
        ========================================================================================================================
         Package                       Architecture          Version                            Repository                 Size
        ========================================================================================================================
        Installing:
         kubeadm                       x86_64                1.32.11-150500.1.1                 kubernetes                 12 M
         kubectl                       x86_64                1.32.11-150500.1.1                 kubernetes                 11 M
         kubelet                       x86_64                1.32.11-150500.1.1                 kubernetes                 15 M
        Installing dependencies:
         cri-tools                     x86_64                1.32.0-150500.1.1                  kubernetes                7.1 M
         kubernetes-cni                x86_64                1.6.0-150500.1.1                   kubernetes                8.0 M
      
        Transaction Summary
        ========================================================================================================================
        Install  5 Packages
      
        ################ (output truncated) ###########################
      
        root@k8s-ctr:~# systemctl enable --now kubelet
        Created symlink '/etc/systemd/system/multi-user.target.wants/kubelet.service' → '/usr/lib/systemd/system/kubelet.service'.
        root@k8s-ctr:~# systemctl status kubelet
        ● kubelet.service - kubelet: The Kubernetes Node Agent
             Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
            Drop-In: /usr/lib/systemd/system/kubelet.service.d
                     └─10-kubeadm.conf
             Active: activating (auto-restart) (Result: exit-code) since Wed 2026-01-21 23:45:05 KST; 4s ago
         Invocation: 9290c9551a364306bdd2d324aca03c40
               Docs: https://kubernetes.io/docs/
            Process: 5703 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBEL>
           Main PID: 5703 (code=exited, status=1/FAILURE)
           Mem peak: 11.7M
                CPU: 79ms
      
        root@k8s-ctr:~# crictl info | jq
        {
          "cniconfig": {
            "Networks": [
              {
                "Config": {
                  "CNIVersion": "0.3.1",
                  "Name": "cni-loopback",
                  "Plugins": [
                    {
                      "Network": {
                        "ipam": {},
                        "type": "loopback"
                      },
                      "Source": "{\"type\":\"loopback\"}"
                    }
                  ],
                  "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
      
        ############## (output truncated) ############################
      
        root@k8s-ctr:~# systemctl is-active kubelet
        activating
        root@k8s-ctr:~# systemctl status kubelet --no-pager
        ● kubelet.service - kubelet: The Kubernetes Node Agent
             Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; preset: disabled)
            Drop-In: /usr/lib/systemd/system/kubelet.service.d
                     └─10-kubeadm.conf
             Active: activating (auto-restart) (Result: exit-code) since Wed 2026-01-21 23:46:17 KST; 3s ago
         Invocation: 7ad519af585240e48db775d8ae3a190d
               Docs: https://kubernetes.io/docs/
            Process: 5778 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
           Main PID: 5778 (code=exited, status=1/FAILURE)
           Mem peak: 13.4M
                CPU: 77ms
        root@k8s-ctr:~# journalctl -u kubelet --no-pager
        Jan 21 23:45:05 k8s-ctr systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
        Jan 21 23:45:05 k8s-ctr (kubelet)[5703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_KUBEADM_ARGS
        Jan 21 23:45:05 k8s-ctr kubelet[5703]: E0121 23:45:05.839407    5703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
      
        root@k8s-ctr:~# ls -l /run/containerd/containerd.sock
        srw-rw----. 1 root root 0 Jan 21 23:33 /run/containerd/containerd.sock
        root@k8s-ctr:~# ss -xl | grep containerd
        u_str LISTEN 0      4096        /run/containerd/containerd.sock.ttrpc 20071            * 0
        u_str LISTEN 0      4096              /run/containerd/containerd.sock 20072            * 0
        root@k8s-ctr:~# ss -xnp | grep containerd
        u_str ESTAB 0      0                                                 * 20977            * 20069 users:(("containerd",pid=5439,fd=2),("containerd",pid=5439,fd=1))  
  4. Building the k8s cluster with kubeadm, plus convenience setup (important)

    • Run kubeadm init

      • Preflight checks: verify CRI connectivity, root privileges, and that the kubelet meets the minimum version

      • Generate security material: certificates and keys for control-plane communication, written to /etc/kubernetes/pki

      • Generate kubeconfigs: config files for the kubelet, controller-manager, scheduler, and admin

      • Deploy control-plane components: kube-apiserver, controller-manager, scheduler, and etcd as static Pods

      • Start kubelet and wait: start the kubelet and wait until the API server reports healthy

      • Store cluster settings: save the kubeadm ClusterConfiguration into the kubeadm-config ConfigMap

      • Mark the control-plane node: apply the control-plane label and the NoSchedule taint

      • Bootstrap setup: create the bootstrap token and configure TLS/RBAC/cluster-info for node joins

      • Install required add-ons: kube-proxy (as a DaemonSet) and CoreDNS

        root@k8s-ctr:~# cat kubeadm-init.yaml
        apiVersion: kubeadm.k8s.io/v1beta4
        kind: InitConfiguration
        bootstrapTokens:
        - token: "123456.1234567890123456"
          ttl: "0s"
          usages:
          - signing
          - authentication
        nodeRegistration:
          kubeletExtraArgs:
          - name: node-ip
            value: "192.168.10.100" # defaults to 10.0.2.15 if unset
          criSocket: "unix:///run/containerd/containerd.sock"
        localAPIEndpoint:
          advertiseAddress: "192.168.10.100"
        ---
        apiVersion: kubeadm.k8s.io/v1beta4
        kind: ClusterConfiguration
        kubernetesVersion: "1.32.11"
        networking:
          podSubnet: "10.244.0.0/16"
          serviceSubnet: "10.96.0.0/16"

        root@k8s-ctr:~# kubeadm init --config="kubeadm-init.yaml"
        [init] Using Kubernetes version: v1.32.11
        [preflight] Running pre-flight checks
        [preflight] Pulling images required for setting up a Kubernetes cluster
        [preflight] This might take a minute or two, depending on the speed of your internet connection
        [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
        W0121 23:56:45.057914 6337 checks.go:843] detected that the sandbox image "" of the container runtime is inconsistent with that used by kubeadm.It is recommended to use "registry.k8s.io/pause:3.10" as the CRI sandbox image.

        root@k8s-ctr:~# mkdir -p /root/.kube
        root@k8s-ctr:~# cp -i /etc/kubernetes/admin.conf /root/.kube/config
        root@k8s-ctr:~# chown $(id -u):$(id -g) /root/.kube/config
        root@k8s-ctr:~# kubectl cluster-info
        Kubernetes control plane is running at https://192.168.10.100:6443
        CoreDNS is running at https://192.168.10.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

        To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

        root@k8s-ctr:~# kubectl get node -owide
        NAME      STATUS     ROLES           AGE    VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                 CONTAINER-RUNTIME
        k8s-ctr   NotReady   control-plane   3m6s   v1.32.11   192.168.10.100   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5

        root@k8s-ctr:~# kubectl get pod -n kube-system -owide
        NAME                              READY   STATUS    RESTARTS   AGE    IP               NODE      NOMINATED NODE   READINESS GATES
        coredns-668d6bf9bc-n2xwv          0/1     Pending   0          3m3s   <none>           <none>    <none>           <none>
        coredns-668d6bf9bc-xsk2k          0/1     Pending   0          3m3s   <none>           <none>    <none>           <none>
        etcd-k8s-ctr                      1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-apiserver-k8s-ctr            1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-controller-manager-k8s-ctr   1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-proxy-9dpcs                  1/1     Running   0          3m3s   192.168.10.100   k8s-ctr   <none>           <none>
        kube-scheduler-k8s-ctr            1/1     Running   0          3m9s   192.168.10.100   k8s-ctr   <none>           <none>

        root@k8s-ctr:~# kubectl -n kube-public get configmap cluster-info
        NAME           DATA   AGE
        cluster-info   2      3m58s
        root@k8s-ctr:~# kubectl -n kube-public get configmap cluster-info -o yaml
        apiVersion: v1
        data:
          jws-kubeconfig-123456: eyJhbGciOiJIUzI1NiIsImtpZCI6IjEyMzQ1NiJ9..h64eqq42z6muTTM3tEU5EEZaBcK8--j1gmg7rtEXyo0
          kubeconfig: |
            apiVersion: v1
            clusters:
            - cluster:
                certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJQkRmelpiZXFoRDR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1qRXhORFV4TkRSYUZ3MHpOakF4TVRreE5EVTJORFJhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURHUFR2QnhTcFpKaDJncGdqeXpzZWhxZTR0VnhRend5Q0VVNXVSbzJ1ZlI5N21GeUpLa2VaRjVnYU0Kd0dsSlFZT2JUb0Z2cWNaZk5ZTzZ4SWlTUHdsTEJlYzFCa3Z3d2FQWDhaV1BSVjJHVGRpODc1U2Q2azA5bjVtNgpHOUgrdFVvbU1xY0d3dlU2RWhrYUxqTHVta3pXb1FXT2F3YjB6Q01NTkpjT2MwQkFScGVKTUxXYUI3RzNjYUJhCnh3d3pBS25GMUtNdlVqb2w4aEpaSUM4bGVMWDNzNnpuSUpRSDREVC9DcUZEYnEyM1BqWlFqaENGTkQxeUdtSTgKMVcyYUttTFJJWUVPYzUrYXV6NlByZUZ3TjVCU1JWNllIY1phMzJlSXE0S1k1TGkxdXAxdURReWtjWGFUN1Bvbwp0azY1OFpvWG5rRUdFL3M3NXo2ZGY2cjRiVVJuQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSb2FrV25YSHBwU0lwL05JMFE4eTJFOEJDQlpUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQ3p1ZFhjcU5FcApjSVI1YnJ0ajk1YUhCb0RtZ0NNUjYyaGlrMWdqVUtPaTR2L2tJaWVhUjRmN3hWSGhLNi8wK1M1eXpXVDA4WTVYClluenNuTllrbXQyc2lGWTlKMFFjL1U3cVlKemhleXlsdjZuMzBWMUVDN0pHMkF3WmFUekFyMUVVUEI5TmFGOGIKTFB1c1lSbWR5dnJUMjJzTUFadlZFd0dWZTgrYVFmcWNMZ1pmcW14dHh2WFhiTnRoOFozSWs2Qk9LYVBhQWFZKwpkSHltSDJLTWEvajhobEo5VG9sZFcvQURLY083Y1ZWU0NjcUY1SUFRNldLQTNwRVMzSENnTk5aa3MyUEN2R1NLCnQvelJBd2xUV0JxUWwvSmEraCtzZDQvdW1qR2JUcTZnaEx2V0tnQ09naU9kLzRZbUhFWkVqMWJuSzlmVE9EVGwKclNGSytHckVFUXZhCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
                server: https://192.168.10.100:6443
              name: ""
            contexts: null
            current-context: ""
            kind: Config
            preferences: {}
            users: null
        kind: ConfigMap
        metadata:
          creationTimestamp: "2026-01-21T14:57:19Z"
          name: cluster-info
          namespace: kube-public
          resourceVersion: "326"
          uid: 4f8df4c9-5dc4-4734-b1b7-e5a803feab7f
      
      
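      The bootstrap token embedded in kubeadm-init.yaml must follow kubeadm's documented `[a-z0-9]{6}.[a-z0-9]{16}` (id.secret) format; the id part ("123456") is what appears in the `jws-kubeconfig-123456` key of the cluster-info ConfigMap above. A quick format check:

      ```shell
      # Validate a kubeadm bootstrap token's shape: 6-char id, a dot, then a
      # 16-char secret, lowercase alphanumerics only. The token below is the
      # lab's throwaway value from kubeadm-init.yaml.
      token='123456.1234567890123456'
      if printf '%s\n' "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
        echo "token format ok"
      else
        echo "bad token format"
      fi
      ```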
    • Workflow convenience settings


    root@k8s-ctr:~# alias k=kubectl
    root@k8s-ctr:~# complete -o default -F __start_kubectl k
    root@k8s-ctr:~# echo 'alias k=kubectl' >> /etc/profile
    root@k8s-ctr:~# echo 'complete -o default -F __start_kubectl k' >> /etc/profile
    root@k8s-ctr:~# k get node
    NAME      STATUS     ROLES           AGE     VERSION
    k8s-ctr   NotReady   control-plane   6m26s   v1.32.11
    root@k8s-ctr:~# dnf install -y 'dnf-command(config-manager)'
    Last metadata expiration check: 0:19:06 ago on Wed 21 Jan 2026 11:44:43 PM KST.
    Package dnf-plugins-core-4.7.0-8.el10.noarch is already installed.
    Dependencies resolved.
    ========================================================================================================================
     Package                                 Architecture          Version                      Repository             Size
    ========================================================================================================================
    Upgrading:
     dnf-plugins-core                        noarch                4.7.0-9.el10                 baseos                 43 k
     python3-dnf-plugins-core                noarch                4.7.0-9.el10                 baseos                315 k
     yum-utils                               noarch                4.7.0-9.el10                 baseos                 34 k

    Transaction Summary
    ========================================================================================================================
    Upgrade  3 Packages

    Total download size: 392 k
    Downloading Packages:
    (1/3): dnf-plugins-core-4.7.0-9.el10.noarch.rpm                                         1.2 MB/s |  43 kB     00:00
    (2/3): yum-utils-4.7.0-9.el10.noarch.rpm                                                869 kB/s |  34 kB     00:00
    (3/3): python3-dnf-plugins-core-4.7.0-9.el10.noarch.rpm                                 6.0 MB/s | 315 kB     00:00
    ------------------------------------------------------------------------------------------------------------------------
    Total                                                                                   729 kB/s | 392 kB     00:00
    Running transaction check
    Transaction check succeeded.
    Running transaction test
    Transaction test succeeded.
    Running transaction
      Preparing        :                                                                                                1/1
      Upgrading        : python3-dnf-plugins-core-4.7.0-9.el10.noarch                                                   1/6
      Upgrading        : dnf-plugins-core-4.7.0-9.el10.noarch                                                           2/6
      Upgrading        : yum-utils-4.7.0-9.el10.noarch                                                                  3/6
      Cleanup          : yum-utils-4.7.0-8.el10.noarch                                                                  4/6
      Cleanup          : dnf-plugins-core-4.7.0-8.el10.noarch                                                           5/6
      Cleanup          : python3-dnf-plugins-core-4.7.0-8.el10.noarch                                                   6/6
      Running scriptlet: python3-dnf-plugins-core-4.7.0-8.el10.noarch                                                   6/6

    Upgraded:
      dnf-plugins-core-4.7.0-9.el10.noarch   python3-dnf-plugins-core-4.7.0-9.el10.noarch   yum-utils-4.7.0-9.el10.noarch

    Complete!
    root@k8s-ctr:~# dnf config-manager --add-repo https://kubecolor.github.io/packages/rpm/kubecolor.repo
    Adding repo from: https://kubecolor.github.io/packages/rpm/kubecolor.repo
    root@k8s-ctr:~# dnf repolist
    repo id                                                repo name
    appstream                                              Rocky Linux 10 - AppStream
    baseos                                                 Rocky Linux 10 - BaseOS
    docker-ce-stable                                       Docker CE Stable - x86_64
    extras                                                 Rocky Linux 10 - Extras
    kubecolor                                              packages for kubecolor
    kubernetes                                             Kubernetes
    root@k8s-ctr:~# dnf install -y kubecolor
    packages for kubecolor                                                                   18 kB/s | 949  B     00:00
    Dependencies resolved.
    ========================================================================================================================
     Package                      Architecture              Version                      Repository                    Size
    ========================================================================================================================
    Installing:
     kubecolor                    x86_64                    0.5.3-1                      kubecolor                    2.6 M

    Transaction Summary
    ========================================================================================================================
    Install  1 Package

    Total download size: 2.6 M
    Installed size: 5.9 M
    Downloading Packages:
    kubecolor_0.5.3_linux_amd64.rpm                                                         8.2 MB/s | 2.6 MB     00:00
    ------------------------------------------------------------------------------------------------------------------------
    Total                                                                                   8.1 MB/s | 2.6 MB     00:00
    ########## (snip) #######

    root@k8s-ctr:~# kubecolor get node
    NAME      STATUS     ROLES           AGE     VERSION
    k8s-ctr   NotReady   control-plane   6m44s   v1.32.11
    root@k8s-ctr:~# alias kc=kubecolor
    root@k8s-ctr:~# echo 'alias kc=kubecolor' >> /etc/profile
    root@k8s-ctr:~# kc get node
    NAME      STATUS     ROLES           AGE     VERSION
    k8s-ctr   NotReady   control-plane   6m52s   v1.32.11
    root@k8s-ctr:~# kc describe node
    Name:               k8s-ctr
    Roles:              control-plane
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8s-ctr
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/control-plane=
                        node.kubernetes.io/exclude-from-external-load-balancers=
    Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 21 Jan 2026 23:57:17 +0900
    Taints:             node-role.kubernetes.io/control-plane:NoSchedule
                        node.kubernetes.io/not-ready:NoSchedule
    Unschedulable:      false
    Lease:
      HolderIdentity:  k8s-ctr
      AcquireTime:     <unset>
      RenewTime:       Thu, 22 Jan 2026 00:04:11 +0900
    Conditions:
      Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
      ----             ------  -----------------                 ------------------                ------                       -------
      MemoryPressure   False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
      DiskPressure     False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
      PIDPressure      False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
      Ready            False   Thu, 22 Jan 2026 00:02:29 +0900   Wed, 21 Jan 2026 23:57:15 +0900   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
    Addresses:
      InternalIP:  192.168.10.100
      Hostname:    k8s-ctr
    Capacity:
      cpu:                4
      ephemeral-storage:  62374Mi
      hugepages-2Mi:      0
      memory:             3036932Ki
      pods:               110
    Allocatable:
      cpu:                4
      ephemeral-storage:  58863491385
      hugepages-2Mi:      0
      memory:             2934532Ki
      pods:               110
    System Info:
      Machine ID:                 fc9f882274fc4318b555010115a384ff
      System UUID:                f29c335e-dc4d-504d-b371-d8d01bebc7f7
      Boot ID:                    eb7b009f-47d6-4f8d-a944-97839286ddce
      Kernel Version:             6.12.0-55.39.1.el10_0.x86_64
      OS Image:                   Rocky Linux 10.0 (Red Quartz)
      Operating System:           linux
      Architecture:               amd64
      Container Runtime Version:  containerd://2.1.5
      Kubelet Version:            v1.32.11
      Kube-Proxy Version:         v1.32.11
    PodCIDR:                      10.244.0.0/24
    PodCIDRs:                     10.244.0.0/24
    Non-terminated Pods:          (5 in total)
      Namespace                   Name                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
      ---------                   ----                               ------------  ----------  ---------------  -------------  ---
      kube-system                 etcd-k8s-ctr                       100m (2%)     0 (0%)      100Mi (3%)       0 (0%)         6m51s
      kube-system                 kube-apiserver-k8s-ctr             250m (6%)     0 (0%)      0 (0%)           0 (0%)         6m51s
      kube-system                 kube-controller-manager-k8s-ctr    200m (5%)     0 (0%)      0 (0%)           0 (0%)         6m51s
      kube-system                 kube-proxy-9dpcs                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
      kube-system                 kube-scheduler-k8s-ctr             100m (2%)     0 (0%)      0 (0%)           0 (0%)         6m51s
    Allocated resources:
      (Total limits may be over 100 percent, i.e., overcommitted.)
      Resource           Requests    Limits
      --------           --------    ------
      cpu                650m (16%)  0 (0%)
      memory             100Mi (3%)  0 (0%)
      ephemeral-storage  0 (0%)      0 (0%)
      hugepages-2Mi      0 (0%)      0 (0%)
    Events:
      Type     Reason                   Age    From             Message
      ----     ------                   ----   ----             -------
      Normal   Starting                 6m44s  kube-proxy
      Normal   Starting                 6m51s  kubelet          Starting kubelet.
      Warning  InvalidDiskCapacity      6m51s  kubelet          invalid capacity 0 on image filesystem
      Normal   NodeAllocatableEnforced  6m51s  kubelet          Updated Node Allocatable limit across pods
      Normal   NodeHasSufficientMemory  6m51s  kubelet          Node k8s-ctr status is now: NodeHasSufficientMemory
      Normal   NodeHasNoDiskPressure    6m51s  kubelet          Node k8s-ctr status is now: NodeHasNoDiskPressure
      Normal   NodeHasSufficientPID     6m51s  kubelet          Node k8s-ctr status is now: NodeHasSufficientPID
      Normal   RegisteredNode           6m46s  node-controller  Node k8s-ctr event: Registered Node k8s-ctr in Controller
    root@k8s-ctr:~# dnf install -y git
    Last metadata expiration check: 0:00:18 ago on Thu 22 Jan 2026 12:03:57 AM KST.
    Dependencies resolved.
    ========================================================================================================================
     Package                         Architecture          Version                           Repository                Size
    ========================================================================================================================
    Installing:
     git                             x86_64                2.47.3-1.el10                     appstream                 50 k
    Installing dependencies:
     git-core                        x86_64                2.47.3-1.el10                     appstream                4.8 M
     git-core-doc                    noarch                2.47.3-1.el10                     appstream                3.1 M
     perl-Error                      noarch                1:0.17029-18.el10                 appstream                 40 k
     perl-File-Find                  noarch                1.44-512.2.el10_0                 appstream                 25 k
     perl-Git                        noarch                2.47.3-1.el10                     appstream                 37 k
     perl-TermReadKey                x86_64                2.38-24.el10                      appstream                 36 k
     perl-lib                        x86_64                0.65-512.2.el10_0                 appstream                 15 k

    Transaction Summary
    ========================================================================================================================
    Install  8 Packages

    ################### (snip) #############################

    root@k8s-ctr:~# git clone https://github.com/ahmetb/kubectx /opt/kubectx
    Cloning into '/opt/kubectx'...
    remote: Enumerating objects: 1540, done.
    remote: Counting objects: 100% (469/469), done.
    remote: Compressing objects: 100% (110/110), done.
    remote: Total 1540 (delta 407), reused 360 (delta 359), pack-reused 1071 (from 2)
    ####################### (snip) ####################

    root@k8s-ctr:~# cat << "EOT" >> /root/.bash_profile
    source /root/kube-ps1/kube-ps1.sh
    KUBE_PS1_SYMBOL_ENABLE=true
    function get_cluster_short() {
      echo "$1" | cut -d . -f1
    }
    KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
    KUBE_PS1_SUFFIX=') '
    PS1='$(kube_ps1)'$PS1
    EOT
    ```
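    A quick standalone check of the prompt helper defined above: `get_cluster_short` trims a cluster name at the first dot, which is how kube-ps1 shortens the display (the dotted cluster name below is made up for the demo).

    ```shell
    # redefine the helper from .bash_profile and try it on a dotted name
    get_cluster_short() {
      echo "$1" | cut -d . -f1
    }
    get_cluster_short "HomeLab.internal.example"   # prints: HomeLab
    ```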

    ![image.png](attachment:f67d0bbe-b15c-4937-8fc0-b7a7bc377d9e:image.png)

- Install the Flannel CNI (be sure to verify the flannel interface)

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe pod -n kube-system kube-controller-manager-k8s-ctr
    Name:                 kube-controller-manager-k8s-ctr
    Namespace:            kube-system
    Priority:             2000001000
    Priority Class Name:  system-node-critical
    Node:                 k8s-ctr/192.168.10.100
    Start Time:           Wed, 21 Jan 2026 23:57:20 +0900
    Labels:               component=kube-controller-manager
                          tier=control-plane
    Annotations:          kubernetes.io/config.hash: 7314ab3f0ec6401c196ca943fad44a05
                          kubernetes.io/config.mirror: 7314ab3f0ec6401c196ca943fad44a05
                          kubernetes.io/config.seen: 2026-01-21T23:57:20.682508931+09:00
                          kubernetes.io/config.source: file
    Status:               Running
    SeccompProfile:       RuntimeDefault
    IP:                   192.168.10.100
    IPs:
      IP:           192.168.10.100
    Controlled By:  Node/k8s-ctr

    (⎈|HomeLab:default) root@k8s-ctr:~# helm repo add flannel https://flannel-io.github.io/flannel
    "flannel" has been added to your repositories
    (⎈|HomeLab:default) root@k8s-ctr:~#
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl create namespace kube-flannel
    namespace/kube-flannel created
    (⎈|HomeLab:default) root@k8s-ctr:~# cat << EOF > flannel.yaml
    podCidr: "10.244.0.0/16"
    flannel:
      cniBinDir: "/opt/cni/bin"
      cniConfDir: "/etc/cni/net.d"
      args:
      - "--ip-masq"
      - "--kube-subnet-mgr"
    - "--iface=enp0s9"
      backend: "vxlan"
    EOF
    (⎈|HomeLab:default) root@k8s-ctr:~# helm install flannel flannel/flannel --namespace kube-flannel --version 0.27.3 -f flannel.yaml
    NAME: flannel
    LAST DEPLOYED: Thu Jan 22 00:14:31 2026
    NAMESPACE: kube-flannel
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None

    ########### the iface was set to enp0s8 instead of "--iface=enp0s9", so redeploying ###########
    (⎈|HomeLab:default) root@k8s-ctr:~# helm upgrade flannel flannel/flannel   -n kube-flannel   -f flannel.yaml
    Release "flannel" has been upgraded. Happy Helming!
    NAME: flannel
    LAST DEPLOYED: Thu Jan 22 00:24:14 2026
    NAMESPACE: kube-flannel
    STATUS: deployed
    REVISION: 2
    TEST SUITE: None

    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -n kube-system -owide
    NAME                              READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
    coredns-668d6bf9bc-n2xwv          1/1     Running   0          28m   10.244.0.3       k8s-ctr   <none>           <none>
    coredns-668d6bf9bc-xsk2k          1/1     Running   0          28m   10.244.0.2       k8s-ctr   <none>           <none>
    ```
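    The `--iface` value matters because flannel binds its VXLAN traffic to that NIC, and the redeploy above was needed precisely because the wrong one was set. A small hypothetical helper (`iface_for_prefix` is not a flannel tool) can pick the interface that owns the node-network address from `ip -4 -o addr` output; the sample lines below only imitate the lab VMs.

    ```shell
    # iface_for_prefix: print the first interface whose IPv4 address starts
    # with the given prefix; stdin is `ip -4 -o addr` style output
    iface_for_prefix() {
      awk -v p="$1" '$4 ~ "^"p { print $2; exit }'
    }
    # sample output imitating the lab VM (addresses are assumptions)
    sample='2: enp0s8    inet 10.0.2.15/24 brd 10.0.2.255 scope global enp0s8
    3: enp0s9    inet 192.168.10.100/24 brd 192.168.10.255 scope global enp0s9'
    printf '%s\n' "$sample" | iface_for_prefix "192.168.10."   # prints: enp0s9
    ```

    On a real node the same function would be fed `ip -4 -o addr` directly instead of the sample text.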

- Check node information and print the basic environment details

    Installing kubelet/kubeadm changes some kernel parameters.

    e.g., kernel.panic is changed from 0 to 10

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# systemctl is-active kubelet
    active
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe node
    Name:               k8s-ctr
    Roles:              control-plane
    Labels:             beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/arch=amd64
                        kubernetes.io/hostname=k8s-ctr
                        kubernetes.io/os=linux
                        node-role.kubernetes.io/control-plane=
                        node.kubernetes.io/exclude-from-external-load-balancers=
    Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"3a:76:87:a6:2d:bf"}
                        flannel.alpha.coreos.com/backend-type: vxlan
                        flannel.alpha.coreos.com/kube-subnet-manager: true
                        flannel.alpha.coreos.com/public-ip: 192.168.10.100
                        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                        node.alpha.kubernetes.io/ttl: 0
                        volumes.kubernetes.io/controller-managed-attach-detach: true
    CreationTimestamp:  Wed, 21 Jan 2026 23:57:17 +0900
    Taints:             node-role.kubernetes.io/control-plane:NoSchedule

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/sysconfig/kubelet
    tree /etc/kubernetes  | tee -a etc_kubernetes-2.txt
    tree /var/lib/kubelet | tee -a var_lib_kubelet-2.txt
    tree /run/containerd/ -L 3 | tee -a run_containerd-2.txt
    pstree -alnp | tee -a pstree-2.txt
    systemd-cgls --no-pager | tee -a systemd-cgls-2.txt
    lsns | tee -a lsns-2.txt
    ip addr | tee -a ip_addr-2.txt
    ss -tnlp | tee -a ss-2.txt
    df -hT | tee -a df-2.txt
    findmnt | tee -a findmnt-2.txt
    sysctl -a | tee -a sysctl-2.txt
    ```
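    The snapshot commands above exist so the before/after state can be diffed to see exactly what the kubelet/kubeadm install changed (e.g. kernel.panic). A minimal sketch of that idea, using stand-in snapshot files since the real ones come from `sysctl -a` on the host:

    ```shell
    TMP=$(mktemp -d)
    # stand-in snapshots; on the real host: sysctl -a | sort > "$TMP/before.txt"
    printf 'kernel.panic = 0\nnet.ipv4.ip_forward = 0\n' > "$TMP/before.txt"
    printf 'kernel.panic = 10\nnet.ipv4.ip_forward = 1\n' > "$TMP/after.txt"
    # show only the changed kernel.panic lines
    diff -u "$TMP/before.txt" "$TMP/after.txt" | grep '^[+-]kernel\.panic'
    ```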

- Check certificates

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system kubeadm-config
    Name:         kubeadm-config
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  <none>

    Data
    ====
    ClusterConfiguration:
    ----
    apiServer: {}
    apiVersion: kubeadm.k8s.io/v1beta4
    caCertificateValidityPeriod: 87600h0m0s
    certificateValidityPeriod: 8760h0m0s
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    encryptionAlgorithm: RSA-2048
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.k8s.io
    kind: ClusterConfiguration
    kubernetesVersion: v1.32.11
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/16
    proxy: {}
    scheduler: {}

    BinaryData
    ====

    (⎈|HomeLab:default) root@k8s-ctr:~# kubeadm certs check-expiration
    [check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
    [check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

    CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
    admin.conf                 Jan 21, 2027 14:56 UTC   364d            ca                      no
    apiserver                  Jan 21, 2027 14:56 UTC   364d            ca                      no
    apiserver-etcd-client      Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    apiserver-kubelet-client   Jan 21, 2027 14:56 UTC   364d            ca                      no
    controller-manager.conf    Jan 21, 2027 14:56 UTC   364d            ca                      no
    etcd-healthcheck-client    Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    etcd-peer                  Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    etcd-server                Jan 21, 2027 14:56 UTC   364d            etcd-ca                 no
    front-proxy-client         Jan 21, 2027 14:56 UTC   364d            front-proxy-ca          no
    scheduler.conf             Jan 21, 2027 14:56 UTC   364d            ca                      no
    super-admin.conf           Jan 21, 2027 14:56 UTC   364d            ca                      no

    CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
    ca                      Jan 19, 2036 14:56 UTC   9y              no
    etcd-ca                 Jan 19, 2036 14:56 UTC   9y              no
    front-proxy-ca          Jan 19, 2036 14:56 UTC   9y              no
    ```
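    `kubeadm certs check-expiration` is essentially reading each certificate's notAfter field. The same per-file check can be done with openssl; this sketch uses a throwaway self-signed cert as a stand-in for the files under /etc/kubernetes/pki:

    ```shell
    CERTDIR=$(mktemp -d)
    # throwaway 365-day cert standing in for a kubeadm-issued one
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj "/CN=demo-apiserver" \
      -keyout "$CERTDIR/key.pem" -out "$CERTDIR/cert.pem" 2>/dev/null
    openssl x509 -in "$CERTDIR/cert.pem" -noout -enddate
    # -checkend N exits 0 if the cert is still valid N seconds from now
    openssl x509 -in "$CERTDIR/cert.pem" -noout -checkend $((30*24*3600)) \
      && echo "more than 30 days left"
    ```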

- Check kubeconfig files

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/admin.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data:
    ############## (snip) ###################
    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/super-admin.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data:
    ############## (snip) ###################
    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/controller-manager.conf
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data:
    ############## (snip) ###################

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/pki/kubelet.crt | openssl x509 -text -noout
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 4215227672604660729 (0x3a7f7f3c28ff53f9)
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: CN=k8s-ctr-ca@1769007433
            Validity
                Not Before: Jan 21 13:57:13 2026 GMT
                Not After : Jan 21 13:57:13 2027 GMT
            Subject: CN=k8s-ctr@1769007433
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:

     (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/pki/kubelet-client-current.pem | openssl x509 -text -noout
    Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 5381870817680008066 (0x4ab03efe8adba382)
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: CN=kubernetes
            Validity
                Not Before: Jan 21 14:51:44 2026 GMT
                Not After : Jan 21 14:56:44 2027 GMT
            Subject: O=system:nodes, CN=system:node:k8s-ctr
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
    ```
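    The `certificate-authority-data` fields shown above are just base64-encoded PEM, so they can be decoded and inspected with openssl. A sketch of that (a demo kubeconfig is built first since the lab's admin.conf isn't at hand; the same grep/base64/openssl pipeline works on /etc/kubernetes/admin.conf):

    ```shell
    TMP=$(mktemp -d)
    # demo CA cert and a minimal kubeconfig embedding it
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
      -subj "/CN=demo-ca" -keyout "$TMP/key.pem" -out "$TMP/ca.pem" 2>/dev/null
    {
      printf 'apiVersion: v1\nclusters:\n- cluster:\n'
      printf '    certificate-authority-data: %s\n' "$(base64 -w0 < "$TMP/ca.pem")"
      printf '    server: https://192.168.10.100:6443\n  name: kubernetes\n'
    } > "$TMP/kubeconfig"
    # decode the embedded CA and print its subject
    grep 'certificate-authority-data' "$TMP/kubeconfig" | awk '{print $2}' \
      | base64 -d | openssl x509 -noout -subject
    ```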

- Check static pods: etcd, kube-apiserver, kube-scheduler, kube-controller-manager

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# tree /etc/kubernetes/manifests/
    /etc/kubernetes/manifests/
    ├── etcd.yaml
    ├── kube-apiserver.yaml
    ├── kube-controller-manager.yaml
    └── kube-scheduler.yaml

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /var/lib/kubelet/kubeadm-flags.env
    KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=192.168.10.100 --pod-infra-container-image=registry.k8s.io/pause:3.10"

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/manifests/etcd.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      annotations:
        kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.10.100:2379
      creationTimestamp: null
      labels:
        component: etcd
        tier: control-plane
      name: etcd
      namespace: kube-system
    spec:
      containers:
      - command:
        - etcd
        - --advertise-client-urls=https://192.168.10.100:2379
        - --cert-file=/etc/kubernetes/pki/etcd/server.crt

     (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get svc,ep
    NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   41m

    NAME                   ENDPOINTS             AGE
    endpoints/kubernetes   192.168.10.100:6443   41m

    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
    k8s-ctr 10.244.0.0/24
    ```
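    The node PodCIDR 10.244.0.0/24 shown above is one /24 carved out of the cluster podSubnet 10.244.0.0/16 (each node gets its own /24). A shell-arithmetic sketch of that containment check:

    ```shell
    # ip2int: dotted-quad IPv4 to 32-bit integer
    ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24)|($2<<16)|($3<<8)|$4 )); }
    # in_cidr <ip> <net/bits>: succeed if ip falls inside the CIDR
    in_cidr() {
      local net=${2%/*} bits=${2#*/}
      local mask=$(( bits ? 0xFFFFFFFF << (32-bits) & 0xFFFFFFFF : 0 ))
      [ $(( $(ip2int "$1") & mask )) -eq $(( $(ip2int "$net") & mask )) ]
    }
    in_cidr 10.244.0.0 10.244.0.0/16 && echo "node /24 sits inside the /16 podSubnet"
    ```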

- Verify the required add-ons

    ```bash
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get deploy -n kube-system coredns -owide
    NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                    SELECTOR
    coredns   2/2     2            2           41m   coredns      registry.k8s.io/coredns/coredns:v1.11.3   k8s-app=kube-dns
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-dns -owide
    NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
    coredns-668d6bf9bc-n2xwv   1/1     Running   0          41m   10.244.0.3   k8s-ctr   <none>           <none>
    coredns-668d6bf9bc-xsk2k   1/1     Running   0          41m   10.244.0.2   k8s-ctr   <none>           <none>
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get svc,ep -n kube-system
    NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
    service/kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   41m

    NAME                 ENDPOINTS                                               AGE
    endpoints/kube-dns   10.244.0.2:53,10.244.0.3:53,10.244.0.2:53 + 3 more...   41m
    (⎈|HomeLab:default) root@k8s-ctr:~# curl -s http://10.96.0.10:9153/metrics | head
    # HELP coredns_build_info A metric with a constant '1' value labeled by version, revision, and goversion from which CoreDNS was built.
    # TYPE coredns_build_info gauge
    coredns_build_info{goversion="go1.21.11",revision="a6338e9",version="1.11.3"} 1
    # HELP coredns_cache_entries The number of elements in the cache.
    # TYPE coredns_cache_entries gauge
    coredns_cache_entries{server="dns://:53",type="denial",view="",zones="."} 1
    coredns_cache_entries{server="dns://:53",type="success",view="",zones="."} 0
    # HELP coredns_cache_misses_total The count of cache misses. Deprecated, derive misses from cache hits/requests counters.
    # TYPE coredns_cache_misses_total counter
    coredns_cache_misses_total{server="dns://:53",view="",zones="."} 1

    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system coredns
    Name:         coredns
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  <none>

    Events:  <none>

    (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/resolv.conf
    # Generated by NetworkManager
    nameserver 8.8.8.8
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get ds -n kube-system -owide
    NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                                SELECTOR
    kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   43m   kube-proxy   registry.k8s.io/kube-proxy:v1.32.11   k8s-app=kube-proxy
    (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -n kube-system -l k8s-app=kube-proxy -owide
    NAME               READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
    kube-proxy-9dpcs   1/1     Running   0          42m   192.168.10.100   k8s-ctr   <none>           <none>
    (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system kube-proxy
    Name:         kube-proxy
    Namespace:    kube-system
    Labels:       app=kube-proxy
    Annotations:  kubeadm.kubernetes.io/component-config.hash: sha256:cdf765c8ace05d9c91a233c33ad96de755530f97919a928be185843e99db7bd7

    Data
    ====
    config.conf:
    ----
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    bindAddressHardFail: false
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0

    ====

    Events:  <none>
    (⎈|HomeLab:default) root@k8s-ctr:~# curl 127.0.0.1:10249/healthz ; echo
    ok

    (⎈|HomeLab:default) root@k8s-ctr:~# dnf install -y conntrack-tools
    Last metadata expiration check: 0:37:12 ago on Thu 22 Jan 2026 12:03:57 AM KST.
    Dependencies resolved.
    ========================================================================================================================
     Package                              Architecture         Version                        Repository               Size
    ========================================================================================================================
    Installing:
     conntrack-tools                      x86_64               1.4.8-3.el10                   appstream               235 k
    Installing dependencies:
     libnetfilter_cthelper                x86_64               1.0.1-1.el10                   appstream                23 k
     libnetfilter_cttimeout               x86_64               1.0.0-27.el10                  appstream                23 k
     libnetfilter_queue                   x86_64               1.0.5-9.el10                   appstream                28 k

    Transaction Summary
    ========================================================================================================================
    Install  4 Packages

    ###### (snip) #######
    ```
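    The /metrics scrape above is plain Prometheus text, so quick extractions work with standard tools. A sketch pulling the CoreDNS version out of a captured sample line (no live cluster needed):

    ```shell
    # one captured line from the curl output above
    metrics='coredns_build_info{goversion="go1.21.11",revision="a6338e9",version="1.11.3"} 1'
    # greedy .* makes sed anchor on the last version="..." label
    echo "$metrics" | sed -n 's/.*version="\([^"]*\)".*/\1/p'   # prints: 1.11.3
    ```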
  2. Configure k8s-w1 and k8s-w2

    • Pre-setup

        Connect via SSH and apply the same settings as on the control plane
        PS C:\Users\bom\Desktop\스터디\week3> vagrant ssh k8s-w1
      
        This system is built by the Bento project by Chef Software
        More information can be found at https://github.com/chef/bento
      
        Use of this system is acceptance of the OS vendor EULA and License Agreements.
        vagrant@k8s-w1:~$ echo "sudo su -" >> /home/vagrant/.bashrc
        vagrant@k8s-w1:~$ sudo su -
        root@k8s-w1:~# timedatectl set-local-rtc 0
        root@k8s-w1:~# timedatectl set-timezone Asia/Seoul
        root@k8s-w1:~# setenforce 0
        root@k8s-w1:~# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
        root@k8s-w1:~# systemctl disable --now firewalld
        Removed '/etc/systemd/system/multi-user.target.wants/firewalld.service'.
        Removed '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'.
        root@k8s-w1:~# swapoff -a
        root@k8s-w1:~# sed -i '/swap/d' /etc/fstab
        root@k8s-w1:~# modprobe overlay
        root@k8s-w1:~# modprobe br_netfilter
        root@k8s-w1:~# cat <<EOF | tee /etc/modules-load.d/k8s.conf
        overlay
        br_netfilter
        EOF
        overlay
        br_netfilter
        root@k8s-w1:~# cat <<EOF | tee /etc/sysctl.d/k8s.conf
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        EOF
        net.bridge.bridge-nf-call-iptables  = 1
        net.bridge.bridge-nf-call-ip6tables = 1
        net.ipv4.ip_forward                 = 1
        root@k8s-w1:~# sysctl --system >/dev/null 2>&1
        root@k8s-w1:~# sed -i '/^127\.0\.\(1\|2\)\.1/d' /etc/hosts
        cat << EOF >> /etc/hosts
        192.168.10.100 k8s-ctr
        192.168.10.101 k8s-w1
        192.168.10.102 k8s-w2
        EOF
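        The sysctl fragment written above is worth sanity-checking before moving on. A sketch that parses a stand-in copy of /etc/sysctl.d/k8s.conf and confirms all three keys are set to 1 (on the live host the values would come from `sysctl -n` instead):

        ```shell
        CONF=$(mktemp)
        # stand-in copy of /etc/sysctl.d/k8s.conf
        printf '%s\n' \
          'net.bridge.bridge-nf-call-iptables  = 1' \
          'net.bridge.bridge-nf-call-ip6tables = 1' \
          'net.ipv4.ip_forward                 = 1' > "$CONF"
        # fail if any "key = value" line is not set to 1
        awk -F= '$2 + 0 != 1 { bad = 1 } END { exit bad }' "$CONF" \
          && echo "all three sysctls are 1"
        ```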
    • Install the CRI (containerd)

        root@k8s-w2:~# dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
        Adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
        root@k8s-w2:~# dnf install -y containerd.io-2.1.5-1.el10
        Docker CE Stable - x86_64                                                               282 kB/s |  16 kB     00:00
        Dependencies resolved.
        ========================================================================================================================
         Package                      Architecture          Version                       Repository                       Size
        ========================================================================================================================
        Installing:
         containerd.io                x86_64                2.1.5-1.el10                  docker-ce-stable                 34 M
      
        Transaction Summary
        ========================================================================================================================
        Install  1 Package
      
        root@k8s-w2:~# containerd config default | tee /etc/containerd/config.toml
        version = 3
        root = '/var/lib/containerd'
        state = '/run/containerd'
        temp = ''
        disabled_plugins = []
        required_plugins = []
        oom_score = 0
        imports = []
        ########### (snip) ###########
      
        root@k8s-w2:~# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
        root@k8s-w2:~# systemctl daemon-reload
        root@k8s-w2:~# systemctl enable --now containerd
        Created symlink '/etc/systemd/system/multi-user.target.wants/containerd.service' → '/usr/lib/systemd/system/containerd.service'.
      

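        The sed step above switches runc to the systemd cgroup driver, which must match the kubelet's cgroupDriver (systemd is kubeadm's default). A sketch of the same edit on a stand-in file — the TOML section name is an assumption modeled on containerd 2.x's default config, not copied from the live host:

        ```shell
        TOML=$(mktemp)
        # stand-in fragment of /etc/containerd/config.toml
        printf '%s\n' \
          "[plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]" \
          '  SystemdCgroup = false' > "$TOML"
        sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' "$TOML"
        grep 'SystemdCgroup' "$TOML"   # prints:   SystemdCgroup = true
        ```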

    • Install kubeadm, kubelet, and kubectl

        root@k8s-w2:~# cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        EOF
        [kubernetes]
        name=Kubernetes
        baseurl=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/
        enabled=1
        gpgcheck=1
        gpgkey=https://pkgs.k8s.io/core:/stable:/v1.32/rpm/repodata/repomd.xml.key
        exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
        root@k8s-w2:~# dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
        Kubernetes                                                                               38 kB/s |  19 kB     00:00
        Dependencies resolved.
        ========================================================================================================================
         Package                       Architecture          Version                            Repository                 Size
        ========================================================================================================================
        Installing:
         kubeadm                       x86_64                1.32.11-150500.1.1                 kubernetes                 12 M
         kubectl                       x86_64                1.32.11-150500.1.1                 kubernetes                 11 M
         kubelet                       x86_64                1.32.11-150500.1.1                 kubernetes                 15 M
        Installing dependencies:
         cri-tools                     x86_64                1.32.0-150500.1.1                  kubernetes                7.1 M
         kubernetes-cni                x86_64                1.6.0-150500.1.1                   kubernetes                8.0 M
      
        Transaction Summary
        ========================================================================================================================
        Install  5 Packages
      
        root@k8s-w2:~# systemctl enable --now kubelet
        Created symlink '/etc/systemd/system/multi-user.target.wants/kubelet.service' → '/usr/lib/systemd/system/kubelet.service'.
        root@k8s-w2:~# cat << EOF > /etc/crictl.yaml
        runtime-endpoint: unix:///run/containerd/containerd.sock
        image-endpoint: unix:///run/containerd/containerd.sock
        EOF
    • kubeadm join (worker node)

        root@k8s-w2:~# crictl images
        crictl ps
        cat /etc/sysconfig/kubelet
        tree /etc/kubernetes  | tee -a etc_kubernetes-1.txt
        tree /var/lib/kubelet | tee -a var_lib_kubelet-1.txt
        tree /run/containerd/ -L 3 | tee -a run_containerd-1.txt
        pstree -alnp | tee -a pstree-1.txt
        systemd-cgls --no-pager | tee -a systemd-cgls-1.txt
        lsns | tee -a lsns-1.txt
        ip addr | tee -a ip_addr-1.txt
        ss -tnlp | tee -a ss-1.txt
        df -hT | tee -a df-1.txt
        findmnt | tee -a findmnt-1.txt
        sysctl -a | tee -a sysctl-1.txt
      
        root@k8s-w2:~# NODEIP=$(ip -4 addr show enp0s8 | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
        root@k8s-w2:~# cat << EOF > kubeadm-join.yaml
        apiVersion: kubeadm.k8s.io/v1beta4
        kind: JoinConfiguration
        discovery:
          bootstrapToken:
            token: "123456.1234567890123456"
            apiServerEndpoint: "192.168.10.100:6443"
            unsafeSkipCAVerification: true
        nodeRegistration:
          criSocket: "unix:///run/containerd/containerd.sock"
          kubeletExtraArgs:
            - name: node-ip
              value: "$NODEIP"
        EOF
      
        root@k8s-w2:~# kubeadm join --config="kubeadm-join.yaml"
        [preflight] Running pre-flight checks
        [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
        [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
        [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
        [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
        [kubelet-start] Starting the kubelet
        [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
        [kubelet-check] The kubelet is healthy after 505.002436ms
        [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
      
        This node has joined the cluster:
        * Certificate signing request was sent to apiserver and a response was received.
        * The Kubelet was informed of the new secure connection details.
      
        Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
      
        root@k8s-w2:~# curl -s -k https://192.168.10.100:6443/api/v1/namespaces/kube-public/configmaps/cluster-info | jq
        {
          "kind": "ConfigMap",
          "apiVersion": "v1",
          "metadata": {
            "name": "cluster-info",
            "namespace": "kube-public",
            "uid": "4f8df4c9-5dc4-4734-b1b7-e5a803feab7f",
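      For anything beyond a lab, a fresh token plus CA verification is preferable to the fixed token and `unsafeSkipCAVerification: true` used above. A hedged sketch using standard kubeadm/openssl commands (run on the control plane):

      ```shell
      # Print a ready-made join command containing a new token and the CA cert hash
      kubeadm token create --print-join-command

      # The discovery CA hash can also be computed by hand from the cluster CA
      openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'
      ```

      With that hash, the JoinConfiguration can carry `caCertHashes: ["sha256:<hash>"]` instead of `unsafeSkipCAVerification: true`.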
    • Verify k8s-w1/w2 node information

        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get node -owide
      
        NAME      STATUS   ROLES           AGE   VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION                 CONTAINER-RUNTIME
        k8s-ctr   Ready    control-plane   2d    v1.32.11   192.168.10.100   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5
        k8s-w1    Ready    <none>          58s   v1.32.11   192.168.10.101   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5
        k8s-w2    Ready    <none>          53s   v1.32.11   192.168.10.102   <none>        Rocky Linux 10.0 (Red Quartz)   6.12.0-55.39.1.el10_0.x86_64   containerd://2.1.5
      
        (⎈|HomeLab:default) root@k8s-ctr:~# kc describe node k8s-w2
        Name:               k8s-w2
        Roles:              <none>
        Labels:             beta.kubernetes.io/arch=amd64
                            beta.kubernetes.io/os=linux
                            kubernetes.io/arch=amd64
                            kubernetes.io/hostname=k8s-w2
                            kubernetes.io/os=linux
        Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"d2:83:ae:e6:6e:a0"}
                            flannel.alpha.coreos.com/backend-type: vxlan
                            flannel.alpha.coreos.com/kube-subnet-manager: true
                            flannel.alpha.coreos.com/public-ip: 192.168.10.102
                            kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                            node.alpha.kubernetes.io/ttl: 0
                            volumes.kubernetes.io/controller-managed-attach-detach: true
        CreationTimestamp:  Fri, 23 Jan 2026 23:56:42 +0900
        Taints:             <none>
  2. Install monitoring tools

    • Install metrics-server

        (⎈|HomeLab:default) root@k8s-ctr:~# helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
        "metrics-server" has been added to your repositories
        (⎈|HomeLab:default) root@k8s-ctr:~# helm upgrade --install metrics-server metrics-server/metrics-server --set 'args[0]=--kubelet-insecure-tls' -n kube-system
        Release "metrics-server" does not exist. Installing it now.
        NAME: metrics-server
        LAST DEPLOYED: Sat Jan 24 00:14:07 2026
        NAMESPACE: kube-system
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        NOTES:
        ***********************************************************************
        * Metrics Server                                                      *
        ***********************************************************************
          Chart version: 3.13.0
          App version:   0.8.0
          Image tag:     registry.k8s.io/metrics-server/metrics-server:v0.8.0
        ***********************************************************************
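        Once the chart is deployed, the resource metrics API should come up within a minute or so; a quick verification sketch:

        ```shell
        # Wait for the metrics-server Deployment, then query node and pod metrics
        kubectl -n kube-system rollout status deploy/metrics-server --timeout=120s
        kubectl top nodes
        kubectl top pods -n kube-system
        ```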
    • Install kube-prometheus-stack

        (⎈|HomeLab:default) root@k8s-ctr:~# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
      
        (⎈|HomeLab:default) root@k8s-ctr:~# helm list -n monitoring
        NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
        kube-prometheus-stack   monitoring      1               2026-01-24 00:14:59.530116612 +0900 KST deployed        kube-prometheus-stack-80.13.3   v0.87.1
        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod,svc,ingress,pvc -n monitoring
        NAME                                                            READY   STATUS              RESTARTS   AGE
        pod/kube-prometheus-stack-grafana-5cb7c586f9-7ntdf              0/3     ContainerCreating   0          18s
        pod/kube-prometheus-stack-kube-state-metrics-7846957b5b-gjccp   0/1     Running             0          18s
        pod/kube-prometheus-stack-operator-584f446c98-nsm8c             0/1     ContainerCreating   0          18s
        pod/kube-prometheus-stack-prometheus-node-exporter-p7j45        1/1     Running             0          18s
        pod/kube-prometheus-stack-prometheus-node-exporter-slqhj        1/1     Running             0          18s
      
        NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
        service/kube-prometheus-stack-alertmanager               ClusterIP   10.96.42.69     <none>        9093/TCP,8080/TCP               18s
        service/kube-prometheus-stack-grafana                    NodePort    10.96.172.144   <none>        80:30002/TCP                    18s
        service/kube-prometheus-stack-kube-state-metrics         ClusterIP   10.96.34.132    <none>        8080/TCP                        18s
        service/kube-prometheus-stack-operator                   ClusterIP   10.96.116.217   <none>        443/TCP                         18s
        service/kube-prometheus-stack-prometheus                 NodePort    10.96.30.242    <none>        9090:30001/TCP,8080:30485/TCP   18s
        service/kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.96.83.43     <none>        9100/TCP  
      
        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl exec -it -n monitoring deploy/kube-prometheus-stack-grafana -- grafana --version
        grafana version 12.3.1
        (⎈|HomeLab:default) root@k8s-ctr:~# kubectl exec -it sts/prometheus-kube-prometheus-stack-prometheus -n monitoring -c prometheus -- prometheus --version
        prometheus, version 3.9.1 (branch: HEAD, revision: 9ec59baffb547e24f1468a53eb82901e58feabd8)
          build user:       root@61c3a9212c9e
          build date:       20260107-16:08:09
          go version:       go1.25.5
          platform:         linux/amd64
          tags:             netgo,builtinassets
      


    • Check the Kubernetes dashboards

      (screenshot: Kubernetes dashboard in Grafana)

    • Install the certificate exporter and set up dashboards

        (⎈|HomeLab:default) root@k8s-ctr:~# cat << EOF > cert-export-values.yaml
        # -- hostPaths Exporter
        hostPathsExporter:
          hostPathVolumeType: Directory
      
          daemonSets:
            cp:
              nodeSelector:
                node-role.kubernetes.io/control-plane: ""
              tolerations:
              - effect: NoSchedule
                key: node-role.kubernetes.io/control-plane
                operator: Exists
        EOF
      
        (⎈|HomeLab:default) root@k8s-ctr:~# helm install x509-certificate-exporter enix/x509-certificate-exporter -n monitoring --values cert-export-values.yaml
        NAME: x509-certificate-exporter
        LAST DEPLOYED: Sat Jan 24 00:34:37 2026
        NAMESPACE: monitoring
        STATUS: deployed
        REVISION: 1
        TEST SUITE: None
        (⎈|HomeLab:default) root@k8s-ctr:~# helm list -n monitoring
        NAME                            NAMESPACE       REVISION        UPDATED                                 STATUS         CHART                            APP VERSION
        kube-prometheus-stack           monitoring      1               2026-01-24 00:14:59.530116612 +0900 KST deployed       kube-prometheus-stack-80.13.3    v0.87.1
        x509-certificate-exporter       monitoring      1               2026-01-24 00:34:37.079222386 +0900 KST deployed       x509-certificate-exporter-3.19.1 3.19.1
      
        (⎈|HomeLab:default) root@k8s-ctr:~# curl -s 10.244.0.4:9793/metrics | grep '^x509' | head -n 3
        x509_cert_expired{filename="apiserver-etcd-client.crt",filepath="/etc/kubernetes/pki/apiserver-etcd-client.crt",issuer_CN="etcd-ca",serial_number="5085519134918927718",subject_CN="kube-apiserver-etcd-client"} 0
        x509_cert_expired{filename="apiserver.crt",filepath="/etc/kubernetes/pki/apiserver.crt",issuer_CN="kubernetes",serial_number="8664196532623716359",subject_CN="kube-apiserver"} 0
        x509_cert_expired{filename="ca.crt",filepath="/etc/kubernetes/pki/ca.crt",issuer_CN="kubernetes",serial_number="303979118069449790",subject_CN="kubernetes"} 0
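        Besides `x509_cert_expired`, the exporter publishes `x509_cert_not_after` (expiry as a Unix timestamp), which is what alerting rules usually key on. A sketch querying it through the Prometheus NodePort (30001) created above:

        ```shell
        # Seconds remaining until each certificate expires (negative = already expired)
        curl -sG 'http://192.168.10.100:30001/api/v1/query' \
          --data-urlencode 'query=x509_cert_not_after - time()' \
          | jq -r '.data.result[] | "\(.metric.filename)\t\(.value[1])"'
        ```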
      


  3. Certificate renewal

     (⎈|HomeLab:default) root@k8s-ctr:~# kc describe cm -n kube-system kubeadm-config | grep -i cert
     caCertificateValidityPeriod: 87600h0m0s
     certificateValidityPeriod: 8760h0m0s
    
     (⎈|HomeLab:default) root@k8s-ctr:~# kubeadm certs check-expiration -v 6
    
     (⎈|HomeLab:default) root@k8s-ctr:~# cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
     Certificate:
         Data:
             Version: 3 (0x2)
             Serial Number: 9019049356910942135 (0x7d2a199aea6457b7)
             Signature Algorithm: sha256WithRSAEncryption
             Issuer: CN=kubernetes
             Validity
                 Not Before: Jan 24 00:18:08 2026 GMT
                 Not After : Jan 24 00:23:08 2027 GMT
             Subject: CN=kube-apiserver
     ==================== ...(snip)... ===============================
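     Checking expiration is only half the job; the renewal itself is a single kubeadm command, but the static pods keep serving the old certificates until they are restarted. A sketch:

     ```shell
     # Renew all kubeadm-managed certificates (a single one can be named instead, e.g. `apiserver`)
     kubeadm certs renew all

     # Restart the control-plane static pods so they load the new certs:
     # temporarily move the manifests out of the kubelet's watched directory
     mkdir -p /tmp/manifests
     mv /etc/kubernetes/manifests/*.yaml /tmp/manifests/ && sleep 20
     mv /tmp/manifests/*.yaml /etc/kubernetes/manifests/

     # Verify the new expiration dates
     kubeadm certs check-expiration
     ```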

K8S Upgrade by kubeadm

  • Kubernetes releases three minor versions per year and provides patch support for the three most recent minor versions.

Version skew policy

  1. In an HA cluster, the lowest kube-apiserver version is the baseline for all skew rules.
  2. kube-apiserver instances (HA) may only differ by one minor version (N / N-1); upgrades always start with the apiserver.
  3. kubelet / kube-proxy must not be newer than the apiserver and may be up to 3 minor versions older.
  4. kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the apiserver and may be at most 1 minor version older.
  5. kubectl is supported within ±1 minor version of the apiserver.
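These rules reduce to arithmetic on minor versions. A small sketch (the version numbers here are hypothetical) of the kubelet rule:

```shell
# kubelet may equal the apiserver's minor version or be up to 3 minors older
apiserver_minor=34   # e.g. kube-apiserver v1.34.x
kubelet_minor=31     # e.g. kubelet v1.31.x

if [ "$kubelet_minor" -le "$apiserver_minor" ] \
   && [ $((apiserver_minor - kubelet_minor)) -le 3 ]; then
  echo "kubelet v1.${kubelet_minor} is within the supported skew of v1.${apiserver_minor}"
else
  echo "unsupported skew"
fi
```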
Deploy the lab environment

    C:\Users\bom\Desktop\스터디\upgrade_week3>vagrant up
    Bringing machine 'k8s-ctr' up with 'virtualbox' provider...
    Bringing machine 'k8s-w1' up with 'virtualbox' provider...
    Bringing machine 'k8s-w2' up with 'virtualbox' provider...
  1. Preparation

    • Check kube-prometheus-stack component versions

        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it sts/prometheus-kube-prometheus-stack-prometheus -n monitoring -c prometheus -- prometheus --version
        prometheus, version 3.9.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -it -n monitoring deploy/kube-prometheus-stack-grafana -- grafana --version
        grafana version 12.3.1
    • etcd backup

        ## etcd backup
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl images | grep etcd
        registry.k8s.io/etcd                      3.5.24-0            8cb12dd0c3e42       23.7MB
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl exec -n kube-system etcd-k8s-ctr -- etcdctl version
        etcdctl version: 3.5.24
        API version: 3.5
        (⎈|HomeLab:N/A) root@k8s-ctr:~# ETCD_VER=3.5.24
        (⎈|HomeLab:N/A) root@k8s-ctr:~# ARCH=amd64
        (⎈|HomeLab:N/A) root@k8s-ctr:~#
        (⎈|HomeLab:N/A) root@k8s-ctr:~# curl -L https://github.com/etcd-io/etcd/releases/download/v${ETCD_VER}/etcd-v${ETCD_VER}-linux-${ARCH}.tar.gz -o /tmp/etcd-v${ETCD_VER}.tar.gz
          % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                         Dload  Upload   Total   Spent    Left  Speed
          0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
        100 21.3M  100 21.3M    0     0  17.4M      0  0:00:01  0:00:01 --:--:-- 17.4M
        (⎈|HomeLab:N/A) root@k8s-ctr:~# mkdir -p /tmp/etcd-download
        (⎈|HomeLab:N/A) root@k8s-ctr:~# tar xzvf /tmp/etcd-v${ETCD_VER}.tar.gz -C /tmp/etcd-download --strip-components=1
        etcd-v3.5.24-linux-amd64/Documentation/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/v3lock.swagger.json
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/v3election.swagger.json
        etcd-v3.5.24-linux-amd64/Documentation/dev-guide/apispec/swagger/rpc.swagger.json
        etcd-v3.5.24-linux-amd64/Documentation/README.md
        etcd-v3.5.24-linux-amd64/README-etcdutl.md
        etcd-v3.5.24-linux-amd64/READMEv2-etcdctl.md
        etcd-v3.5.24-linux-amd64/README-etcdctl.md
        etcd-v3.5.24-linux-amd64/README.md
        etcd-v3.5.24-linux-amd64/etcdutl
        etcd-v3.5.24-linux-amd64/etcdctl
        etcd-v3.5.24-linux-amd64/etcd
        (⎈|HomeLab:N/A) root@k8s-ctr:~# mv /tmp/etcd-download/etcdctl /usr/local/bin/
        mv /tmp/etcd-download/etcdutl /usr/local/bin/
        chown root:root /usr/local/bin/etcdctl
        chown root:root /usr/local/bin/etcdutl
        (⎈|HomeLab:N/A) root@k8s-ctr:~# etcdctl version
        etcdctl version: 3.5.24
        API version: 3.5
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# etcdctl snapshot save /backup/etcd-snapshot-$(date +%F).db
        {"level":"info","ts":"2026-01-24T01:04:27.603103+0900","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/backup/etcd-snapshot-2026-01-24.db.part"}
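    A backup is only as good as its restorability; `etcdutl` (installed above) can inspect the snapshot and restore it into a fresh data directory. A sketch:

    ```shell
    # Inspect the snapshot: hash, revision, total keys, size
    etcdutl snapshot status /backup/etcd-snapshot-$(date +%F).db -w table

    # Restore sketch: writes into a NEW data dir, never the live /var/lib/etcd
    etcdutl snapshot restore /backup/etcd-snapshot-$(date +%F).db \
      --data-dir /var/lib/etcd-restored
    ```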
  2. Flannel CNI upgrade

    • Image download

        ⎈|HomeLab:N/A) root@k8s-ctr:~# crictl images | grep flannel
        ghcr.io/flannel-io/flannel-cni-plugin     v1.7.1-flannel1     cca2af40a4a9e       4.88MB
        ghcr.io/flannel-io/flannel                v0.27.3             5de71980e553f       34MB
        ghcr.io/flannel-io/flannel                v0.27.4             e83704a177312       34.1MB
    • Helm upgrade

      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# cat << EOF > flannel.yaml
        podCidr: "10.244.0.0/16"
        flannel:
          cniBinDir: "/opt/cni/bin"
          cniConfDir: "/etc/cni/net.d"
          args:
          - "--ip-masq"
          - "--kube-subnet-mgr"
          - "--iface=enp0s9"
          backend: "vxlan"
        image:
          tag: v0.27.4
        EOF
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# helm upgrade flannel flannel/flannel -n kube-flannel -f flannel.yaml --version 0.27.4
        Release "flannel" has been upgraded. Happy Helming!
        NAME: flannel
        LAST DEPLOYED: Sat Jan 24 01:07:43 2026
        NAMESPACE: kube-flannel
        STATUS: deployed
        REVISION: 2
        TEST SUITE: None
  3. Rocky Linux OS minor-version upgrade

    • Upgrade

        (⎈|HomeLab:N/A) root@k8s-ctr:~# rpm -q containerd.io
        containerd.io-2.1.5-1.el10.aarch64
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y 'dnf-command(versionlock)'
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf versionlock add containerd.io
        Adding versionlock on: containerd.io-0:2.1.5-1.el10.*
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf versionlock list
        containerd.io-0:2.1.5-1.el10.*
        -------------------------------------------------------------
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf -y update
        Running scriptlet: kernel-modules-core-6.12.0-124.27.1.el10_1.aarch64                                           584/584 
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# reboot
        (⎈|HomeLab:N/A) root@k8s-ctr:~# ping 192.168.10.100
        64 bytes from 192.168.10.100: icmp_seq=47 ttl=64 time=0.454 ms
        64 bytes from 192.168.10.100: icmp_seq=48 ttl=64 time=0.532 ms
        Request timeout for icmp_seq 49                                 # ping fails while the node reboots
        Request timeout for icmp_seq 50
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubectl get pod -A -owide | grep k8s-ctr
        default        curl-pod                                                    1/1     Running   1 (2m25s ago)   61m    10.244.0.3       k8s-ctr   <none>           <none>
        kube-flannel   kube-flannel-ds-f2572                                       1/1     Running   1 (2m25s ago)   15m    192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    coredns-668d6bf9bc-ctkmb                                    1/1     Running   1 (2m25s ago)   142m   10.244.0.2       k8s-ctr   <none>           <none>
        kube-system    etcd-k8s-ctr                                                1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-apiserver-k8s-ctr                                      1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-controller-manager-k8s-ctr                             1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-proxy-wwd9c                                            1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        kube-system    kube-scheduler-k8s-ctr                                      1/1     Running   1 (2m25s ago)   142m   192.168.10.100   k8s-ctr   <none>           <none>
        monitoring     kube-prometheus-stack-prometheus-node-exporter-6k5rk        1/1     Running   1 (2m25s ago)   36m    192.168.10.100   k8s-ctr   <none>           <none>
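        The reboot above was done without draining, which is acceptable in a single-control-plane lab; on a cluster with real workloads it is safer to cordon and drain first. A sketch for a worker node:

        ```shell
        # Before the OS update/reboot: evict pods and mark the node unschedulable
        kubectl drain k8s-w1 --ignore-daemonsets --delete-emptydir-data

        # ...on the node: dnf -y update && reboot...

        # After the node is Ready again
        kubectl uncordon k8s-w1
        ```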
      
  4. Upgrade kubeadm, kubelet, and kubectl

    • Considerations (upgrade order)

      • Upgrade kubeadm
      • Upgrade kubelet / kubectl
      • Restart kubelet
    • Note on containerd

      • containerd does not need to be restarted
    • Performing the upgrade

        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf list --showduplicates kubeadm --disableexcludes=kubernetes
        Installed Packages
        kubeadm.aarch64                                       1.32.11-150500.1.1                                       @kubernetes
        Available Packages
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y --disableexcludes=kubernetes kubeadm-1.33.7-150500.1.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade plan
        [upgrade/versions] Target version: v1.33.7
        [upgrade/versions] Latest version in the v1.32 series: v1.32.11
      
        - verify the etcd schema
        - replace the kube-apiserver / controller-manager / scheduler static pods
        - upgrade CoreDNS
        - upgrade kube-proxy
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm config images pull
        [config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.7
        [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.7
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/kube-proxy:v1.33.7
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/coredns/coredns:v1.12.0
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade apply v1.33.7
        [upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
        [upgrade] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
        [upgrade/preflight] Running preflight checks
        [upgrade] Running cluster health checks
        [upgrade/preflight] You have chosen to upgrade the cluster version to "v1.33.7"
        [upgrade/versions] Cluster version: v1.32.11
        [upgrade/versions] kubeadm version: v1.33.7
        [upgrade] Are you sure you want to proceed? [y/N]: y
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y --disableexcludes=kubernetes kubeadm-1.34.3-150500.1.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade plan
        [upgrade/versions] Target version: v1.34.3
        [upgrade/versions] Latest version in the v1.33 series: v1.33.7
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/kube-proxy:v1.34.3
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/coredns/coredns:v1.12.1
        (⎈|HomeLab:N/A) root@k8s-ctr:~# crictl pull registry.k8s.io/pause:3.10.1
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# kubeadm upgrade apply v1.34.3 --yes
        [upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
        [upgrade] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
        [upgrade/preflight] Running preflight checks
        [upgrade] Running cluster health checks
      
        (⎈|HomeLab:N/A) root@k8s-ctr:~# dnf install -y --disableexcludes=kubernetes kubelet-1.34.3-150500.1.1 kubectl-1.34.3-150500.1.1
        Upgrading:
         kubectl                  aarch64                  1.33.7-150500.1.1                    kubernetes                  9.7 M
         kubelet                  aarch64                  1.33.7-150500.1.1
      
         (⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl daemon-reload
         (⎈|HomeLab:N/A) root@k8s-ctr:~# systemctl restart kubelet
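         After restarting the kubelet, it is worth confirming that every component reports the target version:

         ```shell
         # Nodes should report the upgraded kubelet version in the VERSION column
         kubectl get nodes -o wide

         # Client/server and kubeadm versions
         kubectl version
         kubeadm version
         ```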


Lab environment setup

  • Deploy the lab environment
  # Download the files
  /drives/c/Users/bom/Desktop/스터디/ansible
  curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/ansible/Vagrantfile
    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
  100  2458  100  2458    0     0   8081      0 --:--:-- --:--:-- --:--:--  8112

  curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/ansible/init_cfg.sh
    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
  100  1072  100  1072    0     0   3975      0 --:--:-- --:--:-- --:--:--  3985

  curl -O https://raw.githubusercontent.com/gasida/vagrant-lab/refs/heads/main/ansible/init_cfg2.sh
    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                   Dload  Upload   Total   Spent    Left  Speed
  100  1074  100  1074    0     0   3781      0 --:--:-- --:--:-- --:--:--  3795

# After the files are downloaded, bring the VMs up
PS C:\Users\bom\Desktop\스터디\ansible> vagrant up
Bringing machine 'server' up with 'virtualbox' provider...
Bringing machine 'tnode1' up with 'virtualbox' provider...
Bringing machine 'tnode2' up with 'virtualbox' provider...
Bringing machine 'tnode3' up with 'virtualbox' provider...
==> server: Preparing master VM for linked clones...
    server: This is a one time operation. Once the master VM is prepared,
    server: it will be used as a base for linked clones, making the creation
    server: of new VMs take milliseconds on a modern system.
==> server: Importing base box 'bento/ubuntu-24.04'...
==> server: Cloning VM...
==> server: Matching MAC address for NAT networking...
==> server: Checking if box 'bento/ubuntu-24.04' version '202510.26.0' is up to date...
==> server: Setting the name of the VM: server
==> server: Clearing any previously set network interfaces...
==> server: Preparing network interfaces based on configuration...
    server: Adapter 1: nat
    server: Adapter 2: hostonly
==> server: Forwarding ports...
    server: 22 (guest) => 60000 (host) (adapter 1)
==> server: Running 'pre-boot' VM customizations...
==> server: Booting VM...
==> server: Waiting for machine to boot. This may take a few minutes...
#############################중략######################################
  • Check basic information

      root@server:~# hostnamectl
       Static hostname: server
             Icon name: computer-vm
               Chassis: vm 🖴
            Machine ID: 47e27b291d4344f995b31c7bd9d79f92
               Boot ID: 48aba56a31cf4d90b8a573895da77ea2
        Virtualization: oracle
      Operating System: Ubuntu 24.04.3 LTS
                Kernel: Linux 6.8.0-86-generic
          Architecture: x86-64
       Hardware Vendor: innotek GmbH
        Hardware Model: VirtualBox
      Firmware Version: VirtualBox
         Firmware Date: Fri 2006-12-01
          Firmware Age: 19y 1month 1w 6d
    
      # connectivity check
      root@server:~# for i in {1..3}; do ping -c 1 tnode$i; done
      PING tnode1 (10.10.1.11) 56(84) bytes of data.
      64 bytes from tnode1 (10.10.1.11): icmp_seq=1 ttl=64 time=1.12 ms
    
      --- tnode1 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 1.116/1.116/1.116/0.000 ms
      PING tnode2 (10.10.1.12) 56(84) bytes of data.
      64 bytes from tnode2 (10.10.1.12): icmp_seq=1 ttl=64 time=2.13 ms
    
      --- tnode2 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 2.134/2.134/2.134/0.000 ms
      PING tnode3 (10.10.1.13) 56(84) bytes of data.
      64 bytes from tnode3 (10.10.1.13): icmp_seq=1 ttl=64 time=1.52 ms
    
      --- tnode3 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 1.519/1.519/1.519/0.000 ms

Ansible overview

  • What is Ansible? An open-source IT automation tool based on Infrastructure as Code (IaC) that lets you manage infrastructure through code.

    • Its key characteristics: it is agentless (no agent is installed on managed hosts), idempotent, and ships with a wide range of modules.
  • Ansible components:

    • Control node: the node where Ansible is installed; any Linux machine can serve as a control node.

    • Managed node: a remote system or host that Ansible controls. Since there is no agent, it must be reachable over SSH and have Python installed.

    • Inventory: a file listing the managed nodes that the control node manages.

    • Module: when Ansible performs a task on a managed node, it connects over SSH and pushes the module appropriate for the task.

    • Playbook: a YAML file describing, in order, the tasks to run on managed nodes.
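    The module-push model is easiest to see with ad-hoc commands, once the inventory file built later in this post exists:

    ```shell
    # Push the `ping` module to every host over SSH (no agent needed)
    ansible all -i inventory -m ping

    # Run a one-off command on the web group via the `command` module
    ansible web -i inventory -m command -a "uptime"
    ```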

Installing Ansible

  • Install Ansible on the ansible server

      root@server:~# apt install software-properties-common -y
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      software-properties-common is already the newest version (0.99.49.3).
      0 upgraded, 0 newly installed, 0 to remove and 89 not upgraded.
      root@server:~# add-apt-repository --yes --update ppa:ansible/ansible
      Repository: 'Types: deb
      URIs: https://ppa.launchpadcontent.net/ansible/ansible/ubuntu/
      Suites: noble
      Components: main
      '
      Description:
      Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.
    
      http://ansible.com/
    
      If you face any issues while installing Ansible PPA, file an issue here:
      https://github.com/ansible-community/ppa/issues
      More info: https://launchpad.net/~ansible/+archive/ubuntu/ansible
      Adding repository.
      Get:1 http://security.ubuntu.com/ubuntu noble-security InRelease [126 kB]
      Get:2 https://ppa.launchpadcontent.net/ansible/ansible/ubuntu noble InRelease [17.8 kB]
      Hit:3 http://us.archive.ubuntu.com/ubuntu noble InRelease
      Get:4 https://ppa.launchpadcontent.net/ansible/ansible/ubuntu noble/main amd64 Packages [772 B]
      Hit:5 http://us.archive.ubuntu.com/ubuntu noble-updates InRelease
      Get:6 https://ppa.launchpadcontent.net/ansible/ansible/ubuntu noble/main Translation-en [472 B]
      Hit:7 http://us.archive.ubuntu.com/ubuntu noble-backports InRelease
      Fetched 145 kB in 2s (69.9 kB/s)
      Reading package lists... Done
      root@server:~# apt install ansible -y
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      ##################중략################################
    
      root@server:~# which ansible
      /usr/bin/ansible
      root@server:~# mkdir my-ansible
      root@server:~# cd my-ansible/
      root@server:~/my-ansible#
  • Configure SSH authentication for Ansible access

      root@server:~/my-ansible# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
      Generating public/private rsa key pair.
      Your identification has been saved in /root/.ssh/id_rsa
      Your public key has been saved in /root/.ssh/id_rsa.pub
      The key fingerprint is:
      SHA256:6c84qKYYcxZjFJPTMg03FVUF3/2kTXqKh77YBKp2qx8 root@server
      The key's randomart image is:
      +---[RSA 3072]----+
      |  +=o.oo..oo.    |
      |  =+o.     . . . |
      |  .+        . . +|
      | .       .     *.|
      |  +     S .   o +|
      | . o   . . . o o |
      |o o    .E   + o  |
      | *  . o.o= = .   |
      |. .o.oo++o+ +.   |
      +----[SHA256]-----+
      root@server:~/my-ansible# for i in {1..3}; do sshpass -p 'qwe123' ssh-copy-id -o StrictHostKeyChecking=no root@tnode$i; done
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i hostname; echo; done
      >> tnode1 <<
      tnode1
      >> tnode2 <<
      tnode2
      >> tnode3 <<
      tnode3
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i python3 -V; echo; done
      >> tnode1 <<
      Python 3.12.3
    
      >> tnode2 <<
      Python 3.12.3
    
      >> tnode3 <<
      Python 3.9.21
  • VS Code SSH connection

      # Read more about SSH config files: https://linux.die.net/man/5/ssh_config
      Host ansible-server
          HostName 10.10.1.10
          User root   
    


Host selection

  • Managing automation targets with an inventory

    An inventory file is a text file that specifies the managed nodes Ansible targets for automation.

    It can be written in several formats, including INI and YAML.

  • Ways to build an inventory file

    • Host definitions: list the servers to manage by IP address or hostname to form the base list.
    • Grouping: square brackets ([group]) group hosts by role (web, DB, etc.) for easier management.
    • Flexible composition: one host can belong to several groups, allowing classification by location, environment (prod/dev), and so on.
    • Nested groups: the :children suffix defines a parent group that contains existing groups as children.
    • Ranges: syntax such as [1:10] expresses similarly named hosts concisely on a single line.
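    The grouping, range, and :children features above can be combined in one INI file. A sketch (the hostnames are hypothetical, and the file is written under /tmp only to show the syntax):

```shell
# Demonstrate [group], the [1:3] range shorthand, and :children nesting
# in a single INI inventory (example hostnames, not the lab's tnodes).
cat <<'EOT' > /tmp/inventory-example
[web]
web[1:3].example.com

[db]
db1.example.com

[datacenter:children]
web
db
EOT
cat /tmp/inventory-example
```

    Running `ansible-inventory -i /tmp/inventory-example --list` should show the range expanded into web1..web3 under the web group.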
  • Inventory groups for the lab

      root@server:~/my-ansible# cat <<EOT > inventory
      [web]
      tnode1
      tnode2
    
      [db]
      tnode3
    
      [all:children]
      web
      db
      EOT
      root@server:~/my-ansible# ansible-inventory -i ./inventory --list | jq
      {
        "_meta": {
          "hostvars": {},
          "profile": "inventory_legacy"
        },
        "all": {
          "children": [
            "ungrouped",
            "web",
            "db"
          ]
        },
        "db": {
          "hosts": [
            "tnode3"
          ]
        },
        "web": {
          "hosts": [
            "tnode1",
            "tnode2"
          ]
        }
      }
      root@server:~/my-ansible# 

    (Note) ansible.cfg lookup order (highest priority first):

    1. ANSIBLE_CONFIG (environment variable if set)
    2. ansible.cfg (in the current directory)
    3. ~/.ansible.cfg (in the home directory)
    4. /etc/ansible/ansible.cfg
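    The rule is simply "first existing file wins". A small shell sketch of that lookup (an illustration only; this helper is not part of Ansible):

```shell
# Mimic ansible.cfg resolution: ANSIBLE_CONFIG env var, then ./ansible.cfg,
# then ~/.ansible.cfg, then /etc/ansible/ansible.cfg; else built-in defaults.
find_cfg() {
  for f in "${ANSIBLE_CONFIG:-}" ./ansible.cfg "$HOME/.ansible.cfg" /etc/ansible/ansible.cfg; do
    [ -n "$f" ] && [ -f "$f" ] && { echo "$f"; return; }
  done
  echo "built-in defaults"
}

unset ANSIBLE_CONFIG                 # start from a clean environment
mkdir -p /tmp/cfg-demo && cd /tmp/cfg-demo && touch ansible.cfg
chosen=$(find_cfg)                   # the cfg in the current directory is picked
via_env=$(ANSIBLE_CONFIG=/tmp/cfg-demo/ansible.cfg find_cfg)  # env var overrides it
echo "$chosen"
echo "$via_env"
```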

Writing playbooks

  • Why configure Ansible

    • Configuration layout: ansible.cfg is divided into [defaults] and [privilege_escalation] sections that determine how Ansible behaves.
    • Basic connection settings: [defaults] defines the core connection details, such as the inventory path, the remote user account, and whether to prompt for an SSH password.
    • Privilege escalation: [privilege_escalation] configures how root privileges are obtained automatically after connecting, e.g. via sudo.
    • Efficiency: with a configuration file you get a consistent automation environment without retyping the same options on every run.
  • Ansible configuration for the lab

      root@server:~/my-ansible# cat <<EOT > ansible.cfg
      [defaults]
      inventory = ./inventory
      remote_user = root
      ask_pass = false

      [privilege_escalation]
      become = true
      become_method = sudo
      become_user = root
      become_ask_pass = false
      EOT
  • Writing a playbook

    • Write the playbook and run a syntax check

      vi my-ansible/first-playbook.yml
      ---
      - hosts: all
        tasks:
        - name: Print message
          debug:
            msg: Hello CloudNet@ Ansible Study

      root@server:~/my-ansible# ansible-playbook --syntax-check first-playbook.yml

      playbook: first-playbook.yml

      root@server:~/my-ansible# ansible-playbook --syntax-check first-playbook-wth-error.yml
      [ERROR]: conflicting action statements: debug, msg
      Origin: /root/my-ansible/first-playbook-wth-error.yml:4:7

      2 - hosts: all
      3   tasks:
      4     - name: Print message
              ^ column 7

- Running the playbook

```bash
root@server:~/my-ansible# ansible-playbook first-playbook.yml 

PLAY [all] **************************************************************************************

TASK [Gathering Facts] **************************************************************************
ok: [tnode2]
ok: [tnode1]
ok: [tnode3]

TASK [Print message] ****************************************************************************
ok: [tnode1] => {
    "msg": "Hello CloudNet@ Ansible Study"
}
ok: [tnode2] => {
    "msg": "Hello CloudNet@ Ansible Study"
}
ok: [tnode3] => {
    "msg": "Hello CloudNet@ Ansible Study"
}

PLAY RECAP **************************************************************************************
tnode1                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode2                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode3                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
```

- state: values used to guarantee idempotency.
    - **started** : `Start service httpd, if not started`
    - **stopped** : `Stop service httpd, if started`
    - **restarted** : `Restart service httpd, in all cases`
    - **reloaded** : `Reload service httpd, in all cases`
- Running a playbook that restarts ssh

```bash
vi my-ansible/restart-service.yml
---
- hosts: all
  tasks:
    - name: Restart sshd service
      ansible.builtin.service:
        name: ssh # sshd
        state: restarted

root@server:~/my-ansible# ansible-playbook --check restart-service.yml

PLAY [all] **************************************************************************************

TASK [Gathering Facts] **************************************************************************
ok: [tnode3]
ok: [tnode1]
ok: [tnode2]

TASK [Restart sshd service] *********************************************************************
changed: [tnode1]
changed: [tnode2]
[ERROR]: Task failed: Module failed: Could not find the requested service ssh: host
Origin: /root/my-ansible/restart-service.yml:4:7

2 - hosts: all
3   tasks:
4     - name: Restart sshd service
        ^ column 7

fatal: [tnode3]: FAILED! => {"changed": false, "msg": "Could not find the requested service ssh: host"}

PLAY RECAP **************************************************************************************
tnode1                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode2                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode3                     : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0 

 # Note: on Red Hat-family systems the service is named sshd, not ssh, so this task fails there.
```

Variables

  • Variable precedence: extra vars (runtime parameters) > play vars > host vars > group vars
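    That order is just "scan sources from highest to lowest priority; the first defined value wins". A tiny shell mock of the rule (variable names are illustrative; this is not Ansible code):

```shell
# Precedence mock: extra vars (-e) > play vars > host vars > group vars.
resolve_user() {
  for v in "$extra_var" "$play_var" "$host_var" "$group_var"; do
    [ -n "$v" ] && { echo "$v"; return; }
  done
}

group_var=ansible    # e.g. [all:vars] in the inventory
host_var=ansible1    # e.g. a per-host inventory variable
play_var=ansible2    # e.g. vars: in the playbook
extra_var=           # nothing passed with -e
no_extra=$(resolve_user)    # play var wins over host/group vars

extra_var=ansible4          # e.g. ansible-playbook -e user=ansible4 ...
with_extra=$(resolve_user)  # -e beats everything
echo "$no_extra $with_extra"
```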

  • Group variables in practice

      # Declared at the bottom of the inventory file: defined in the inventory, then used in the playbook.
    
      root@server:~/my-ansible# cat inventory 
      [web]
      tnode1
      tnode2
    
      [db]
      tnode3
    
      [all:children]
      web
      db
    
      [all:vars]
      user=ansible
      ##
      root@server:~/my-ansible# cat create-user.yml 
      ---
    
      - hosts: all
        tasks:
        - name: Create User {{ user }}
          ansible.builtin.user:
            name: "{{ user }}"
            state: present
    
    
      root@server:~/my-ansible# ansible-playbook create-user.yml

      PLAY [all] **************************************************************************************

      TASK [Gathering Facts] **************************************************************************
      ok: [tnode2]
      ok: [tnode3]
      ok: [tnode1]

      TASK [Create User ansible] **********************************************************************
      changed: [tnode1]
      changed: [tnode2]
      changed: [tnode3]

      PLAY RECAP **************************************************************************************
      tnode1                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode2                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode3                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

      ## Idempotency check
      TASK [Create User ansible] **********************************************************************
      ok: [tnode1]
      ok: [tnode2]
      ok: [tnode3]
  • Host variables in practice

      # As the name implies, applies only to that specific host.
    
      root@server:~/my-ansible# vi inventory 
    
      [web]
      tnode1 ansible_python_interpreter=/usr/bin/python3
      tnode2 ansible_python_interpreter=/usr/bin/python3
    
      [db]
      tnode3 ansible_python_interpreter=/usr/bin/python3 user=ansible1
    
      [all:children]
      web
      db
    
      [all:vars]
      user=ansible
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i tail -n 3 /etc/passwd; echo; done
      >> tnode1 <<
      vagrant:x:1000:1000:vagrant:/home/vagrant:/bin/bash
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
    
      >> tnode2 <<
      vagrant:x:1000:1000:vagrant:/home/vagrant:/bin/bash
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
    
      >> tnode3 <<
      vboxadd:x:991:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/bash
      ansible1:x:1002:1002::/home/ansible1:/bin/bash
  • Play variables

      # Variables declared inside the playbook itself
      root@server:~/my-ansible# cat create-user2.yml 
      ---
    
      - hosts: all
        vars:
          user: ansible2
    
        tasks:
        - name: Create User {{ user }}
          ansible.builtin.user:
            name: "{{ user }}"
            state: present
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i tail -n 3 /etc/passwd; echo; done
      >> tnode1 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
    
      >> tnode2 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
    
      >> tnode3 <<
      ansible:x:1001:1001::/home/ansible:/bin/bash
      ansible1:x:1002:1002::/home/ansible1:/bin/bash
      ansible2:x:1003:1003::/home/ansible2:/bin/bash
    
      # vars can also be kept in a separate file and managed independently:
      root@server:~/my-ansible# tree vars
      vars
      └── users.yml
    
  • Extra vars

      # Variables supplied on the command line at execution time.
    
      root@server:~/my-ansible# ansible-playbook -e user=ansible4 create-user3.yml
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i tail -n 5 /etc/passwd; echo; done
      >> tnode1 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
      ansible3:x:1003:1003::/home/ansible3:/bin/sh
      ansible4:x:1004:1004::/home/ansible4:/bin/sh
    
      >> tnode2 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
      ansible3:x:1003:1003::/home/ansible3:/bin/sh
      ansible4:x:1004:1004::/home/ansible4:/bin/sh
    
      >> tnode3 <<
      ansible:x:1001:1001::/home/ansible:/bin/bash
      ansible1:x:1002:1002::/home/ansible1:/bin/bash
      ansible2:x:1003:1003::/home/ansible2:/bin/bash
      ansible3:x:1004:1004::/home/ansible3:/bin/bash
      ansible4:x:1005:1005::/home/ansible4:/bin/bash
  • Task variables and the ansible.builtin.debug module

      # register: result stores the task's output; inspect it with ansible.builtin.debug.
    
      root@server:~/my-ansible# ansible-playbook -e user=test create-user.yml 
    
      PLAY [db] ***************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
    
      TASK [Create User test] *************************************************************************
      changed: [tnode3]
    
      TASK [ansible.builtin.debug] ********************************************************************
      ok: [tnode3] => {
          "result": {
              "changed": true,
              "comment": "",
              "create_home": true,
              "failed": false,
              "group": 1006,
              "home": "/home/test",
              "name": "test",
              "shell": "/bin/bash",
              "state": "present",
              "system": false,
              "uid": 1006
          }
      }
    

Facts

Facts are variables Ansible collects from managed nodes; the collected data includes:

  • Host name

  • Kernel version

  • Network interface names

  • Operating system version

  • Number of CPUs

  • Available memory

  • Storage device sizes and free space

  • and so on…

  • Using facts

      root@server:~/my-ansible# ansible-playbook facts.yml 
    
      PLAY [db] ***************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
    
      TASK [Print all facts] **************************************************************************
      ok: [tnode3] => {
          "ansible_facts": {
              "all_ipv4_addresses": [
                  "10.0.2.15",
                  "10.10.1.13"
              ],
              "all_ipv6_addresses": [
                  "fd17:625c:f037:2:a00:27ff:fe6f:16b4",
                  "fe80::a00:27ff:fe6f:16b4",
                  "fe80::a00:27ff:feba:8927"
              ],
              "ansible_local": {},
              "apparmor": {
                  "status": "disabled"
              },
              "architecture": "x86_64",
              "bios_date": "12/01/2006",
              "bios_vendor": "innotek GmbH",
              "bios_version": "VirtualBox",
              "board_asset_tag": "NA",
              "board_name": "VirtualBox",
              "board_serial": "0",
              "board_vendor": "Oracle Corporation",
              "board_version": "1.2",
              "chassis_asset_tag": "NA",
              "chassis_serial": "NA",
              "chassis_vendor": "Oracle Corporation",
              "chassis_version": "NA",
    
       ###################### omitted #################################################
    • Printing only specific facts

        ---

        - hosts: db

          tasks:
          - name: Print all facts
            ansible.builtin.debug:
              msg: >
                The default IPv4 address of {{ ansible_facts.hostname }}
                is {{ ansible_facts.default_ipv4.address }}

        root@server:~/my-ansible# ansible-playbook facts1.yml 
      
        PLAY [db] ***************************************************************************************
      
        TASK [Gathering Facts] **************************************************************************
        ok: [tnode3]
      
        TASK [Print all facts] **************************************************************************
        ok: [tnode3] => {
            "msg": "The default IPv4 address of tnode3 is 10.0.2.15\n"
        }
      
        PLAY RECAP **************************************************************************************
        tnode3                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    • Ansible facts usable as variables (legacy notation omitted)

    | Fact | ansible_facts.* notation |
    | --- | --- |
    | Host name | ansible_facts.**hostname** |
    | FQDN-based host name | ansible_facts.**fqdn** |
    | Default IPv4 address | ansible_facts.**default_ipv4.address** |
    | List of network interface names | ansible_facts.**interfaces** |
    | Disk partition size | ansible_facts.**device.vda.partitions.vda1.size** |
    | DNS servers | ansible_facts.**dns.nameservers** |
    | Running kernel version | ansible_facts.**kernel** |
    | Operating system distribution | ansible_facts.**distribution** |
  • Disabling fact gathering

      ---

      - hosts: db
        gather_facts: no

        tasks:
        - name: Print message
          debug:
            msg: Hello Ansible World

      root@server:~/my-ansible# ansible-playbook facts3.yml

      PLAY [db] ***************************************************************************************
      ############ no fact gathering ############
      TASK [Print message] ****************************************************************************
      ok: [tnode3] => {
          "msg": "Hello Ansible World"
      }

      PLAY RECAP **************************************************************************************
      tnode3                     : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
  • Fact caching

      # Apply in ansible.cfg as below; for persistence, facts can also be stored in a file or a DB.
      [defaults]
      inventory = ./inventory
      remote_user = root
      ask_pass = false
      gathering = smart
      fact_caching = jsonfile
      fact_caching_connection = myfacts
    
      [privilege_escalation]
      become = true
      become_method = sudo
      become_user = root
      become_ask_pass = false
    
      #- DEFAULT_GATHERING (three gathering policies): implicit (default), explicit, smart
      #- fact_caching_connection: connection definition or cache path
  • Challenge

      ---
      - hosts: all
        gather_facts: yes
    
        tasks:
          - name: Print kernel and distribution
            ansible.builtin.debug:
              msg:
                - "Kernel: {{ ansible_facts.kernel }}"
                - "Distribution: {{ ansible_facts.distribution }}"
    
      TASK [Print kernel and distribution] ************************************************************
      ok: [tnode1] => {
          "msg": [
              "Kernel: 6.8.0-86-generic",
              "Distribution: Ubuntu"
          ]
      }
      ok: [tnode2] => {
          "msg": [
              "Kernel: 6.8.0-86-generic",
              "Distribution: Ubuntu"
          ]
      }
      ok: [tnode3] => {
          "msg": [
              "Kernel: 5.14.0-570.52.1.el9_6.x86_64",
              "Distribution: Rocky"
          ]
      }

Loops

  • Simple loops

      ---
      - hosts: all
        tasks:
        - name: Check sshd and rsyslog state
          ansible.builtin.service:
            name: "{{ item }}"
            state: started
          loop:
            - vboxadd-service  # ssh
            - rsyslog
    
      root@server:~/my-ansible# ansible-playbook loop.yml 
    
      PLAY [all] **************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
      ok: [tnode2]
      ok: [tnode1]
    
      TASK [Check sshd and rsyslog state] *************************************************************
      ok: [tnode1] => (item=vboxadd-service)
      ok: [tnode2] => (item=vboxadd-service)
      ok: [tnode3] => (item=vboxadd-service)
      ok: [tnode1] => (item=rsyslog)
      ok: [tnode2] => (item=rsyslog)
      ok: [tnode3] => (item=rsyslog)
    
      PLAY RECAP **************************************************************************************
      tnode1                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode2                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode3                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
  • Loops over a list of dictionaries

    The builtin.file module is also one of the most heavily used modules; it manages files and file attributes.

      ---
      - hosts: all

        tasks:
          - name: Create files
            ansible.builtin.file:
              path: "{{ item['log-path'] }}"
              mode: "{{ item['log-mode'] }}"
              state: touch
            loop:
              - log-path: /var/log/test1.log
                log-mode: '0644'
              - log-path: /var/log/test2.log
                log-mode: '0600'
    
      root@server:~/my-ansible# ansible-playbook make-file.yml 
    
      PLAY [all] **************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
      ok: [tnode2]
      ok: [tnode1]
    
      TASK [Create files] *****************************************************************************
      changed: [tnode1] => (item={'log-path': '/var/log/test1.log', 'log-mode': '0644'})
      changed: [tnode2] => (item={'log-path': '/var/log/test1.log', 'log-mode': '0644'})
      changed: [tnode3] => (item={'log-path': '/var/log/test1.log', 'log-mode': '0644'})
      changed: [tnode1] => (item={'log-path': '/var/log/test2.log', 'log-mode': '0600'})
      changed: [tnode2] => (item={'log-path': '/var/log/test2.log', 'log-mode': '0600'})
      changed: [tnode3] => (item={'log-path': '/var/log/test2.log', 'log-mode': '0600'})
    
      PLAY RECAP **************************************************************************************
      tnode1                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode2                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode3                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
  • Loops with register variables

    Use a register variable to capture a task's output.

      ---
      - hosts: localhost
        tasks:
          - name: Loop echo test
            ansible.builtin.shell: "echo 'I can speak {{ item }}'"
            loop:
              - Korean
              - English
            register: result

          - name: Show result
            ansible.builtin.debug:
              var: result
    
      root@server:~/my-ansible# ansible-playbook loop_register.yml 
    
      PLAY [localhost] ********************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [localhost]
    
      TASK [Loop echo test] ***************************************************************************
      changed: [localhost] => (item=Korean)
      changed: [localhost] => (item=English)
    
      TASK [Show result] ******************************************************************************
      ok: [localhost] => {
          "result": {
              "changed": true,
              "msg": "All items completed",
              "results": [
                  {
                      "ansible_loop_var": "item",
                      "changed": true,
                      "cmd": "echo 'I can speak Korean'",
                      "delta": "0:00:00.005160",
                      "end": "2026-01-14 23:37:54.886917",
                      "failed": false,
                      "invocation": {
                          "module_args": {
                              "_raw_params": "echo 'I can speak Korean'",
      ############################ omitted #########################################
                      ]
                  }
              ],
              "skipped": false
          }
      }
    
      PLAY RECAP **************************************************************************************
      localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
      ---
      - hosts: localhost
        tasks:
          - name: Loop echo test
            ansible.builtin.shell: "echo 'I can speak {{ item }}'"
            loop:
              - Korean
              - English
            register: result

          - name: Show result
            ansible.builtin.debug:
              msg: "Stdout: {{ item.stdout }}"
            loop: "{{ result.results }}"
    
      TASK [Show result] ******************************************************************************
      ok: [localhost] => (item={'changed': True, 'stdout': 'I can speak Korean', 'stderr': '', 'rc': 0, 'cmd': "echo 'I can speak Korean'", 'start': '2026-01-14 23:40:03.964047', 'end': '2026-01-14 23:40:03.969023', 'delta': '0:00:00.004976', 'msg': '', 'invocation': {'module_args': {'_raw_params': "echo 'I can speak Korean'", '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'cmd': None, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['I can speak Korean'], 'stderr_lines': [], 'failed': False, 'item': 'Korean', 'ansible_loop_var': 'item'}) => {
          "msg": "Stdout: I can speak Korean"
      }
      ok: [localhost] => (item={'changed': True, 'stdout': 'I can speak English', 'stderr': '', 'rc': 0, 'cmd': "echo 'I can speak English'", 'start': '2026-01-14 23:40:04.174039', 'end': '2026-01-14 23:40:04.179185', 'delta': '0:00:00.005146', 'msg': '', 'invocation': {'module_args': {'_raw_params': "echo 'I can speak English'", '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'cmd': None, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['I can speak English'], 'stderr_lines': [], 'failed': False, 'item': 'English', 'ansible_loop_var': 'item'}) => {
          "msg": "Stdout: I can speak English"
      }
    • rc: the command's return code

      | Return code | Meaning |
      | --- | --- |
      | 0 | Success |
      | 1 | General error |
      | 2 | Invalid argument |
      | 126 | Found but not executable |
      | 127 | Command not found |
      | 130 | Terminated with Ctrl+C |
      | 137 | Killed (SIGKILL) |
      | 139 | Segmentation fault |
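    Several of these codes are easy to reproduce in a plain shell (126 and 127 are produced by bash itself):

```shell
# Observe common return codes: 0 success, 1 general error,
# 126 file found but not executable, 127 command not found.
true;  rc_ok=$?
false; rc_err=$?
bash -c '/etc/hosts'          2>/dev/null; rc_noexec=$?
bash -c 'no_such_command_xyz' 2>/dev/null; rc_missing=$?
echo "$rc_ok $rc_err $rc_noexec $rc_missing"   # 0 1 126 127
```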

Conditionals

  • Conditional task syntax

    A when clause runs a task conditionally, using a value as the condition.

      ---
      - hosts: localhost
        vars:
          run_my_task: true

        tasks:
        - name: echo message
          ansible.builtin.shell: "echo test"
          when: run_my_task
          register: result

        - name: Show result
          ansible.builtin.debug:
            var: result
    
       root@server:~/my-ansible# ansible-playbook when_task.yml 
    
      PLAY [localhost] ********************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [localhost]
    
      TASK [echo message] *****************************************************************************
      changed: [localhost]
    
      TASK [Show result] ******************************************************************************
      ok: [localhost] => {
          "result": {
              "changed": true,
              "cmd": "echo test",
              "delta": "0:00:00.004151",
              "end": "2026-01-14 23:49:23.953038",
              "failed": false,
              "msg": "",
              "rc": 0,
              "start": "2026-01-14 23:49:23.948887",
              "stderr": "",
              "stderr_lines": [],
              "stdout": "test",
              "stdout_lines": [
                  "test"
              ]
          }
      }
    
      PLAY RECAP **************************************************************************************
      localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      # With false, the echo message task is skipped:
    
      TASK [echo message] *****************************************************************************
      skipping: [localhost]
    
  • Conditional operators

    Besides boolean variables (true/false), when clauses can also use conditional operators.

    | Example | Description |
    | --- | --- |
    | ansible_facts['machine'] == "x86_64" | true if ansible_facts['machine'] equals x86_64 |
    | max_memory == 512 | true if max_memory equals 512 |
    | min_memory < 128 | true if min_memory is less than 128 |
    | min_memory > 256 | true if min_memory is greater than 256 |
    | min_memory <= 256 | true if min_memory is less than or equal to 256 |
    | min_memory >= 512 | true if min_memory is greater than or equal to 512 |
    | min_memory != 512 | true if min_memory is not equal to 512 |
    | min_memory is defined | true if the variable min_memory exists |
    | min_memory is not defined | true if the variable min_memory does not exist |
    | memory_available | true if the value is truthy (1, True, or yes) |
    | not memory_available | true if the value is falsy (0, False, or no) |
    | ansible_facts['distribution'] in supported_distros | true if ansible_facts['distribution'] is in the supported_distros list |
      ---
      - hosts: all
        vars:
          supported_distros:
            - Ubuntu
            - CentOS

        tasks:
          - name: Print supported os
            ansible.builtin.debug:
              msg: "This {{ ansible_facts['distribution'] }} need to use apt"
            when: ansible_facts['distribution'] in supported_distros

      TASK [print supported os] ***********************************************************************
      ok: [tnode1] => {
          "msg": "This Ubuntu need to use apt"
      }
      ok: [tnode2] => {
          "msg": "This Ubuntu need to use apt"
      }
  • Compound conditionals

    when supports compound conditions as well as single ones.

      ---
      - hosts: all
    
        tasks:
          - name: Print os type
            ansible.builtin.debug:
              msg: >-
                   OS Type: {{ ansible_facts['distribution'] }}
                   OS Version: {{ ansible_facts['distribution_version'] }}
            when: > 
                ( ansible_facts['distribution'] == "Rocky" and
                  ansible_facts['distribution_version'] == "9.6" )
                or
                ( ansible_facts['distribution'] == "Ubuntu" and
                  ansible_facts['distribution_version'] == "24.04" )
    
      TASK [Print os type] ****************************************************************************
      ok: [tnode1] => {
          "msg": "OS Type: Ubuntu OS Version: 24.04"
      }
      ok: [tnode2] => {
          "msg": "OS Type: Ubuntu OS Version: 24.04"
      }
      ok: [tnode3] => {
          "msg": "OS Type: Rocky OS Version: 9.6"
      }
      root@server:~/my-ansible# ansible tnode1 -m ansible.builtin.setup | grep -iE 'os_family|ansible_distribution|fqdn'
              "ansible_distribution": "Ubuntu",
              "ansible_distribution_file_parsed": true,
              "ansible_distribution_file_path": "/etc/os-release",
              "ansible_distribution_file_variety": "Debian",
              "ansible_distribution_major_version": "24",
              "ansible_distribution_release": "noble",
              "ansible_distribution_version": "24.04",
              "ansible_fqdn": "tnode1",
              "ansible_os_family": "Debian",

Handlers and handling task failures

  • Ansible handlers

    • Handlers run only when explicitly notified from a task via a notify statement.


      ---
      - hosts: tnode2
        tasks:
          - name: restart rsyslog
            ansible.builtin.service:
              name: "rsyslog"
              state: restarted
            notify:
              - print msg

        handlers:
          - name: print msg
            ansible.builtin.debug:
              msg: "rsyslog is restarted"

      RUNNING HANDLER [print msg] *********************************************************************
      ok: [tnode2] => {
          "msg": "rsyslog is restarted"
      }
  • Ignoring task failures

    • During a play, Ansible evaluates each task's return code to determine success. Normally, when a task fails, Ansible skips all subsequent tasks.

    • The play can, however, continue past a failure; this is enabled with the ignore_errors keyword.

        # As shown below, the Print msg task does not run.
        ---
        - hosts : tnode1

          tasks:
            - name: Install apache3
              ansible.builtin.apt:
                name: apache3
                state: latest

            - name: Print msg
              ansible.builtin.debug:
                msg: "Before task is ignored"
      
      
    root@server:~/my-ansible# ansible-playbook apache.yml 

    PLAY [tnode1] ***********************************************************************************

    TASK [Gathering Facts] **************************************************************************
    ok: [tnode1]

    TASK [Install apache3] **************************************************************************
    [ERROR]: Task failed: Module failed: No package matching 'apache3' is available
    Origin: /root/my-ansible/apache.yml:5:7

    3
    4   tasks:
    5     - name: Install apache3
            ^ column 7

    fatal: [tnode1]: FAILED! => {"changed": false, "msg": "No package matching 'apache3' is available"}

    PLAY RECAP **************************************************************************************
    tnode1                     : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
    ```

    ```bash
    # ignore_errors: yes 키워드를 넣어주면 실패 시에도 Print msg를 수행한다.
    ---
    - hosts: tnode1

      tasks:
        - name: Install apache3
          ansible.builtin.apt:
            name: apache3
            state: latest
          ignore_errors: yes

        - name: Print msg
          ansible.builtin.debug:
            msg: "Before task is ignored"

    root@server:~/my-ansible# ansible-playbook apache.yml 

    PLAY [tnode1] ***********************************************************************************

    TASK [Gathering Facts] **************************************************************************
    ok: [tnode1]

    TASK [Install apache3] **************************************************************************
    [ERROR]: Task failed: Module failed: No package matching 'apache3' is available
    Origin: /root/my-ansible/apache.yml:5:7

    3
    4   tasks:
    5     - name: Install apache3
            ^ column 7

    fatal: [tnode1]: FAILED! => {"changed": false, "msg": "No package matching 'apache3' is available"}
    ...ignoring

    TASK [Print msg] ********************************************************************************
    ok: [tnode1] => {
        "msg": "Before task is ignored"
    }
    ```
  • 작업 실패 조건 지정

    • shell 같은 커맨드형 태스크는 반환 코드만으로 성공·실패를 판단하기 어렵고 멱등성 보장도 힘들다. 가능하면 전용 모듈을 사용하고, 불가피하게 셸 명령을 쓸 때는 failed_when·changed_when으로 실패·변경 조건을 직접 지정한다.
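
    • 예를 들어 아래는 커맨드형 태스크에 실패·변경 조건을 직접 지정하는 최소 스케치이다(/usr/local/bin/check.sh 경로는 설명용 가정).

        ---
        - hosts: tnode1
          tasks:
            - name: Run check script
              ansible.builtin.shell: /usr/local/bin/check.sh
              register: result
              changed_when: false                      # 조회성 명령은 changed=false로 처리해 멱등성 유지
              failed_when: "'ERROR' in result.stdout"  # 반환 코드 대신 출력 내용으로 실패 판정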
  • 앤서블 블록 및 오류처리

    • block (작업 그룹화): 실행할 기본 작업들을 논리적으로 하나로 묶어 관리하는 단위

    • rescue (오류 복구): block 내 작업이 실패했을 때만 실행되는 예외 처리 및 복구 단계

    • always (강제 실행): 작업의 성공이나 실패 여부와 상관없이 마지막에 반드시 수행되는 마무리 단계


      ---
      - hosts: tnode2
        vars:
          logdir: /var/log/daily_log
          logfile: todays.log

        tasks:
          - name: Configure Log Env
            block:
              - name: Find Directory
                ansible.builtin.find:
                  paths: "{{ logdir }}"
                register: result
                failed_when: "'Not all paths' in result.msg"

            rescue:
              - name: Make Directory when Not found Directory
                ansible.builtin.file:
                  path: "{{ logdir }}"
                  state: directory
                  mode: '0755'

            always:
              - name: Create File
                ansible.builtin.file:
                  path: "{{ logdir }}/{{ logfile }}"
                  state: touch
                  mode: '0644'

      # 첫 수행
      [ERROR]: Task failed: Action failed: Not all paths examined, check warnings for details
      Origin: /root/my-ansible/block.yml:10:11

      8 - name: Configure Log Env
      9 block:
      10 - name: Find Directory

             ^ column 11

      fatal: [tnode2]: FAILED! => {"changed": false, "examined": 0, "failed_when_result": true, "files": [], "matched": 0, "msg": "Not all paths examined, check warnings for details", "skipped_paths": {"/var/log/daily_log": "'/var/log/daily_log' is not a directory"}}

      # 다시 수행
      root@server:~/my-ansible# ansible-playbook block.yml

      PLAY [tnode2] ***

      TASK [Gathering Facts] **
      ok: [tnode2]

      TASK [Find Directory] ***
      ok: [tnode2]

      TASK [Create File] **
      changed: [tnode2]

      PLAY RECAP **
      tnode2 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

      
      

롤 구조 소개 및 사용법

  • 재사용 및 모듈화: 플레이북 내용을 기능 단위로 쪼개어 부품처럼 관리하며, 언제든 코드를 재사용할 수 있는 구조

  • 효율적인 관리와 협업: 변수 설정과 대규모 프로젝트 관리가 쉬워져 여러 개발자가 동시에 작업하기에 유용하다

  • 생태계 활용: 잘 만든 롤은 앤서블 갤럭시(Ansible Galaxy)를 통해 전 세계 사용자와 공유하거나 가져다 쓸 수 있다

  • 롤 생성

      root@server:~/my-ansible# ansible-galaxy role -h
      usage: ansible-galaxy role [-h] ROLE_ACTION ...
    
      positional arguments:
        ROLE_ACTION
          init       Initialize new role with the base structure of a role.
          remove     Delete roles from roles_path.
          delete     Removes the role from Galaxy. It does not remove or alter the actual GitHub
                     repository.
          list       Show the name and version of each role installed in the roles_path.
          search     Search the Galaxy database by tags, platforms, author and multiple keywords.
          import     Import a role into a galaxy server
          setup      Manage the integration between Galaxy and the given source.
          info       View more details about a specific role.
          install    Install role(s) from file(s), URL(s) or Ansible Galaxy
    
      options:
        -h, --help   show this help message and exit
    
      # 롤 뼈대 생성(ansible-galaxy role init my-role) 후 구조 확인
      root@server:~/my-ansible# tree ./my-role/
      ./my-role/
      ├── defaults
      │   └── main.yml
      ├── files
      ├── handlers
      │   └── main.yml
      ├── meta
      │   └── main.yml
      ├── README.md
      ├── tasks
      │   └── main.yml
      ├── templates
      ├── tests
      │   ├── inventory
      │   └── test.yml
      └── vars
          └── main.yml
    
      9 directories, 8 files
  • 플레이북 개발

      # 메인 태스크 작성
      ~/my-ansible/my-role/tasks/main.yml

      ---
      # tasks file for my-role

      - name: install service {{ service_title }}
        ansible.builtin.apt:
          name: "{{ item }}"
          state: latest
        loop: "{{ httpd_packages }}"
        when: ansible_facts.distribution in supported_distros

      - name: copy conf file
        ansible.builtin.copy:
          src: "{{ src_file_path }}"
          dest: "{{ dest_file_path }}"
        notify: 
          - restart service

      # index.html 생성
      ~/my-ansible/my-role/files/index.html
      root@server:~/my-ansible/my-role# echo "Study Let's go" > files/index.html

      # 핸들러 작성
      ~/my-ansible/my-role/handlers/main.yml
      ---
      # handlers file for my-role

      - name: restart service
        ansible.builtin.service:
          name: "{{ service_name }}"
          state: restarted

      # defaults(가변 변수 작성)
      ~/my-ansible/my-role/defaults/main.yml

      root@server:~/my-ansible/my-role# echo 'service_title: "Apache Web Server"' >> defaults/main.yml

      # vars(불변 변수)
      ~/my-ansible/my-role/vars/main.yml
    
  • 플레이북에 롤추가하기

      ~/my-ansible/role-example.yml

      ---
      - hosts: tnode1

        tasks:
          - name: Print start play
            ansible.builtin.debug:
              msg: "Let's start role play"

          - name: Install Service by role
            ansible.builtin.import_role:
              name: my-role
    
    
      ## 실행
      root@server:~/my-ansible# ansible-playbook role-example.yml 

      PLAY [tnode1] ***********************************************************************************

      TASK [Gathering Facts] **************************************************************************
      ok: [tnode1]

      TASK [Print start play] *************************************************************************
      ok: [tnode1] => {
          "msg": "Let's start role play"
      }

      TASK [my-role : install service Apache Web Server] **********************************************
      changed: [tnode1] => (item=apache2)
      changed: [tnode1] => (item=apache2-doc)

      TASK [my-role : copy conf file] *****************************************************************
      changed: [tnode1]

      RUNNING HANDLER [my-role : restart service] *****************************************************
      changed: [tnode1]

      PLAY RECAP **************************************************************************************
      tnode1                     : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

앤서블 갤럭시

  • 공식 공유 허브: 전 세계 사용자가 제작한 롤(Role)과 컬렉션(Collection)을 자유롭게 공유하고 찾을 수 있는 저장소
  • 빠른 구축: ansible-galaxy 명령어로 검증된 코드를 즉시 내려받아 자동화 시스템 구축 시간을 획기적으로 단축할 수 있다.
  • 표준화된 구조: init 기능을 통해 앤서블이 권장하는 표준 디렉터리 구조를 자동으로 생성하여 관리가 편리하다.
  • 커뮤니티 검증: 사용자 평점과 다운로드 수를 통해 신뢰할 수 있는 고품질의 자동화 코드를 선택하기 용이하다.
  • 버전 및 의존성 관리: 특정 버전의 코드를 지정해 설치할 수 있어 프로젝트의 일관성과 안정성 보장
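  • 설치 예시: 아래는 requirements.yml로 롤 버전을 고정해 내려받는 최소 스케치이다(롤 이름과 버전은 설명용 가정).

      # requirements.yml
      roles:
        - name: geerlingguy.apache   # 설명용 예시 롤 이름
          version: "4.0.0"           # 특정 버전 고정

      # 설치: ansible-galaxy role install -r requirements.yml -p ./roles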


1. Bootstrap Kubernetes the hard way

00. Kind K8s 설치

  • WSL2를 이용해서 진행(기설치된 kind를 통해 실습을 진행한다.)

      (⎈|N/A:N/A) zosys@4:~$ kind version
      kind v0.30.0 go1.24.6 linux/amd64
      (⎈|N/A:N/A) zosys@4:~$ helm version
      version.BuildInfo{Version:"v3.19.0", GitCommit:"3d8990f0836691f0229297773f3524598f46bda6", GitTreeState:"clean", GoVersion:"go1.24.7"}
    
  • 1주차 실습을 위한 kind k8s 배포(WSL2)

      (⎈|N/A:N/A) zosys@4:~$ kind create cluster --name myk8s --image kindest/node:v1.32.8 --config - <<EOF
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
        extraPortMappings:
        - containerPort: 30000
          hostPort: 30000
        - containerPort: 30001
          hostPort: 30001
      - role: worker
      EOF
      Creating cluster "myk8s" ...
       ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
       ✓ Preparing nodes 📦 📦
       ✓ Writing configuration 📜
       ✓ Starting control-plane 🕹️
       ✓ Installing CNI 🔌
       ✓ Installing StorageClass 💾
       ✓ Joining worker nodes 🚜
      Set kubectl context to "kind-myk8s"
      You can now use your cluster with:
    
      (⎈|kind-myk8s:N/A) zosys@4:~$ kind get nodes --name myk8s
      myk8s-control-plane
      myk8s-worker
    
      (⎈|kind-myk8s:default) zosys@4:~$ kubectl get node -o wide
      NAME                  STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
      myk8s-control-plane   Ready    control-plane   2m30s   v1.32.8   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   containerd://2.1.3
      myk8s-worker          Ready    <none>          2m14s   v1.32.8   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   containerd://2.1.3
      (⎈|kind-myk8s:default) zosys@4:~$ docker exec -it myk8s-control-plane ss -tnlp
      State   Recv-Q  Send-Q   Local Address:Port    Peer Address:Port Process
      LISTEN  0       4096         127.0.0.1:45859        0.0.0.0:*     users:(("containerd",pid=106,fd=11))
      LISTEN  0       4096        172.18.0.3:2379         0.0.0.0:*     users:(("etcd",pid=645,fd=9))
      LISTEN  0       4096        172.18.0.3:2380         0.0.0.0:*     users:(("etcd",pid=645,fd=7))
      LISTEN  0       4096        127.0.0.11:46007        0.0.0.0:*
      LISTEN  0       4096         127.0.0.1:10248        0.0.0.0:*     users:(("kubelet",pid=706,fd=20))
      LISTEN  0       4096         127.0.0.1:10249        0.0.0.0:*     users:(("kube-proxy",pid=899,fd=17))
      LISTEN  0       4096         127.0.0.1:10259        0.0.0.0:*     users:(("kube-scheduler",pid=514,fd=3))
      LISTEN  0       4096         127.0.0.1:10257        0.0.0.0:*     users:(("kube-controller",pid=554,fd=3))
      LISTEN  0       4096         127.0.0.1:2379         0.0.0.0:*     users:(("etcd",pid=645,fd=8))
      LISTEN  0       4096         127.0.0.1:2381         0.0.0.0:*     users:(("etcd",pid=645,fd=16))
      LISTEN  0       4096                 *:6443               *:*     users:(("kube-apiserver",pid=566,fd=3))
      LISTEN  0       4096                 *:10256              *:*     users:(("kube-proxy",pid=899,fd=16))
      LISTEN  0       4096                 *:10250              *:*     users:(("kubelet",pid=706,fd=22))
    

01. Pre requisites

  • 실습용 Vagrant 배포의 경우 리소스 문제로 다른 PC에서 진행한다.

    
      PS C:\Users\bom\Desktop\스터디\onpremisk8s> dir
    
          디렉터리: C:\Users\bom\Desktop\스터디\onpremisk8s
    
      Mode                 LastWriteTime         Length Name
      ----                 -------------         ------ ----
      -a----      2026-01-05  오후 11:09           1172 init_cfg.sh
      -a----      2026-01-05  오후 11:08           3234 Vagrantfile
    
      PS C:\Users\bom\Desktop\스터디\onpremisk8s> vagrant.exe up
      Bringing machine 'jumpbox' up with 'virtualbox' provider...
      Bringing machine 'server' up with 'virtualbox' provider...
      Bringing machine 'node-0' up with 'virtualbox' provider...
      Bringing machine 'node-1' up with 'virtualbox' provider...
      ==> jumpbox: Box 'bento/debian-12' could not be found. Attempting to find and install...
          jumpbox: Box Provider: virtualbox
          jumpbox: Box Version: 202510.26.0
      ==> jumpbox: Loading metadata for box 'bento/debian-12'
          jumpbox: URL: https://vagrantcloud.com/api/v2/vagrant/bento/debian-12
      ==> jumpbox: Adding box 'bento/debian-12' (v202510.26.0) for provider: virtualbox (amd64)
          jumpbox: Downloading: https://vagrantcloud.com/bento/boxes/debian-12/versions/202510.26.0/providers/virtualbox/amd64/vagrant.box
    
      ############################중략############################################    
    
      #배포 가상머신 확인   
         PS C:\Users\bom\Desktop\스터디\onpremisk8s> vagrant status
      Current machine states:
    
      jumpbox                   running (virtualbox)
      server                    running (virtualbox)
      node-0                    running (virtualbox)
      node-1                    running (virtualbox) 
  • jumpbox 가상머신 접속 : vagrant ssh jumpbox

      PS C:\Users\bom\Desktop\스터디\onpremisk8s> vagrant ssh jumpbox
      Linux jumpbox 6.1.0-40-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.153-1 (2025-09-20) x86_64
    
      This system is built by the Bento project by Chef Software
      More information can be found at https://github.com/chef/bento
    
      Use of this system is acceptance of the OS vendor EULA and License Agreements.
    
      The programs included with the Debian GNU/Linux system are free software;
      the exact distribution terms for each program are described in the
      individual files in /usr/share/doc/*/copyright.
    
      Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
      permitted by applicable law.
      root@jumpbox:~# whoami
      root
      root@jumpbox:~# pwd
      /root
      root@jumpbox:~#

02 - Set up The jumpbox

  • vagrant ssh jumpbox

      root@jumpbox:~# whoami
      root
      root@jumpbox:~# pwd
      /root
      root@jumpbox:~# cat /home/vagrant/.bashrc | tail -n 1
      sudo su -
      root@jumpbox:~# git clone --depth 1 https://github.com/kelseyhightower/kubernetes-the-hard-way.git
      Cloning into 'kubernetes-the-hard-way'...
      remote: Enumerating objects: 41, done.
      remote: Counting objects: 100% (41/41), done.
      remote: Compressing objects: 100% (40/40), done.
      remote: Total 41 (delta 3), reused 14 (delta 1), pack-reused 0 (from 0)
      Receiving objects: 100% (41/41), 29.27 KiB | 14.63 MiB/s, done.
      Resolving deltas: 100% (3/3), done.
      root@jumpbox:~# cd kubernetes-the-hard-way/
      root@jumpbox:~/kubernetes-the-hard-way# tree
      .
      ├── ca.conf
      ├── configs
      │   ├── 10-bridge.conf
      │   ├── 99-loopback.conf
      │   ├── containerd-config.toml
      │   ├── encryption-config.yaml
      │   ├── kube-apiserver-to-kubelet.yaml
      │   ├── kubelet-config.yaml
      │   ├── kube-proxy-config.yaml
      │   └── kube-scheduler.yaml
      ├── CONTRIBUTING.md
      ├── COPYRIGHT.md
      ├── docs
      │   ├── 01-prerequisites.md
      │   ├── 02-jumpbox.md
      │   ├── 03-compute-resources.md
      │   ├── 04-certificate-authority.md
      │   ├── 05-kubernetes-configuration-files.md
      │   ├── 06-data-encryption-keys.md
      │   ├── 07-bootstrapping-etcd.md
      │   ├── 08-bootstrapping-kubernetes-controllers.md
      │   ├── 09-bootstrapping-kubernetes-workers.md
      │   ├── 10-configuring-kubectl.md
      │   ├── 11-pod-network-routes.md
      │   ├── 12-smoke-test.md
      │   └── 13-cleanup.md
      ├── downloads-amd64.txt
      ├── downloads-arm64.txt
      ├── LICENSE
      ├── README.md
      └── units
          ├── containerd.service
          ├── etcd.service
          ├── kube-apiserver.service
          ├── kube-controller-manager.service
          ├── kubelet.service
          ├── kube-proxy.service
          └── kube-scheduler.service
    
      4 directories, 35 files
    
      # CPU 아키텍처 확인
    
      root@jumpbox:~/kubernetes-the-hard-way# dpkg --print-architecture
      amd64
      root@jumpbox:~/kubernetes-the-hard-way# ls -l downloads-*
      -rw-r--r-- 1 root root 839 Jan  5 23:25 downloads-amd64.txt
      -rw-r--r-- 1 root root 839 Jan  5 23:25 downloads-arm64.txt
    
      root@jumpbox:~/kubernetes-the-hard-way# cat downloads-$(dpkg --print-architecture).txt
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubectl
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-apiserver
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-controller-manager
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-scheduler
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-proxy
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubelet
      https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz
      https://github.com/opencontainers/runc/releases/download/v1.3.0-rc.1/runc.amd64
      https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
      https://github.com/containerd/containerd/releases/download/v2.1.0-beta.0/containerd-2.1.0-beta.0-linux-amd64.tar.gz
      https://github.com/etcd-io/etcd/releases/download/v3.6.0-rc.3/etcd-v3.6.0-rc.3-linux-amd64.tar.gz
      root@jumpbox:~/kubernetes-the-hard-way# wget -q --show-progress \
        --https-only \
        --timestamping \
        -P downloads \
        -i downloads-$(dpkg --print-architecture).txt
      kubectl                       100%[=================================================>]  54.67M  64.4MB/s    in 0.8s
      kube-apiserver                100%[=================================================>]  88.94M  28.8MB/s    in 3.1s
      kube-controller-manager       100%[=================================================>]  82.00M  17.7MB/s    in 4.6s
      kube-scheduler                100%[=================================================>]  62.79M  22.0MB/s    in 2.9s
      kube-proxy                    100%[=================================================>]  63.75M  25.1MB/s    in 2.5s
      kubelet                       100%[=================================================>]  73.82M  12.8MB/s    in 6.5s
      crictl-v1.32.0-linux-amd64.ta 100%[=================================================>]  18.21M  55.1MB/s    in 0.3s
      runc.amd64                    100%[=================================================>]  11.30M  66.3MB/s    in 0.2s
      cni-plugins-linux-amd64-v1.6. 100%[=================================================>]  50.35M  36.3MB/s    in 1.4s
      containerd-2.1.0-beta.0-linux 100%[=================================================>]  37.01M  56.7MB/s    in 0.7s
      etcd-v3.6.0-rc.3-linux-amd64. 100%[=================================================>]  22.48M  47.2MB/s    in 0.5s
    
      root@jumpbox:~/kubernetes-the-hard-way# ls -oh downloads
      total 566M
      -rw-r--r-- 1 root 51M Jan  7  2025 cni-plugins-linux-amd64-v1.6.2.tgz
      -rw-r--r-- 1 root 38M Mar 18  2025 containerd-2.1.0-beta.0-linux-amd64.tar.gz
      -rw-r--r-- 1 root 19M Dec  9  2024 crictl-v1.32.0-linux-amd64.tar.gz
      -rw-r--r-- 1 root 23M Mar 28  2025 etcd-v3.6.0-rc.3-linux-amd64.tar.gz
      -rw-r--r-- 1 root 89M Mar 12  2025 kube-apiserver
      -rw-r--r-- 1 root 83M Mar 12  2025 kube-controller-manager
      -rw-r--r-- 1 root 55M Mar 12  2025 kubectl
      -rw-r--r-- 1 root 74M Mar 12  2025 kubelet
      -rw-r--r-- 1 root 64M Mar 12  2025 kube-proxy
      -rw-r--r-- 1 root 63M Mar 12  2025 kube-scheduler
      -rw-r--r-- 1 root 12M Mar  4  2025 runc.amd64
    
      root@jumpbox:~/kubernetes-the-hard-way# mkdir -p downloads/{client,cni-plugins,controller,worker}
      root@jumpbox:~/kubernetes-the-hard-way# tree -d downloads
      downloads
      ├── client
      ├── cni-plugins
      ├── controller
      └── worker
    
      5 directories
    
      # 압축 풀기 (ARCH는 dpkg --print-architecture 값으로 미리 설정해 사용)
      root@jumpbox:~/kubernetes-the-hard-way# tar -xvf downloads/crictl-v1.32.0-linux-${ARCH}.tar.gz \
        -C downloads/worker/ && tree -ug downloads
      crictl
      [root     root    ]  downloads
      ├── [root     root    ]  client
      ├── [root     root    ]  cni-plugins
      ├── [root     root    ]  cni-plugins-linux-amd64-v1.6.2.tgz
      ├── [root     root    ]  containerd-2.1.0-beta.0-linux-amd64.tar.gz
      ├── [root     root    ]  controller
      ├── [root     root    ]  crictl-v1.32.0-linux-amd64.tar.gz
      ├── [root     root    ]  etcd-v3.6.0-rc.3-linux-amd64.tar.gz
      ├── [root     root    ]  kube-apiserver
      ├── [root     root    ]  kube-controller-manager
      ├── [root     root    ]  kubectl
      ├── [root     root    ]  kubelet
      ├── [root     root    ]  kube-proxy
      ├── [root     root    ]  kube-scheduler
      ├── [root     root    ]  runc.amd64
      └── [root     root    ]  worker
          └── [1001     127     ]  crictl
    
      5 directories, 12 files
    
      ################그외 중략 ########################
    
      #압축해제 후 확인진행
      root@jumpbox:~/kubernetes-the-hard-way# tree downloads/worker/
      downloads/worker/
      ├── containerd
      ├── containerd-shim-runc-v2
      ├── containerd-stress
      ├── crictl
      └── ctr
    
      1 directory, 5 files
      root@jumpbox:~/kubernetes-the-hard-way# tree downloads/cni-plugins
      downloads/cni-plugins
      ├── bandwidth
      ├── bridge
      ├── dhcp
      ├── dummy
      ├── firewall
      ├── host-device
      ├── host-local
      ├── ipvlan
      ├── LICENSE
      ├── loopback
      ├── macvlan
      ├── portmap
      ├── ptp
      ├── README.md
      ├── sbr
      ├── static
      ├── tap
      ├── tuning
      ├── vlan
      └── vrf
    
      1 directory, 20 files
      #파일 이동 및 확인
      root@jumpbox:~/kubernetes-the-hard-way# mv downloads/{etcdctl,kubectl} downloads/client/
      mv downloads/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler} downloads/controller/
      mv downloads/{kubelet,kube-proxy} downloads/worker/
      mv downloads/runc.${ARCH} downloads/worker/runc
    
      root@jumpbox:~/kubernetes-the-hard-way# tree downloads/client/
      tree downloads/controller/
      tree downloads/worker/
      downloads/client/
      ├── etcdctl
      └── kubectl
    
      1 directory, 2 files
      downloads/controller/
      ├── etcd
      ├── kube-apiserver
      ├── kube-controller-manager
      └── kube-scheduler
    
      1 directory, 4 files
      downloads/worker/
      ├── containerd
      ├── containerd-shim-runc-v2
      ├── containerd-stress
      ├── crictl
      ├── ctr
      ├── kubelet
      ├── kube-proxy
      └── runc
    
      1 directory, 8 files
      # 그 외 작업 진행 후 최종 kubectl 확인
      root@jumpbox:~/kubernetes-the-hard-way# kubectl version --client
      Client Version: v1.32.3
      Kustomize Version: v5.5.0

03 - Provisioning Compute Resources

  • SSH 접속 환경 설정

      root@jumpbox:~/kubernetes-the-hard-way# cat <<EOF > machines.txt
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
      192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
      EOF
      root@jumpbox:~/kubernetes-the-hard-way# cat machines.txt
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
      192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
    
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        echo "${IP} ${FQDN} ${HOST} ${SUBNET}"
      done < machines.txt
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
      192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
    
      root@jumpbox:~/kubernetes-the-hard-way# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
      Generating public/private rsa key pair.
      Your identification has been saved in /root/.ssh/id_rsa
      Your public key has been saved in /root/.ssh/id_rsa.pub
      The key fingerprint is:
      SHA256:------------------------------------ root@jumpbox
      The key's randomart image is:
      +---[RSA 3072]----+
      |      .+o.B=*++++|
      |     oooEo Xo=oB |
      |      ooo =.+o*..|
      |     . o..o . .=.|
      |      ..So .   .o|
      |      . . .      |
      |       + o       |
      |      + +        |
      |     . .         |
      +----[SHA256]-----+
    
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        sshpass -p 'qwe123' ssh-copy-id -o StrictHostKeyChecking=no root@${IP}
      done < machines.txt
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    
      Number of key(s) added: 1
    
      Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'root@192.168.10.100'"
      and check to make sure that only the key(s) you wanted were added.
    
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    
      Number of key(s) added: 1
    
      Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'root@192.168.10.101'"
      and check to make sure that only the key(s) you wanted were added.
    
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    
      Number of key(s) added: 1
    
      Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'root@192.168.10.102'"
      and check to make sure that only the key(s) you wanted were added.
    
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        ssh -n root@${IP} cat /root/.ssh/authorized_keys
      done < machines.txt
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUoQb+fRLPJN03IsqZiCa0nxOuMA7Mo5VpVml+8XsCPa0JCoGxlHn5C8+xmPbp5qg4VhNVRVBIyt3/ipgURJbhgxw/Yo+3Tcq0C5BcmxzrDZGfK8mUAGXPWrHPDtECgZP4+kRyFcGMOwJCJvjTlbFXc/cDypp9RpDtbAXsBR/P+M9gYHtcAI2VRJMjHS0yTvFzf01WwoCYBWg5QG7NgIKVS2qMS75kAdnveBT+nu5E5TN2TmCi5vaD64LC1uuhg3NHDdUw14U0wAENNphQleERk0Y0jvcnFsf5XT6+KNYCfZjgkBPvBCcJRq2szo8Df740lGVoe8vWttAg79DkCB/QZV3UT+k+UN89gskAFtnWKv3MnSsAsjcevxSMsky3eEbZJ5lrN+NC32bTUkovb0paQUhDUf/gHsPQtwD/8FIolNWXopB2cZkwFZIkqs8JIV1N+C5fDczVsioJRgJmJQ53P/lliQXn82hlfe1/ZB+ZxO4mTHHHWC765b8dV0CVn2k= root@jumpbox
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUoQb+fRLPJN03IsqZiCa0nxOuMA7Mo5VpVml+8XsCPa0JCoGxlHn5C8+xmPbp5qg4VhNVRVBIyt3/ipgURJbhgxw/Yo+3Tcq0C5BcmxzrDZGfK8mUAGXPWrHPDtECgZP4+kRyFcGMOwJCJvjTlbFXc/cDypp9RpDtbAXsBR/P+M9gYHtcAI2VRJMjHS0yTvFzf01WwoCYBWg5QG7NgIKVS2qMS75kAdnveBT+nu5E5TN2TmCi5vaD64LC1uuhg3NHDdUw14U0wAENNphQleERk0Y0jvcnFsf5XT6+KNYCfZjgkBPvBCcJRq2szo8Df740lGVoe8vWttAg79DkCB/QZV3UT+k+UN89gskAFtnWKv3MnSsAsjcevxSMsky3eEbZJ5lrN+NC32bTUkovb0paQUhDUf/gHsPQtwD/8FIolNWXopB2cZkwFZIkqs8JIV1N+C5fDczVsioJRgJmJQ53P/lliQXn82hlfe1/ZB+ZxO4mTHHHWC765b8dV0CVn2k= root@jumpbox
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUoQb+fRLPJN03IsqZiCa0nxOuMA7Mo5VpVml+8XsCPa0JCoGxlHn5C8+xmPbp5qg4VhNVRVBIyt3/ipgURJbhgxw/Yo+3Tcq0C5BcmxzrDZGfK8mUAGXPWrHPDtECgZP4+kRyFcGMOwJCJvjTlbFXc/cDypp9RpDtbAXsBR/P+M9gYHtcAI2VRJMjHS0yTvFzf01WwoCYBWg5QG7NgIKVS2qMS75kAdnveBT+nu5E5TN2TmCi5vaD64LC1uuhg3NHDdUw14U0wAENNphQleERk0Y0jvcnFsf5XT6+KNYCfZjgkBPvBCcJRq2szo8Df740lGVoe8vWttAg79DkCB/QZV3UT+k+UN89gskAFtnWKv3MnSsAsjcevxSMsky3eEbZJ5lrN+NC32bTUkovb0paQUhDUf/gHsPQtwD/8FIolNWXopB2cZkwFZIkqs8JIV1N+C5fDczVsioJRgJmJQ53P/lliQXn82hlfe1/ZB+ZxO4mTHHHWC765b8dV0CVn2k= root@jumpbox
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        ssh -n root@${IP} hostname
      done < machines.txt
      server
      node-0
      node-1
    
      #확인진행
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        ssh -n root@${IP} hostname --fqdn
      done < machines.txt
      server.kubernetes.local
      node-0.kubernetes.local
      node-1.kubernetes.local
    
      root@jumpbox:~/kubernetes-the-hard-way# cat /etc/hosts
      while read IP FQDN HOST SUBNET; do
        sshpass -p 'qwe123' ssh -n -o StrictHostKeyChecking=no root@${HOST} hostname
      done < machines.txt
      127.0.0.1       localhost
    
      # The following lines are desirable for IPv6 capable hosts
      ::1     localhost ip6-localhost ip6-loopback
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
      192.168.10.10  jumpbox
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0
      192.168.10.102 node-1.kubernetes.local node-1
      Warning: Permanently added 'server' (ED25519) to the list of known hosts.
      server
      Warning: Permanently added 'node-0' (ED25519) to the list of known hosts.
      node-0
      Warning: Permanently added 'node-1' (ED25519) to the list of known hosts.
      node-1

04 - Provisioning a CA and Generating TLS Certificates

  • 상호 TLS 인증(mTLS)을 사용하여 통신한다. 즉 통신하는 컴포넌트마다 인증서 생성이 필요하며, 일반적인 k8s에서는 kubeadm이 이를 대신 생성해준다. 아래는 생성해야 하는 목록이다.
| 대상 | 개인키 | CSR | 인증서 | 참고 정보 |
| --- | --- | --- | --- | --- |
| Root CA | ca.key | X | ca.crt |  |
| admin | admin.key | admin.csr | admin.crt | CN = admin, O = system:masters |
| node-0 | node-0.key | node-0.csr | node-0.crt | CN = system:node:node-0, O = system:nodes |
| node-1 | node-1.key | node-1.csr | node-1.crt | CN = system:node:node-1, O = system:nodes |
| kube-proxy | kube-proxy.key | kube-proxy.csr | kube-proxy.crt | CN = system:kube-proxy, O = system:node-proxier |
| kube-scheduler | kube-scheduler.key | kube-scheduler.csr | kube-scheduler.crt | CN = system:kube-scheduler, O = system:kube-scheduler |
| kube-controller-manager | kube-controller-manager.key | kube-controller-manager.csr | kube-controller-manager.crt | CN = system:kube-controller-manager, O = system:kube-controller-manager |
| kube-api-server | kube-api-server.key | kube-api-server.csr | kube-api-server.crt | CN = kubernetes, SAN: IP(127.0.0.1, **10.32.0.1**), DNS(kubernetes,..) |
| service-accounts | service-accounts.key | service-accounts.csr | service-accounts.crt | CN = service-accounts |
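
표의 인증서들이 만들어지는 과정을 openssl로 직접 따라가 보는 최소 스케치이다. 실제 실습은 저장소의 ca.conf를 사용하지만, 여기서는 admin 인증서 하나만 기본 옵션으로 발급해 본다(키 길이 2048과 유효기간은 설명용 값).

```bash
# 1) CA 키와 자체 서명 루트 인증서 생성
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=CA" -out ca.crt

# 2) admin 키와 CSR 생성 (CN=admin, O=system:masters)
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr

# 3) CA 키로 CSR에 서명하여 admin.crt 발급
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out admin.crt

# 4) 발급된 인증서를 CA로 검증
openssl verify -CAfile ca.crt admin.crt
```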
- kind 환경의 인증서 확인

    ```bash
    (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane kubeadm certs check-expiration
    [check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
    [check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

    CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
    admin.conf                 Jan 05, 2027 13:52 UTC   364d            ca                      no
    apiserver                  Jan 05, 2027 13:52 UTC   364d            ca                      no
    apiserver-etcd-client      Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    apiserver-kubelet-client   Jan 05, 2027 13:52 UTC   364d            ca                      no
    controller-manager.conf    Jan 05, 2027 13:52 UTC   364d            ca                      no
    etcd-healthcheck-client    Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    etcd-peer                  Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    etcd-server                Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    front-proxy-client         Jan 05, 2027 13:52 UTC   364d            front-proxy-ca          no
    scheduler.conf             Jan 05, 2027 13:52 UTC   364d            ca                      no
    super-admin.conf           Jan 05, 2027 13:52 UTC   364d            ca                      no

    CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
    ca                      Jan 03, 2036 13:52 UTC   9y              no
    etcd-ca                 Jan 03, 2036 13:52 UTC   9y              no
    front-proxy-ca          Jan 03, 2036 13:52 UTC   9y              no

    ```
  • Generating the CA

      root@jumpbox:~/kubernetes-the-hard-way# openssl genrsa -out ca.key 4096
      root@jumpbox:~/kubernetes-the-hard-way# ls -l ca.key
      -rw------- 1 root root 3272 Jan  6 00:05 ca.key
      root@jumpbox:~/kubernetes-the-hard-way# openssl rsa -in ca.key -text -noout
      Private-Key: (4096 bit, 2 primes)
      modulus:
          00:bb:3b:8f:cf:85:b5:3e:48:2e:9a:ff:ce:c7:99:
          5b:3a:de:7b:28:47:1e:92:a5:a0:9b:b5:d1:fd:9e:
          10:af:49:b1:6b:12:2d:cb:78:cb:e0:a4:c5:9d:d4:
          b8:52:60:46:38:37:bb:f8:c7:a9:1e:d7:d5:55:82:
          30:8c:80:ae:66:8d:83:25:18:2a:21:d4:66:8c:db:
          3c:c2:4c:d0:e0:15:a4:b2:d0:1b:9d:ae:9d:9e:bd:
          2b:57:f1:b6:b8:f0:ad:9b:06:90:43:7e:8c:58:5d:
    
        ################# (omitted) #########################
    
      root@jumpbox:~/kubernetes-the-hard-way# openssl req -x509 -new -sha512 -noenc \
        -key ca.key -days 3653 \
        -config ca.conf \
        -out ca.crt
      root@jumpbox:~/kubernetes-the-hard-way# ls -l ca.crt
      -rw-r--r-- 1 root root 1899 Jan  6 00:06 ca.crt
    
      # Inspect the full contents of the certificate
      root@jumpbox:~/kubernetes-the-hard-way# openssl x509 -in ca.crt -text -noout | more
      Certificate:
          Data:
              Version: 3 (0x2)
              Serial Number:
                  21:a5:b4:64:7c:a3:3d:a7:a4:3f:40:14:05:e9:25:b9:21:a9:7e:be
              Signature Algorithm: sha512WithRSAEncryption
              Issuer: C = US, ST = Washington, L = Seattle, CN = CA
              Validity
                  Not Before: Jan  5 15:06:27 2026 GMT
                  Not After : Jan  6 15:06:27 2036 GMT
              Subject: C = US, ST = Washington, L = Seattle, CN = CA
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      Public-Key: (4096 bit)
      ############################ (omitted) #################################
    • (Reference) Checking ca.crt in the kind environment

      (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane cat /etc/kubernetes/pki/ca.crt | openssl x509 -text -noout
      Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 8519439439972151590 (0x763b210860fe3126)
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: CN = kubernetes
            Validity
                Not Before: Jan  5 13:47:29 2026 GMT
                Not After : Jan  3 13:52:29 2036 GMT
            Subject: CN = kubernetes
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                        00:d4:a5:b1:57:06:bc:c7:ff:85:ba:d6:44:74:8c:
                        0a:a4:3e:3c:c7:e7:15:09:f9:31:f1:a0:8e:92:d0:
                        90:f8:19:52:98:8e:29:92:17:37:8d:9d:fe:10:fa:
                        d7:0f:1e:41:89:53:62:18:d9:b5:c4:b7:43:50:e6:
                        5c:fd:2c:87:46:a7:54:b0:ea:ea:6d:f6:d8:93:cc:
                        1f:2f:7a:18:28:6e:17:e7:e2:e8:3b:26:c8:ae:3d:
      
      ### (omitted) #####
  • Generating the admin certificate

      root@jumpbox:~/kubernetes-the-hard-way# openssl genrsa -out admin.key 4096
      root@jumpbox:~/kubernetes-the-hard-way# openssl req -new -key admin.key -sha256 \
        -config ca.conf -section admin \
        -out admin.csr
      root@jumpbox:~/kubernetes-the-hard-way# ls -l admin.csr
      -rw-r--r-- 1 root root 1830 Jan  6 00:14 admin.csr
    
      root@jumpbox:~/kubernetes-the-hard-way# openssl req -in admin.csr -text -noout | more
      Certificate Request:
          Data:
              Version: 1 (0x0)
              Subject: CN = admin, O = system:masters
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      Public-Key: (4096 bit)
                      Modulus:
                          00:93:3b:e0:7a:87:5d:80:69:b1:ad:6a:38:74:55:
                          c0:a5:d4:5f:f6:cc:9d:b7:3c:6a:dc:7c:f1:fd:cc:
                          c1:dd:00:27:5e:6c:7f:2d:77:81:02:7c:ce:55:36:
                          79:75:dc:83:bf:60:23:67:df:32:53:cc:19:15:0e:
                          04:23:7f:d7:79:da:43:79:a9:57:9d:8e:dd:96:d2:
    ### (omitted) ####
    
      root@jumpbox:~/kubernetes-the-hard-way# openssl x509 -req -days 3653 -in admin.csr \
        -copy_extensions copyall \
        -sha256 -CA ca.crt \
        -CAkey ca.key \
        -CAcreateserial \
        -out admin.crt
      Certificate request self-signature ok
      subject=CN = admin, O = system:masters
      root@jumpbox:~/kubernetes-the-hard-way# ls -l admin.crt
      openssl x509 -in admin.crt -text -noout | more
      -rw-r--r-- 1 root root 2021 Jan  6 00:16 admin.crt
      Certificate:
          Data:
              Version: 3 (0x2)
              Serial Number:
                  36:ea:c4:ee:d2:b3:c2:0c:0d:4c:4a:26:a3:56:3b:e1:a2:22:fa:c7
              Signature Algorithm: sha256WithRSAEncryption
              Issuer: C = US, ST = Washington, L = Seattle, CN = CA
              Validity
                  Not Before: Jan  5 15:16:49 2026 GMT
                  Not After : Jan  6 15:16:49 2036 GMT
              Subject: CN = admin, O = system:masters
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      Public-Key: (4096 bit)
                      Modulus:
                          00:93:3b:e0:7a:87:5d:80:69:b1:ad:6a:38:74:55:
                          c0:a5:d4:5f:f6:cc:9d:b7:3c:6a:dc:7c:f1:fd:cc:
                          c1:dd:00:27:5e:6c:7f:2d:77:81:02:7c:ce:55:36:
                          79:75:dc:83:bf:60:23:67:df:32:53:cc:19:15:0e:
                          04:23:7f:d7:79:da:43:79:a9:57:9d:8e:dd:96:d2:
                          2f:74:8a:83:41:80:01:21:f3:96:63:87:b6:27:65:
                          b0:e4:aa:5d:1d:31:a4:15:01:c5:98:66:ec:5a:9e:
                          27:78:2d:e8:2f:d1:e6:20:d6:15:d4:1e:43:2a:77:
        ### (omitted) ####
    
    • (Reference) Checking the admin certificate in the kind environment

        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane cat /etc/kubernetes/super-admin.conf | grep client-certificate-data | cut -d ':' -f2 | tr -d ' ' | base64 -d | openssl x509 -text -noout
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 1007749187623256237 (0xdfc3e8bfef900ad)
                Signature Algorithm: sha256WithRSAEncryption
                Issuer: CN = kubernetes
                Validity
                    Not Before: Jan  5 13:47:29 2026 GMT
                    Not After : Jan  5 13:52:29 2027 GMT
                Subject: O = system:masters, CN = kubernetes-super-admin
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus:
                            00:a4:12:67:9f:3d:22:5b:a0:f8:0c:05:5c:d0:11:
                            2c:cb:98:55:7e:d8:84:a9:cc:39:6d:89:c0:c2:12:
                            60:e1:32:ed:28:a4:33:2d:67:89:20:0e:f9:c1:d6:
                            bb:08:a7:9e:ec:f5:0a:de:9c:ca:ea:ed:82:da:50:
                            35:d7:92:2c:85:f0:df:2c:e3:d1:7f:ca:e0:52:32:
      
        # Using the krew rbac-tool plugin
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl rbac-tool lookup system:masters
          SUBJECT        | SUBJECT TYPE | SCOPE       | NAMESPACE | ROLE          | BINDING
        -----------------+--------------+-------------+-----------+---------------+----------------
          system:masters | Group        | ClusterRole |           | cluster-admin | cluster-admin
      
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe clusterroles cluster-admin
        Name:         cluster-admin
        Labels:       kubernetes.io/bootstrapping=rbac-defaults
        Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
        PolicyRule:
          Resources  Non-Resource URLs  Resource Names  Verbs
          ---------  -----------------  --------------  -----
          *.*        []                 []              [*]
                     [*]                []              [*]
      
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe clusterrolebindings cluster-admin
        Name:         cluster-admin
        Labels:       kubernetes.io/bootstrapping=rbac-defaults
        Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
        Role:
          Kind:  ClusterRole
          Name:  cluster-admin
        Subjects:
          Kind   Name            Namespace
          ----   ----            ---------
          Group  system:masters
      
  • Generating the remaining certificates

      # Fix a typo before generating the certificates
      root@jumpbox:~/kubernetes-the-hard-way# sed -i 's/system:system:kube-scheduler/system:kube-scheduler/' ca.conf
      root@jumpbox:~/kubernetes-the-hard-way# cat ca.conf | grep system:kube-scheduler
      CN = system:kube-scheduler
      O  = system:kube-scheduler
    
      root@jumpbox:~/kubernetes-the-hard-way# certs=(
        "node-0" "node-1"
        "kube-proxy" "kube-scheduler"
        "kube-controller-manager"
        "kube-api-server"
        "service-accounts"
      )
      root@jumpbox:~/kubernetes-the-hard-way# echo ${certs[*]}
      node-0 node-1 kube-proxy kube-scheduler kube-controller-manager kube-api-server service-accounts
    
      root@jumpbox:~/kubernetes-the-hard-way# for i in ${certs[*]}; do
        openssl genrsa -out "${i}.key" 4096
    
        openssl req -new -key "${i}.key" -sha256 \
          -config "ca.conf" -section ${i} \
          -out "${i}.csr"
    
        openssl x509 -req -days 3653 -in "${i}.csr" \
          -copy_extensions copyall \
          -sha256 -CA "ca.crt" \
          -CAkey "ca.key" \
          -CAcreateserial \
          -out "${i}.crt"
      done
      Certificate request self-signature ok
      subject=CN = system:node:node-0, O = system:nodes, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:node:node-1, O = system:nodes, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:kube-proxy, O = system:node-proxier, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:kube-scheduler, O = system:kube-scheduler, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:kube-controller-manager, O = system:kube-controller-manager, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = kubernetes, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = service-accounts
    • (Reference) Inspecting certificates in the kind cluster

        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -it myk8s-control-plane ls -al /etc/kubernetes
        total 60
        drwxr-xr-x 1 root root 4096 Jan  5 13:52 .
        drwxr-xr-x 1 root root 4096 Jan  5 13:52 ..
        -rw------- 1 root root 5643 Jan  5 13:52 admin.conf
        -rw------- 1 root root 5658 Jan  5 13:52 controller-manager.conf
        -rw------- 1 root root 2007 Jan  5 13:52 kubelet.conf
        drwxr-xr-x 1 root root 4096 Jan  5 13:52 manifests
        drwxr-xr-x 3 root root 4096 Jan  5 13:52 pki
        -rw------- 1 root root 5602 Jan  5 13:52 scheduler.conf
        -rw------- 1 root root 5663 Jan  5 13:52 super-admin.conf
        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -it myk8s-control-plane ls -al /var/lib/kubelet/pki
        total 20
        drwxr-xr-x 2 root root 4096 Jan  5 13:52 .
        drwx------ 9 root root 4096 Jan  5 13:52 ..
        -rw------- 1 root root 2839 Jan  5 13:52 kubelet-client-2026-01-05-13-52-31.pem
        lrwxrwxrwx 1 root root   59 Jan  5 13:52 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2026-01-05-13-52-31.pem
        -rw-r--r-- 1 root root 2343 Jan  5 13:52 kubelet.crt
        -rw------- 1 root root 1679 Jan  5 13:52 kubelet.key
      
        ## Check the API server certificate's Subject Alternative Names
        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 8699350173748144468 (0x78ba4ce452fda954)
                Signature Algorithm: sha256WithRSAEncryption
                Issuer: CN = kubernetes
                Validity
                    Not Before: Jan  5 13:47:29 2026 GMT
                    Not After : Jan  5 13:52:29 2027 GMT
                Subject: CN = kube-apiserver
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus:
                            00:a5:26:b2:7b:33:e3:a8:c6:01:d5:ba:26:ba:e9:
                            2b:70:58:c1:0b:e3:35:a3:96:d1:de:c5:7e:80:44:
                            0b:61:af:12:1a:dd:e5:83:ed:88:bb:be:c7:c0:6f:
                            05:71:9b:4b:82:49:12:23:a0:46:44:91:ef:68:49:
                            12:45:26:2a:07:28:38:bd:33:c0:76:61:cb:51:af:
                            18:c9:4c:96:a6:db:98:e0:8c:82:50:2f:8a:3e:ed:
                            79:f3:d7:b6:89:45:9e:d2:fb:2c:0a:b2:1f:14:fa:
                            fa:f1:29:cb:5c:2b:d2:26:81:50:e7:0f:98:57:9c:
                            20:90:89:d3:d1:7b:d7:2f:c7:a6:a3:aa:b0:9b:f8:
                            78:c4:57:73:fb:82:a8:9d:1f:c6:c6:38:67:24:49:
                            4f:0f:cb:d7:61:f6:5d:0c:89:cf:b8:01:c6:af:af:
                            51:91:12:b8:57:e0:ab:13:30:c7:a5:1f:a8:24:49:
                            85:1e:e1:8c:d1:19:f8:68:2f:be:b3:eb:37:79:e5:
                            5f:b1:85:78:9e:05:a3:dd:b2:c2:92:03:1f:e1:a3:
                            39:f8:b5:9f:23:b2:b2:1a:c4:05:3a:3e:6c:17:3f:
                            86:94:47:b6:a3:36:87:3e:59:3c:40:06:25:11:a3:
                            26:8f:02:da:cd:c7:00:d0:ca:db:71:75:41:a6:f3:
                            5f:03
                        Exponent: 65537 (0x10001)
                X509v3 extensions:
                    X509v3 Key Usage: critical
                        Digital Signature, Key Encipherment
                    X509v3 Extended Key Usage:
                        TLS Web Server Authentication
                    X509v3 Basic Constraints: critical
                        CA:FALSE
                    X509v3 Authority Key Identifier:
                        F3:A5:5A:DF:0E:70:F9:F5:ED:2C:DC:76:A8:34:22:CF:A4:3A:64:F1
                    X509v3 Subject Alternative Name:
                        DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, DNS:myk8s-control-plane, IP Address:10.96.0.1, IP Address:172.18.0.3, IP Address:127.0.0.1
            Signature Algorithm: sha256WithRSAEncryption
            Signature Value:
                5c:9e:b9:6f:e7:42:51:84:81:e0:f2:82:be:f8:d9:07:62:8b:
                51:23:0c:56:8f:f0:7a:43:e5:d7:93:b6:2a:0b:ba:98:55:9b:
                81:fd:2a:52:0a:e1:7d:7a:ec:bb:02:dd:d1:72:64:28:ba:d0:
                5a:50:4c:ca:f0:c4:3b:13:c7:9f:04:df:d5:5d:6f:9b:d7:bf:
                18:c5:b4:a3:7c:af:b5:bb:ae:ad:b3:c9:88:ca:6d:25:6c:86:
                5f:c8:d6:cb:ae:fa:2a:d1:ba:43:04:68:7f:78:78:75:9e:9a:
                54:cb:1d:00:f8:f8:91:9e:4b:2c:cb:bd:b7:15:51:1c:c5:80:
      
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe pod -n kube-system kube-apiserver-myk8s-control-plane
        Name:                 kube-apiserver-myk8s-control-plane
        Namespace:            kube-system
        Priority:             2000001000
      ##################### (omitted; certificate-related flags follow) ############################
              --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
              --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
              --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
              --etcd-servers=https://127.0.0.1:2379
              --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
              --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
              --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
              --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
              --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
              --requestheader-allowed-names=front-proxy-client
              --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      
  • Distribute the Client and Server Certificates

      root@jumpbox:~/kubernetes-the-hard-way# for host in node-0 node-1; do
        ssh root@${host} mkdir /var/lib/kubelet/
    
        scp ca.crt root@${host}:/var/lib/kubelet/
    
        scp ${host}.crt \
          root@${host}:/var/lib/kubelet/kubelet.crt
    
        scp ${host}.key \
          root@${host}:/var/lib/kubelet/kubelet.key
      done
      ca.crt                                                                                100% 1899     1.3MB/s   00:00
      node-0.crt                                                                            100% 2147     1.1MB/s   00:00
      node-0.key                                                                            100% 3272     1.9MB/s   00:00
      ca.crt                                                                                100% 1899     1.4MB/s   00:00
      node-1.crt                                                                            100% 2147     1.1MB/s   00:00
      node-1.key                                                                            100% 3268     2.0MB/s   00:00
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /var/lib/kubelet
      ssh node-1 ls -l /var/lib/kubelet
      total 12
      -rw-r--r-- 1 root root 1899 Jan  6 00:37 ca.crt
      -rw-r--r-- 1 root root 2147 Jan  6 00:37 kubelet.crt
      -rw------- 1 root root 3272 Jan  6 00:37 kubelet.key
      total 12
      -rw-r--r-- 1 root root 1899 Jan  6 00:37 ca.crt
      -rw-r--r-- 1 root root 2147 Jan  6 00:37 kubelet.crt
      -rw------- 1 root root 3268 Jan  6 00:37 kubelet.key
    
      root@jumpbox:~/kubernetes-the-hard-way# scp \
        ca.key ca.crt \
        kube-api-server.key kube-api-server.crt \
        service-accounts.key service-accounts.crt \
        root@server:~/
      ca.key                                                                                100% 3272     1.3MB/s   00:00
      ca.crt                                                                                100% 1899     1.4MB/s   00:00
      kube-api-server.key                                                                   100% 3272     2.4MB/s   00:00
      kube-api-server.crt                                                                   100% 2354     1.8MB/s   00:00
      service-accounts.key                                                                  100% 3268     2.2MB/s   00:00
      service-accounts.crt                                                                  100% 2004     1.4MB/s   00:00
      root@jumpbox:~/kubernetes-the-hard-way# ssh server ls -l /root
      total 24
      -rw-r--r-- 1 root root 1899 Jan  6 00:38 ca.crt
      -rw------- 1 root root 3272 Jan  6 00:38 ca.key
      -rw-r--r-- 1 root root 2354 Jan  6 00:38 kube-api-server.crt
      -rw------- 1 root root 3272 Jan  6 00:38 kube-api-server.key
      -rw-r--r-- 1 root root 2004 Jan  6 00:38 service-accounts.crt
      -rw------- 1 root root 3268 Jan  6 00:38 service-accounts.key

05 - Generating Kubernetes Configuration Files for Authentication

  • The kubelet Kubernetes Configuration File
    Checking the kubelet-related configuration in the kind environment

      (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe pod -n kube-system kube-apiserver-myk8s-control-plane
      Name:                 kube-apiserver-myk8s-control-plane
      Namespace:            kube-system
      Priority:             2000001000
      Priority Class Name:  system-node-critical
      Node:                 myk8s-control-plane/172.18.0.3
      Start Time:           Mon, 05 Jan 2026 22:52:38 +0900
      Labels:               component=kube-apiserver
                            tier=control-plane
      Annotations:          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.18.0.3:6443
                            kubernetes.io/config.hash: 6dadff8c64cc62e7b3846733d2478bdd
                            kubernetes.io/config.mirror: 6dadff8c64cc62e7b3846733d2478bdd
                            kubernetes.io/config.seen: 2026-01-05T13:52:38.459844636Z
                            kubernetes.io/config.source: file
      Status:               Running
      SeccompProfile:       RuntimeDefault
      IP:                   172.18.0.3
      IPs:
        IP:           172.18.0.3
      Controlled By:  Node/myk8s-control-plane
      Containers:
        kube-apiserver:
          Container ID:  containerd://872e95b24c42f80855f00ffda199192af35f4b24ca9ee16587cfa03e13c692fe
          Image:         registry.k8s.io/kube-apiserver:v1.32.8
          Image ID:      sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1
          Port:          <none>
          Host Port:     <none>
          Command:
            kube-apiserver
            --advertise-address=172.18.0.3
            --allow-privileged=true
            --authorization-mode=Node,RBAC
    
      # Generate and verify the kubeconfig files
    
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=ca.crt \
        --embed-certs=true \
        --server=https://server.kubernetes.local:6443 \
        --kubeconfig=node-0.kubeconfig && ls -l node-0.kubeconfig && cat node-0.kubeconfig
      Cluster "kubernetes-the-hard-way" set.
    
      ############################# (omitted) ########################################
    
      root@jumpbox:~/kubernetes-the-hard-way# ls -l *.kubeconfig
      -rw------- 1 root root 10161 Jan  7 23:02 node-0.kubeconfig
      -rw------- 1 root root 10157 Jan  7 23:03 node-1.kubeconfig
    
      # Check kind's configuration
    
      (⎈|kind-myk8s:default) zosys@4:~$ cat .kube/config
      apiVersion: v1
      clusters:
          - cluster:
              certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJZGpzaENHRCtNU1l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1EVXhNelEzTWpsYUZ3MHpOakF4TURNeE16VXlNamxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURVcGJGWEJyekgvNFc2MWtSMGpBcWtQanpINXhVSitUSHhvSTZTMEpENEdWS1lqaW1TRnplTm5mNFEKK3RjUEhrR0pVMklZMmJYRXQwTlE1bHo5TElkR3AxU3c2dXB0OXRpVHpCOHZlaGdvYmhmbjR1ZzdKc2l1UFNldwpYY2xnSkNjc0YvNDM5dGtrRVlETXVpSDFFVTZ6QXJ3YkJjSkVHWHk1Z3JrOHpQOG56N245U1UrSlNVOTR2S0RCCjFKb0hqS1doQnhtT3RiczJkSTByenFUcUVxRkRHWGo2b09HejdheGU4YUhBSDlVQ1IySUdFRnVyNHY3c1pTN2gKQ1AydllESEdCT1BXRS9vV20zSVJMZ0xVRnRzbVdSd0hqSTZWVmFyTlVqSXh2dzF0dFBlVDJSYUdRb0dJcExIUQpTdWFoT2NUV1VESkdnd2pCd3owSk80R2xGdnY1QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUenBWcmZEbkQ1OWUwczNIYW9OQ0xQcERwazhUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQmlEMWRkZDkwUQpaS2UvU2pNbTQwYVVtZlU2YVhFNG9DMndJT1RwMlkxNjBobGMrUXF6aUZEdktGY3k2MjVzWEpRcDRmb2lBajFtCmxkWGdLcTZTVDM1aVFqdGU0OGhYNG95bzNXQ1lYYXYvWkJDVmZoZ2pBTmlKelAwUTAzc2lTdVk2RlNxTDBHTysKZktncEVGS3luclFKdmZ6ckVmU3gzTWJqRTdYOEk5QVZmUTUxLzhFNEVyb1JQUzBVRXZGbGdiUCtwWWQveTZsVgpUTm8rMjFCN0V0OThhU2Y5V3lrQnZpMTZZeUhGckJPOUkwRGdtNlFxSVl0QXd6S1N5dmRCTitXVk12UitHSFBmCkVwY3VONUcyQ2x6dUd5aTJNU0VCRzhnRDNnL29uSjQwL3p3MHdTMU1JT1lweU5FMnhtNExhYll3c3dqOWVBZkQKQ0hkbndaOFdLMU15Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
              server: https://127.0.0.1:34991
            name: kind-myk8s
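In both outputs above the CA certificate is embedded inline: `--embed-certs=true` base64-encodes the PEM into `certificate-authority-data`, so the kubeconfig is a single self-contained file. A round-trip sketch with a dummy PEM (demo file names only, not the real ca.crt):

```shell
# Sketch: how certificate-authority-data relates to the original PEM file.
set -eu
tmp=$(mktemp -d)
# Dummy PEM standing in for ca.crt.
printf -- '-----BEGIN CERTIFICATE-----\nREVNTw==\n-----END CERTIFICATE-----\n' > "$tmp/ca.crt"
orig=$(cat "$tmp/ca.crt")
cadata=$(base64 -w0 < "$tmp/ca.crt")
cat > "$tmp/demo.kubeconfig" <<EOF
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ${cadata}
    server: https://server.kubernetes.local:6443
  name: kubernetes-the-hard-way
EOF
# Decoding the field recovers the original PEM byte for byte.
decoded=$(awk '/certificate-authority-data/ {print $2}' "$tmp/demo.kubeconfig" | base64 -d)
echo "$decoded"
rm -rf "$tmp"
```

The same `awk | base64 -d` pipeline is what the kind `client-certificate-data` inspection above used.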
    
      # (Reference) kind's system:node-related RBAC
      (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe clusterroles system:node
      Name:         system:node
      Labels:       kubernetes.io/bootstrapping=rbac-defaults
      Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
      PolicyRule:
        Resources                                       Non-Resource URLs  Resource Names  Verbs
        ---------                                       -----------------  --------------  -----
        leases.coordination.k8s.io                      []                 []              [create delete get patch update]
        csinodes.storage.k8s.io                         []                 []              [create delete get patch update]
        nodes                                           []                 []              [create get list watch patch update]
        certificatesigningrequests.certificates.k8s.io  []                 []              [create get list watch]
        events                                          []                 []              [create patch update]
        pods/eviction                                   []                 []              [create]
        serviceaccounts/token                           []                 []              [create]
        tokenreviews.authentication.k8s.io              []                 []              [create]
        localsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
        subjectaccessreviews.authorization.k8s.io       []                 []              [create]
        pods                                            []                 []              [get list watch create delete]
        configmaps                                      []                 []              [get list watch]
        secrets                                         []                 []              [get list watch]
        services                                        []                 []              [get list watch]
        runtimeclasses.node.k8s.io                      []                 []              [get list watch]
        csidrivers.storage.k8s.io                       []                 []              [get list watch]
        persistentvolumeclaims/status                   []                 []              [get patch update]
        endpoints                                       []                 []              [get]
        persistentvolumeclaims                          []                 []              [get]
        persistentvolumes                               []                 []              [get]
        volumeattachments.storage.k8s.io                []                 []              [get]
        nodes/status                                    []                 []              [patch update]
        pods/status                                     []                 []              [patch update]
    
      ##################################### (omitted) ###########################################
    
  • Generating the kube-proxy kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat kube-proxy.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL
          ###### (omitted) ######
  • Generating the kube-controller-manager kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat kube-controller-manager.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0URS0
          ###### (omitted) ######
  • Generating the kube-scheduler kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat kube-scheduler.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZURENDQXpTZ0F3SUJBZ0lVSWFXMFpIeWpQYWVrUDBBVUJla2x1U0dwZnI0d0RRWUpLb1pJaHZjTkFRRU4KQlFBd1FURUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2xkaGMyaHBibWQwYjI0eEVEQU9CZ05WQkFjTQpCMU5sWVhSMGJHVXhDekFKQmdOVkJBTU1Ba05CTUI0WERUSTJNREV3TlRFMU1EWXlOMW9YRFRNMk1ERXdOakUxCk1EWXlOMW93UVRFTE1Ba0dBMVVFQmhNQ1ZWTXhFekFSQmdOVkJBZ01DbGRoYzJocGJtZDBiMjR4RURBT0JnTl
    
          ###### (omitted) #######
  • Generating the admin kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat admin.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRV
    
          ###### (omitted) #########
  • Distribute the Kubernetes Configuration Files

      root@jumpbox:~/kubernetes-the-hard-way# ls -l *.kubeconfig
      -rw------- 1 root root  9953 Jan  7 23:12 admin.kubeconfig
      -rw------- 1 root root 10305 Jan  7 23:09 kube-controller-manager.kubeconfig
      -rw------- 1 root root 10187 Jan  7 23:08 kube-proxy.kubeconfig
      -rw------- 1 root root 10215 Jan  7 23:10 kube-scheduler.kubeconfig
      -rw------- 1 root root 10161 Jan  7 23:02 node-0.kubeconfig
      -rw------- 1 root root 10157 Jan  7 23:03 node-1.kubeconfig
    
      # Verify
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /var/lib/*/kubeconfig
      ssh node-1 ls -l /var/lib/*/kubeconfig
      -rw------- 1 root root 10161 Jan  7 23:13 /var/lib/kubelet/kubeconfig
      -rw------- 1 root root 10187 Jan  7 23:13 /var/lib/kube-proxy/kubeconfig
      -rw------- 1 root root 10157 Jan  7 23:13 /var/lib/kubelet/kubeconfig
      -rw------- 1 root root 10187 Jan  7 23:13 /var/lib/kube-proxy/kubeconfig
      ssh server ls -l /root/*.kubeconfig

06 - Generating the Data Encryption Config and Key

root@jumpbox:~/kubernetes-the-hard-way# cat configs/encryption-config.yaml
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}


root@jumpbox:~/kubernetes-the-hard-way# envsubst < configs/encryption-config.yaml > encryption-config.yaml
root@jumpbox:~/kubernetes-the-hard-way# cat encryption-config.yaml
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret:
      - identity: {}    

root@jumpbox:~/kubernetes-the-hard-way# scp encryption-config.yaml root@server:~/
encryption-config.yaml                                                                                  100%  227   183.1KB/s   00:00
root@jumpbox:~/kubernetes-the-hard-way# ssh server ls -l /root/encryption-config.yaml
-rw-r--r-- 1 root root 227 Jan  7 23:26 /root/encryption-config.yaml
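Note that `secret:` came out empty in the rendered file above, which suggests `ENCRYPTION_KEY` was not exported before `envsubst` ran; the API server would reject an empty aescbc key. A sketch of the key-generation step (32 random bytes, base64-encoded, which the aescbc provider requires):

```shell
# Sketch: generate the 32-byte AES-CBC key envsubst expects to substitute.
set -eu
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
# envsubst would now fill in the template:
#   envsubst < configs/encryption-config.yaml > encryption-config.yaml
echo "key length: ${#ENCRYPTION_KEY}"   # 32 raw bytes -> 44 base64 characters
```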

07 - Bootstrapping the etcd Cluster

## Start etcd
root@jumpbox:~/kubernetes-the-hard-way# cat units/etcd.service | grep controller
  --name controller \
  --initial-cluster controller=http://127.0.0.1:2380 \

root@jumpbox:~/kubernetes-the-hard-way# ETCD_NAME=server
cat > units/etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --initial-advertise-peer-urls http://127.0.0.1:2380 \\
  --listen-peer-urls http://127.0.0.1:2380 \\
  --listen-client-urls http://127.0.0.1:2379 \\
  --advertise-client-urls http://127.0.0.1:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ${ETCD_NAME}=http://127.0.0.1:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
root@jumpbox:~/kubernetes-the-hard-way# cat units/etcd.service | grep server
  --name server \
  --initial-cluster server=http://127.0.0.1:2380 \
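The `cat > units/etcd.service <<EOF` step above works because of two shell expansions: `${ETCD_NAME}` is expanded immediately inside an unquoted heredoc, and each `\\` collapses to the single `\` that systemd uses for line continuation. A standalone sketch (written to /tmp so it runs anywhere):

```shell
# Demonstrate the two expansions used when templating the unit file
ETCD_NAME=server
cat > /tmp/etcd.service <<EOF
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --initial-cluster ${ETCD_NAME}=http://127.0.0.1:2380
EOF

grep -- '--name server' /tmp/etcd.service   # variable was expanded
grep '\\$' /tmp/etcd.service                # continuation lines keep one "\"
```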

root@jumpbox:~/kubernetes-the-hard-way# scp \
  downloads/controller/etcd \
  downloads/client/etcdctl \
  units/etcd.service \
  root@server:~/
etcd                                                                                                    100%   24MB  47.3MB/s   00:00
etcdctl                                                                                                 100%   16MB  63.4MB/s   00:00
etcd.service 
root@jumpbox:~/kubernetes-the-hard-way# ssh root@server

root@server:~# mv etcd etcdctl /usr/local/bin/
root@server:~# mkdir -p /etc/etcd /var/lib/etcd
root@server:~# chmod 700 /var/lib/etcd
root@server:~# cp ca.crt kube-api-server.key kube-api-server.crt /etc/etcd/
root@server:~# systemctl status etcd --no-pager
● etcd.service - etcd
     Loaded: loaded (/etc/systemd/system/etcd.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-01-07 23:29:51 KST; 4s ago
       Docs: https://github.com/etcd-io/etcd
   Main PID: 2607 (etcd)
      Tasks: 8 (limit: 2297)
     Memory: 10.4M
        CPU: 97ms            
 ####### output omitted ###########
root@server:~# etcdctl member list
6702b0a34e2cfd39, started, server, http://127.0.0.1:2380, http://127.0.0.1:2379, false
root@server:~# etcdctl member list -w table
+------------------+---------+--------+-----------------------+-----------------------+------------+
|        ID        | STATUS  |  NAME  |      PEER ADDRS       |     CLIENT ADDRS      | IS LEARNER |
+------------------+---------+--------+-----------------------+-----------------------+------------+
| 6702b0a34e2cfd39 | started | server | http://127.0.0.1:2380 | http://127.0.0.1:2379 |      false |
+------------------+---------+--------+-----------------------+-----------------------+------------+
root@server:~# etcdctl endpoint status -w table
+----------------+------------------+------------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
|    ENDPOINT    |        ID        |  VERSION   | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+----------------+------------------+------------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| 127.0.0.1:2379 | 6702b0a34e2cfd39 | 3.6.0-rc.3 |           3.6.0 |   20 kB |  16 kB |                   20% |   0 B |      true |      false |         2 |          4 |                  4 |        |                          |             false |
+----------------+------------------+------------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+

08 - Bootstrapping the Kubernetes Control Plane

  • Write the configuration files and transfer them to server

      root@jumpbox:~/kubernetes-the-hard-way# cat ca.conf | grep '\[kube-api-server_alt_names' -A2
      [kube-api-server_alt_names]
      IP.0  = 127.0.0.1
      IP.1  = 10.32.0.1
    
      root@jumpbox:~/kubernetes-the-hard-way# cat units/kube-apiserver.service
      [Unit]
      Description=Kubernetes API Server
      Documentation=https://github.com/kubernetes/kubernetes
    
      [Service]
      ExecStart=/usr/local/bin/kube-apiserver \
        --allow-privileged=true \
        --apiserver-count=1 \
        --audit-log-maxage=30 \
        --audit-log-maxbackup=3 \
        --audit-log-maxsize=100 \
        --audit-log-path=/var/log/audit.log \
        --authorization-mode=Node,RBAC \
    
        ########## output omitted ################
    
      root@jumpbox:~/kubernetes-the-hard-way# cat configs/kube-apiserver-to-kubelet.yaml ; echo
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        annotations:
          rbac.authorization.kubernetes.io/autoupdate: "true"
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
        name: system:kube-apiserver-to-kubelet
      rules:
        - apiGroups:
            - ""
    
      # Check the apiserver certificate subject CN
      root@jumpbox:~/kubernetes-the-hard-way# openssl x509 -in kube-api-server.crt -text -noout
      Certificate:
          Data:
              Version: 3 (0x2)
              Serial Number:
                  36:ea:c4:ee:d2:b3:c2:0c:0d:4c:4a:26:a3:56:3b:e1:a2:22:fa:cd
              Signature Algorithm: sha256WithRSAEncryption
              Issuer: C = US, ST = Washington, L = Seattle, CN = CA
              Validity
                  Not Before: Jan  5 15:25:45 2026 GMT
                  Not After : Jan  6 15:25:45 2036 GMT
            Subject: CN = kubernetes, C = US, ST = Washington, L = Seattle
      #### output omitted #####
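The same subject/SAN inspection can be rehearsed outside the lab against a throwaway self-signed certificate; everything below is hypothetical and only mirrors the fields checked above (`-addext` and `-ext` need OpenSSL 1.1.1 or newer):

```shell
# Create a short-lived self-signed cert mimicking the kube-api-server subject
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kubernetes/C=US/ST=Washington/L=Seattle" \
  -addext "subjectAltName=IP:127.0.0.1,IP:10.32.0.1"

# Inspect only the fields the tutorial cares about
openssl x509 -in /tmp/demo.crt -noout -subject
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```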
    
      # kube-scheduler
      root@jumpbox:~/kubernetes-the-hard-way# cat units/kube-scheduler.service ; echo
      [Unit]
      Description=Kubernetes Scheduler
      Documentation=https://github.com/kubernetes/kubernetes
    
      [Service]
      ExecStart=/usr/local/bin/kube-scheduler \
        --config=/etc/kubernetes/config/kube-scheduler.yaml \
        --v=2
      Restart=on-failure
      RestartSec=5
    
      [Install]
      WantedBy=multi-user.target
      root@jumpbox:~/kubernetes-the-hard-way# cat configs/kube-scheduler.yaml ; echo
      apiVersion: kubescheduler.config.k8s.io/v1
      kind: KubeSchedulerConfiguration
      clientConnection:
        kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
      leaderElection:
        leaderElect: true
    
      root@jumpbox:~/kubernetes-the-hard-way# cat units/kube-controller-manager.service ; echo
      [Unit]
      Description=Kubernetes Controller Manager
      Documentation=https://github.com/kubernetes/kubernetes
    
      [Service]
      ExecStart=/usr/local/bin/kube-controller-manager \
        --bind-address=0.0.0.0 \
        --cluster-cidr=10.200.0.0/16 \
        --cluster-name=kubernetes \
        --cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
        --cluster-signing-key-file=/var/lib/kubernetes/ca.key \
        --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
        --root-ca-file=/var/lib/kubernetes/ca.crt \
        --service-account-private-key-file=/var/lib/kubernetes/service-accounts.key \
        --service-cluster-ip-range=10.32.0.0/24 \
        --use-service-account-credentials=true \
        --v=2
      Restart=on-failure
      RestartSec=5
    
      [Install]
      WantedBy=multi-user.target
      # Verify after the SCP transfer
      root@jumpbox:~/kubernetes-the-hard-way# ssh server ls -l /root
      total 295436
      -rw------- 1 root root     9953 Jan  7 23:14 admin.kubeconfig
      -rw-r--r-- 1 root root     1899 Jan  6 00:38 ca.crt
      -rw------- 1 root root     3272 Jan  6 00:38 ca.key
      -rw-r--r-- 1 root root      227 Jan  7 23:26 encryption-config.yaml
      -rwxr-xr-x 1 root root 93261976 Jan  7 23:38 kube-apiserver
      -rw-r--r-- 1 root root     2354 Jan  6 00:38 kube-api-server.crt
      -rw------- 1 root root     3272 Jan  6 00:38 kube-api-server.key
      -rw-r--r-- 1 root root     1442 Jan  7 23:38 kube-apiserver.service
      -rw-r--r-- 1 root root      727 Jan  7 23:38 kube-apiserver-to-kubelet.yaml
      -rwxr-xr-x 1 root root 85987480 Jan  7 23:38 kube-controller-manager
      -rw------- 1 root root    10305 Jan  7 23:14 kube-controller-manager.kubeconfig
      -rw-r--r-- 1 root root      735 Jan  7 23:38 kube-controller-manager.service
      -rwxr-xr-x 1 root root 57323672 Jan  7 23:38 kubectl
      -rwxr-xr-x 1 root root 65843352 Jan  7 23:38 kube-scheduler
      -rw------- 1 root root    10215 Jan  7 23:14 kube-scheduler.kubeconfig
      -rw-r--r-- 1 root root      281 Jan  7 23:38 kube-scheduler.service
      -rw-r--r-- 1 root root      191 Jan  7 23:38 kube-scheduler.yaml
      -rw-r--r-- 1 root root     2004 Jan  6 00:38 service-accounts.crt
      -rw------- 1 root root     3268 Jan  6 00:38 service-accounts.key
  • Provision the Kubernetes Control Plane: verify with kubectl

      root@server:~# mkdir -p /etc/kubernetes/config
      root@server:~# mv kube-apiserver \
        kube-controller-manager \
        kube-scheduler kubectl \
        /usr/local/bin/
      root@server:~# ls -l /usr/local/bin/kube-*
      -rwxr-xr-x 1 root root 93261976 Jan  7 23:38 /usr/local/bin/kube-apiserver
      -rwxr-xr-x 1 root root 85987480 Jan  7 23:38 /usr/local/bin/kube-controller-manager
      -rwxr-xr-x 1 root root 65843352 Jan  7 23:38 /usr/local/bin/kube-scheduler
    
      # Configure the Kubernetes API Server: verify
    
      root@server:~# mkdir -p /var/lib/kubernetes/
      root@server:~# mv ca.crt ca.key \
        kube-api-server.key kube-api-server.crt \
        service-accounts.key service-accounts.crt \
        encryption-config.yaml \
        /var/lib/kubernetes/
      root@server:~# ls -l /var/lib/kubernetes/
      total 28
      -rw-r--r-- 1 root root 1899 Jan  6 00:38 ca.crt
      -rw------- 1 root root 3272 Jan  6 00:38 ca.key
      -rw-r--r-- 1 root root  227 Jan  7 23:26 encryption-config.yaml
      -rw-r--r-- 1 root root 2354 Jan  6 00:38 kube-api-server.crt
      -rw------- 1 root root 3272 Jan  6 00:38 kube-api-server.key
      -rw-r--r-- 1 root root 2004 Jan  6 00:38 service-accounts.crt
      -rw------- 1 root root 3268 Jan  6 00:38 service-accounts.key
    
      # Move the files and start the services
      root@server:~# mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
      root@server:~# mv kube-controller-manager.service /etc/systemd/system/
      root@server:~# mv kube-scheduler.kubeconfig /var/lib/kubernetes/
      root@server:~# mv kube-scheduler.yaml /etc/kubernetes/config/
      root@server:~# mv kube-scheduler.service /etc/systemd/system/
      root@server:~# systemctl daemon-reload
      root@server:~# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
      Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /etc/systemd/system/kube-apiserver.service.
      Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /etc/systemd/system/kube-controller-manager.service.
      Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /etc/systemd/system/kube-scheduler.service.
      root@server:~# systemctl start  kube-apiserver kube-controller-manager kube-scheduler
    
      root@server:~# ss -tlp | grep kube
      LISTEN 0      4096               *:6443              *:*    users:(("kube-apiserver",pid=4929,fd=3))
      LISTEN 0      4096               *:10259             *:*    users:(("kube-scheduler",pid=2752,fd=3))
      LISTEN 0      4096               *:10257             *:*    users:(("kube-controller",pid=2751,fd=3))
    
      root@server:~# kubectl get clusterroles --kubeconfig admin.kubeconfig
      NAME                                                                   CREATED AT
      admin                                                                  2026-01-07T15:07:34Z
      cluster-admin                                                          2026-01-07T15:07:34Z
      edit                                                                   2026-01-07T15:07:34Z
      system:aggregate-to-admin                                              2026-01-07T15:07:34Z
      system:aggregate-to-edit                                               2026-01-07T15:07:34Z
      system:aggregate-to-view                                               2026-01-07T15:07:34Z
      system:auth-delegator                                                  2026-01-07T15:07:34Z
      system:basic-user                                                      2026-01-07T15:07:34Z
      system:certificates.k8s.io:certificatesigningrequests:nodeclient       2026-01-07T15:07:34Z
      system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2026-01-07T15:07:34Z
      system:certificates.k8s.io:kube-apiserver-client-approver              2026-01-07T15:07:34Z
  • RBAC for Kubelet Authorization

      root@server:~# cat kube-apiserver-to-kubelet.yaml
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        annotations:
          rbac.authorization.kubernetes.io/autoupdate: "true"
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
        name: system:kube-apiserver-to-kubelet
      rules:
        - apiGroups:
            - ""
          resources:
            - nodes/proxy
            - nodes/stats
            - nodes/log
            - nodes/spec
            - nodes/metrics
          verbs:
            - "*"
    
    
      root@server:~# kubectl get clusterroles system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
      NAME                               CREATED AT
      system:kube-apiserver-to-kubelet   2026-01-07T15:10:03Z
      root@server:~# kubectl get clusterrolebindings system:kube-apiserver --kubeconfig admin.kubeconfig
      NAME                    ROLE                                           AGE
      system:kube-apiserver   ClusterRole/system:kube-apiserver-to-kubelet   18s

  • Verify the control plane from the jumpbox server

      root@jumpbox:~/kubernetes-the-hard-way# curl -s -k --cacert ca.crt https://server.kubernetes.local:6443/version | jq
      {
        "major": "1",
        "minor": "32",
        "gitVersion": "v1.32.3",
        "gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
        "gitTreeState": "clean",
        "buildDate": "2025-03-11T19:52:21Z",
        "goVersion": "go1.23.6",
        "compiler": "gc",
        "platform": "linux/amd64"
      }

09 - Bootstrapping the Kubernetes Worker Nodes

  • Preparation for the worker node setup

      root@jumpbox:~/kubernetes-the-hard-way# cat configs/10-bridge.conf | jq
      {
        "cniVersion": "1.0.0",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [
            [
              {
                "subnet": "SUBNET"
              }
            ]
          ],
          "routes": [
            {
              "dst": "0.0.0.0/0"
            }
          ]
        }
      }
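The `SUBNET` placeholder in 10-bridge.conf is filled in per node before the file is shipped out; in the tutorial this is done with `sed` using each node's pod subnet from machines.txt. A self-contained sketch with the template inlined (trimmed to the ipam part, /tmp paths):

```shell
# Quoted 'EOF' keeps SUBNET literal so sed can substitute it afterwards
cat > /tmp/10-bridge.conf <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "ipam": { "type": "host-local", "ranges": [[{"subnet": "SUBNET"}]] }
}
EOF

SUBNET=10.200.0.0/24   # node-0's pod subnet from machines.txt
sed "s|SUBNET|${SUBNET}|g" /tmp/10-bridge.conf > /tmp/10-bridge.node-0.conf
grep subnet /tmp/10-bridge.node-0.conf
```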
    
      root@jumpbox:~/kubernetes-the-hard-way# cat configs/kubelet-config.yaml | yq
      {
        "kind": "KubeletConfiguration",
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "address": "0.0.0.0",
        "authentication": {
          "anonymous": {
            "enabled": false
          },
          "webhook": {
            "enabled": true
          },
      ####### output omitted #############
    
      # Verify the transferred files
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /root
      ssh node-1 ls -l /root
      total 8
      -rw-r--r-- 1 root root 265 Jan  8 00:18 10-bridge.conf
      -rw-r--r-- 1 root root 610 Jan  8 00:18 kubelet-config.yaml
      total 8
      -rw-r--r-- 1 root root 265 Jan  8 00:18 10-bridge.conf
      -rw-r--r-- 1 root root 610 Jan  8 00:18 kubelet-config.yaml
    
      root@jumpbox:~/kubernetes-the-hard-way# cat configs/99-loopback.conf ; echo
      cat configs/containerd-config.toml ; echo
      cat configs/kube-proxy-config.yaml ; echo
      {
        "cniVersion": "1.1.0",
        "name": "lo",
        "type": "loopback"
      }
      version = 2
    
      [plugins."io.containerd.grpc.v1.cri"]
        [plugins."io.containerd.grpc.v1.cri".containerd]
          snapshotter = "overlayfs"
          default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      kind: KubeProxyConfiguration
      apiVersion: kubeproxy.config.k8s.io/v1alpha1
      clientConnection:
        kubeconfig: "/var/lib/kube-proxy/kubeconfig"
      mode: "iptables"
      clusterCIDR: "10.200.0.0/16"
    
      # Transfer the files and verify
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /root
      ssh node-1 ls -l /root
      ssh node-0 ls -l /root/cni-plugins
      ssh node-1 ls -l /root/cni-plugins
      total 358584
      -rw-r--r-- 1 root root      265 Jan  8 00:18 10-bridge.conf
      -rw-r--r-- 1 root root       65 Jan  8 00:18 99-loopback.conf
      drwxr-xr-x 2 root root     4096 Jan  8 00:19 cni-plugins
      -rwxr-xr-x 1 root root 58584656 Jan  8 00:18 containerd
      -rw-r--r-- 1 root root      470 Jan  8 00:18 containerd-config.toml
    
      ######### output omitted ##########
    
  • node-0 setup

      root@jumpbox:~/kubernetes-the-hard-way# ssh root@node-0
    
      root@node-0:~# mkdir -p \
        /etc/cni/net.d \
        /opt/cni/bin \
        /var/lib/kubelet \
        /var/lib/kube-proxy \
        /var/lib/kubernetes \
        /var/run/kubernetes
    
      root@node-0:~# mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
      cat /etc/cni/net.d/10-bridge.conf
      {
        "cniVersion": "1.0.0",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [
            [{"subnet": "10.200.0.0/24"}]
          ],
          "routes": [{"dst": "0.0.0.0/0"}]
        }
      }
      root@node-0:~# lsmod | grep netfilter
      modprobe br-netfilter
      echo "br-netfilter" >> /etc/modules-load.d/modules.conf
      lsmod | grep netfilter
      br_netfilter           36864  0
      bridge                319488  1 br_netfilter
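Not shown in this capture, but commonly paired with loading br_netfilter on Kubernetes nodes: ensuring bridged pod traffic is passed to iptables and that IP forwarding is enabled. A typical sysctl fragment (file name illustrative, applied with `sysctl --system`):

```
# /etc/sysctl.d/kubernetes.conf (illustrative)
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```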
    
      root@node-0:~# mkdir -p /etc/containerd/
      mv containerd-config.toml /etc/containerd/config.toml
      mv containerd.service /etc/systemd/system/
      cat /etc/containerd/config.toml ; echo
      version = 2
    
      [plugins."io.containerd.grpc.v1.cri"]
        [plugins."io.containerd.grpc.v1.cri".containerd]
          snapshotter = "overlayfs"
          default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    
      ############### output omitted ############################
    
      # Verify the node-0 setup
      root@jumpbox:~/kubernetes-the-hard-way# ssh server "kubectl get nodes -owide --kubeconfig admin.kubeconfig"
      NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
      node-0   Ready    <none>   19s   v1.32.3   192.168.10.101   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-40-amd64   containerd://2.1.0-beta.0
  • node-1 setup (identical to node-0, so only the result is shown and the steps are omitted)

      root@jumpbox:~/kubernetes-the-hard-way# ssh server "kubectl get nodes -owide --kubeconfig admin.kubeconfig"
      NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
      node-0   Ready    <none>   2m7s   v1.32.3   192.168.10.101   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-40-amd64   containerd://2.1.0-beta.0
      node-1   Ready    <none>   7s     v1.32.3   192.168.10.102   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-40-amd64   containerd://2.1.0-beta.0

10 - Configuring kubectl for Remote Access

  • Set up the admin credentials on the jumpbox (for kubectl use)

      root@jumpbox:~/kubernetes-the-hard-way# curl -s --cacert ca.crt https://server.kubernetes.local:6443/version | jq
      {
        "major": "1",
        "minor": "32",
        "gitVersion": "v1.32.3",
        "gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
        "gitTreeState": "clean",
        "buildDate": "2025-03-11T19:52:21Z",
        "goVersion": "go1.23.6",
        "compiler": "gc",
        "platform": "linux/amd64"
      }
    
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=ca.crt \
        --embed-certs=true \
        --server=https://server.kubernetes.local:6443
      Cluster "kubernetes-the-hard-way" set.
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-credentials admin \
        --client-certificate=admin.crt \
        --client-key=admin.key
      User "admin" set.
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-context kubernetes-the-hard-way \
        --cluster=kubernetes-the-hard-way \
        --user=admin
      Context "kubernetes-the-hard-way" created.
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config use-context kubernetes-the-hard-way
      Switched to context "kubernetes-the-hard-way".
    
      root@jumpbox:~/kubernetes-the-hard-way# kubectl version
      Client Version: v1.32.3
      Kustomize Version: v5.5.0
      Server Version: v1.32.3

11 - Provisioning Pod Network Routes

  • Manually configure routes for pod-to-pod traffic between node-0 and node-1

      root@jumpbox:~/kubernetes-the-hard-way# SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
      NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
      NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
      NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
      NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
      echo $SERVER_IP $NODE_0_IP $NODE_0_SUBNET $NODE_1_IP $NODE_1_SUBNET
      192.168.10.100 192.168.10.101 10.200.0.0/24 192.168.10.102 10.200.1.0/24
    
      root@jumpbox:~/kubernetes-the-hard-way# ssh server ip -c route
      default via 10.0.2.2 dev eth0
      10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
      192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
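The capture above stops before the routes are actually added. Below is a self-contained sketch of the same parsing plus the `ip route add` commands the tutorial runs next; the machines.txt contents are reconstructed from the echoed values (the FQDN column is assumed), and the commands are only printed here, not executed:

```shell
# machines.txt format: IP  FQDN  HOSTNAME  POD_SUBNET
cat > /tmp/machines.txt <<'EOF'
192.168.10.100 server.kubernetes.local server
192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
EOF

NODE_0_IP=$(grep node-0 /tmp/machines.txt | cut -d " " -f 1)
NODE_0_SUBNET=$(grep node-0 /tmp/machines.txt | cut -d " " -f 4)
NODE_1_IP=$(grep node-1 /tmp/machines.txt | cut -d " " -f 1)
NODE_1_SUBNET=$(grep node-1 /tmp/machines.txt | cut -d " " -f 4)

# Commands that would be run on server (printed, not executed, in this sketch)
echo "ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}"
echo "ip route add ${NODE_1_SUBNET} via ${NODE_1_IP}"
```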

12 - Smoke Test

  • Data encryption

      root@jumpbox:~/kubernetes-the-hard-way# kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
      secret/kubernetes-the-hard-way created
    
      # Verify it was stored correctly
      root@jumpbox:~/kubernetes-the-hard-way# kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' ; echo
      bXlkYXRh
    
      root@jumpbox:~/kubernetes-the-hard-way# ssh root@server \
          'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
      00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
      00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
      00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
      00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
      00000040  3a 76 31 3a 6b 65 79 31  3a 96 7c 14 e2 52 45 50  |:v1:key1:.|..REP|
      00000050  f0 6a a5 66 a0 ac 73 62  d2 d4 63 e0 96 7a d5 55  |.j.f..sb..c..z.U|
      00000060  ff a5 3d 3f 47 fc 53 c7  f3 90 de c2 bb e4 ae 60  |..=?G.S........`|
      00000070  d0 0f 1e 34 a5 a0 59 11  d4 f2 8a 32 21 85 5f f5  |...4..Y....2!._.|
      00000080  1c 01 f5 58 96 dc e1 cc  e4 1a 47 c7 48 26 f6 53  |...X......G.H&.S|
      00000090  b0 12 b4 8e f5 eb 8a 01  c6 a7 7c 67 78 57 9c e0  |..........|gxW..|
      000000a0  a1 06 84 67 8c 57 c4 a0  23 1c 6a d0 a2 62 8b 78  |...g.W..#.j..b.x|
      000000b0  8c dc fe 68 60 f8 8d 38  14 90 46 97 bc ae 4d d5  |...h`..8..F...M.|
      000000c0  37 76 8f a9 fd 74 b8 85  f0 09 8d d7 0c 61 3e e3  |7v...t.......a>.|
      000000d0  04 1a a8 99 80 15 45 7d  a5 41 b7 75 54 a6 e0 dc  |......E}.A.uT...|
      000000e0  0e 57 ae e7 3b 8b bd 1b  43 25 39 2e 04 4b 90 be  |.W..;...C%9..K..|
      000000f0  ab 3d d2 0c e7 9c 97 cf  2d 3d 2f 91 b9 f3 05 f6  |.=......-=/.....|
      00000100  3f 47 93 3a a8 dd e3 54  55 15 42 8a 39 45 cb 2b  |?G.:...TU.B.9E.+|
      00000110  c3 cb 2d bf df 5f 2b c4  12 58 38 11 73 6a c6 f8  |..-.._+..X8.sj..|
      00000120  f7 97 1b bd d3 e3 95 e1  f5 ef d1 fb 5e 4b 1b ab  |............^K..|
      00000130  36 22 7c 3d d0 e8 80 b4  4d 85 20 05 9f d4 c2 10  |6"|=....M. .....|
      00000140  96 23 c0 88 e3 a1 22 f7  61 cd 70 00 86 18 5c 24  |.#....".a.p...\$|
      00000150  9a a4 14 e3 4b 39 39 57  ee 0a                    |....K99W..|
      0000015a
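Two different protections are on display here: what kubectl returned is merely base64 (an encoding anyone can reverse), while the copy in etcd carries the `k8s:enc:aescbc:v1:key1` prefix showing it is actually encrypted at rest:

```shell
# base64 is reversible by anyone -- this is why encryption at rest matters
echo 'bXlkYXRh' | base64 -d ; echo   # prints: mydata
```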
  • Deployments , Port Forwarding , Log, Exec, Service(NodePort)

      root@jumpbox:~/kubernetes-the-hard-way# kubectl get pod
      kubectl create deployment nginx --image=nginx:latest
      kubectl scale deployment nginx --replicas=2
      kubectl get pod -owide
      No resources found in default namespace.
      deployment.apps/nginx created
      deployment.apps/nginx scaled
      NAME                     READY   STATUS              RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
      nginx-54c98b4f84-dkn2w   0/1     ContainerCreating   0          0s    <none>   node-1   <none>           <none>
      nginx-54c98b4f84-kws4j   0/1     ContainerCreating   0          0s    <none>   node-0   <none>           <none>
      root@jumpbox:~/kubernetes-the-hard-way# kubectl get pod -owide
      NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
      nginx-54c98b4f84-dkn2w   1/1     Running   0          44s   10.200.1.2   node-1   <none>           <none>
      nginx-54c98b4f84-kws4j   1/1     Running   0          44s   10.200.0.2   node-0   <none>           <none>
    
      # From the server node, verify calls to the pod IPs
      root@jumpbox:~/kubernetes-the-hard-way# ssh server curl -s 10.200.1.2 | grep title
      <title>Welcome to nginx!</title>
      root@jumpbox:~/kubernetes-the-hard-way# ssh server curl -s 10.200.0.2 | grep title
      <title>Welcome to nginx!</title>
    
      root@jumpbox:~/kubernetes-the-hard-way# echo $POD_NAME
      nginx-54c98b4f84-dkn2w
      root@jumpbox:~/kubernetes-the-hard-way# kubectl port-forward $POD_NAME 8080:80 &
      [1] 3130
      root@jumpbox:~/kubernetes-the-hard-way# Forwarding from 127.0.0.1:8080 -> 80
      Forwarding from [::1]:8080 -> 80
    
  • Clean up the lab environment

      C:\Users\bom\Desktop\스터디\onpremisk8s>vagrant destroy -f
      ==> node-1: Forcing shutdown of VM...
      ==> node-1: Destroying VM and associated drives...
      ==> node-0: Forcing shutdown of VM...
      ==> node-0: Destroying VM and associated drives...
      ==> server: Forcing shutdown of VM...
      ==> server: Destroying VM and associated drives...
      ==> jumpbox: Forcing shutdown of VM...
      ==> jumpbox: Destroying VM and associated drives...
    
      C:\Users\bom\Desktop\스터디\onpremisk8s>rmdir /s /q .vagrant


Author: Gail Gazelle
Reading period: 2025.05.01 ~ 2025.06.25

"Mental-health care that does not crumble under depression, anxiety, burnout, or stress."

I picked this book up because of that very sentence on the cover. Anyone leading a busy modern working life has likely felt every one of those emotions at some point, which is why the sentence pulled me in.

Rather than suppressing or avoiding these emotions, the book shows how to accept them as they are and move in a better direction. What struck me most was the idea that negative feelings such as fear should not simply be pushed away, but acknowledged and accepted.

I, too, have often been stressed by situations at work that did not go my way, so the book's practical suggestions for minimizing stress felt realistic, and I deeply agreed with the point that you can only look after the people around you once you have looked after yourself.

After all, caring for others is possible only when I have enough energy of my own.

Reading this book made me feel the importance of work-life balance all the more keenly, and I resolved to live with moderation and in harmony, without burning myself out.


Taints + Tolerations + Node Affinity

Concept summary

  • Taints: apply a "taint" to a node so that Pods are blocked from being scheduled there unless they meet a specific condition.
  • Tolerations: let a Pod "tolerate" a given taint, allowing it to be scheduled onto the tainted node.
  • Node Affinity: based on node labels, specifies the nodes a Pod prefers, or is required, to be placed on.

Limitations

  1. Limits of Taints + Tolerations
    • For a tainted node, Pods without a matching toleration are not scheduled there, but Pods with the toleration can still land on ordinary untainted nodes.
      • ❗That is, even if you want the Pod only on a specific node, it may still be placed on any ordinary node.
      • Example: a Pod tolerating color=blue can be scheduled both onto a node tainted color=blue and onto a node with no taint at all.
  2. Limits of Node Affinity
    • Node Affinity can steer a Pod toward nodes matching a label condition,
    • but Pods without any affinity can still be scheduled onto those nodes (it is not a blocking mechanism).
    • Also, preferredDuringScheduling is a mere preference, with no enforcement.

Complete control

  • Only by combining the mechanisms can you fully guarantee that a given Pod is scheduled onto a given node only:

Mechanism            Role

Taint (node)         Tells the node to refuse Pods that do not match (NoSchedule, PreferNoSchedule, etc.)
Toleration (Pod)     Allows only Pods that satisfy the taint condition to be scheduled there
Node Affinity (Pod)  Restricts scheduling to nodes carrying a specific label

✅ Example configuration

  • Node settings:
    • node1 → Label: node-type=blue, Taint: color=blue:NoSchedule
  • Pod settings:
    • Toleration: key=color, value=blue, effect=NoSchedule
    • Node Affinity: requiredDuringSchedulingIgnoredDuringExecution → node-type=blue
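Written out as a manifest, the Pod side of the example might look like the sketch below (names and image are hypothetical; the node would first be prepared with `kubectl taint nodes node1 color=blue:NoSchedule` and `kubectl label nodes node1 node-type=blue`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod                # hypothetical name
spec:
  tolerations:                  # passes the color=blue:NoSchedule taint
    - key: color
      operator: Equal
      value: blue
      effect: NoSchedule
  affinity:
    nodeAffinity:               # hard requirement: only node-type=blue nodes
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-type
                operator: In
                values: ["blue"]
  containers:
    - name: app
      image: nginx
```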

→ With this configuration:

  • the Pod is scheduled only onto nodes labeled node-type=blue,
  • the color=blue taint rejects every other Pod that lacks the toleration,
  • so the Pod is placed only on nodes satisfying both conditions.

🔚 Conclusion

  • Taints + Tolerations are the gate that blocks or admits Pods at a node
  • Node Affinity is the selector that specifies where a Pod should be placed
  • The two are complementary; only together do they give complete scheduling control
