Lab Environment Setup

  • Deploying the lab environment
 # Download files
  /drives/c/Users/bom/Desktop/스터디/ansible 
  13/01/2026   23:05.21  curl -O https://raw.githubusercontent.com/gasida/                                                        vagrant-lab/refs/heads/main/ansible/Vagrantfile
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2458  100  2458    0     0   8081      0 --:--:-- --:--:-- --:--:--  8112
                                                                               ✓

  /drives/c/Users/bom/Desktop/스터디/ansible 
  13/01/2026   23:05.27  curl -O https://raw.githubusercontent.com/gasida/                                                        vagrant-lab/refs/heads/main/ansible/init_cfg.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1072  100  1072    0     0   3975      0 --:--:-- --:--:-- --:--:--  3985
                                                                               ✓

  /drives/c/Users/bom/Desktop/스터디/ansible 
  13/01/2026   23:05.32  curl -O https://raw.githubusercontent.com/gasida/                                                        vagrant-lab/refs/heads/main/ansible/init_cfg2.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  1074  100  1074    0     0   3781      0 --:--:-- --:--:-- --:--:--  3795
                                                                               ✓                                     
# After downloading the files, run vagrant up
PS C:\Users\bom\Desktop\스터디\ansible> vagrant up
Bringing machine 'server' up with 'virtualbox' provider...
Bringing machine 'tnode1' up with 'virtualbox' provider...
Bringing machine 'tnode2' up with 'virtualbox' provider...
Bringing machine 'tnode3' up with 'virtualbox' provider...
==> server: Preparing master VM for linked clones...
    server: This is a one time operation. Once the master VM is prepared,
    server: it will be used as a base for linked clones, making the creation
    server: of new VMs take milliseconds on a modern system.
==> server: Importing base box 'bento/ubuntu-24.04'...
==> server: Cloning VM...
==> server: Matching MAC address for NAT networking...
==> server: Checking if box 'bento/ubuntu-24.04' version '202510.26.0' is up to date...
==> server: Setting the name of the VM: server
==> server: Clearing any previously set network interfaces...
==> server: Preparing network interfaces based on configuration...
    server: Adapter 1: nat
    server: Adapter 2: hostonly
==> server: Forwarding ports...
    server: 22 (guest) => 60000 (host) (adapter 1)
==> server: Running 'pre-boot' VM customizations...
==> server: Booting VM...
==> server: Waiting for machine to boot. This may take a few minutes...
#############################(output truncated)######################################
  • Checking basic information

      root@server:~# hostnamectl
       Static hostname: server
             Icon name: computer-vm
               Chassis: vm 🖴
            Machine ID: 47e27b291d4344f995b31c7bd9d79f92
               Boot ID: 48aba56a31cf4d90b8a573895da77ea2
        Virtualization: oracle
      Operating System: Ubuntu 24.04.3 LTS
                Kernel: Linux 6.8.0-86-generic
          Architecture: x86-64
       Hardware Vendor: innotek GmbH
        Hardware Model: VirtualBox
      Firmware Version: VirtualBox
         Firmware Date: Fri 2006-12-01
          Firmware Age: 19y 1month 1w 6d
    
      #Connectivity check
      root@server:~# for i in {1..3}; do ping -c 1 tnode$i; done
      PING tnode1 (10.10.1.11) 56(84) bytes of data.
      64 bytes from tnode1 (10.10.1.11): icmp_seq=1 ttl=64 time=1.12 ms
    
      --- tnode1 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 1.116/1.116/1.116/0.000 ms
      PING tnode2 (10.10.1.12) 56(84) bytes of data.
      64 bytes from tnode2 (10.10.1.12): icmp_seq=1 ttl=64 time=2.13 ms
    
      --- tnode2 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 2.134/2.134/2.134/0.000 ms
      PING tnode3 (10.10.1.13) 56(84) bytes of data.
      64 bytes from tnode3 (10.10.1.13): icmp_seq=1 ttl=64 time=1.52 ms
    
      --- tnode3 ping statistics ---
      1 packets transmitted, 1 received, 0% packet loss, time 0ms
      rtt min/avg/max/mdev = 1.519/1.519/1.519/0.000 ms

Ansible Overview

  • What is Ansible? An IaC-based, open-source IT automation tool that lets you manage infrastructure as code.

    • Its key characteristics: it is agentless (no separate agent to install on targets), idempotent, and ships with a wide range of modules.
  • Ansible components:

    *Control node: the node where Ansible is installed; any Linux machine can act as a control node.

    *Managed node: a remote system or host that Ansible controls. Since there is no agent, it must be reachable over SSH and have Python installed.

    *Inventory: a file listing the managed nodes that the control node manages.

    *Module: when Ansible runs a task on a managed node, it connects over SSH and pushes the module suited to the job.

    *Playbook: a YAML file that lists, in order, the tasks to run on the managed nodes.
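
Putting the components together, a minimal sketch (the group name mirrors the lab below; the ping module simply verifies SSH reachability and a usable Python on the managed node):

```
# inventory — the list of managed nodes the control node knows about
[web]
tnode1
tnode2

# ping.yml — a one-task playbook; ansible.builtin.ping checks that the
# managed node answers over SSH and can run a Python module
---
- hosts: web
  tasks:
    - name: Verify connectivity to managed nodes
      ansible.builtin.ping:
```

Run it with `ansible-playbook -i inventory ping.yml` from the control node.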

Installing Ansible

  • Install Ansible on the ansible-server node

      root@server:~# apt install software-properties-common -y
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      software-properties-common is already the newest version (0.99.49.3).
      0 upgraded, 0 newly installed, 0 to remove and 89 not upgraded.
      root@server:~# add-apt-repository --yes --update ppa:ansible/ansible
      Repository: 'Types: deb
      URIs: https://ppa.launchpadcontent.net/ansible/ansible/ubuntu/
      Suites: noble
      Components: main
      '
      Description:
      Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.
    
      http://ansible.com/
    
      If you face any issues while installing Ansible PPA, file an issue here:
      https://github.com/ansible-community/ppa/issues
      More info: https://launchpad.net/~ansible/+archive/ubuntu/ansible
      Adding repository.
      Get:1 http://security.ubuntu.com/ubuntu noble-security InRelease [126 kB]
      Get:2 https://ppa.launchpadcontent.net/ansible/ansible/ubuntu noble InRelease [17.8 kB]
      Hit:3 http://us.archive.ubuntu.com/ubuntu noble InRelease
      Get:4 https://ppa.launchpadcontent.net/ansible/ansible/ubuntu noble/main amd64 Packages [772 B]
      Hit:5 http://us.archive.ubuntu.com/ubuntu noble-updates InRelease
      Get:6 https://ppa.launchpadcontent.net/ansible/ansible/ubuntu noble/main Translation-en [472 B]
      Hit:7 http://us.archive.ubuntu.com/ubuntu noble-backports InRelease
      Fetched 145 kB in 2s (69.9 kB/s)
      Reading package lists... Done
      root@server:~# apt install ansible -y
      Reading package lists... Done
      Building dependency tree... Done
      Reading state information... Done
      ##################(output truncated)################################
    
      root@server:~# which ansible
      /usr/bin/ansible
      root@server:~# mkdir my-ansible
      root@server:~# cd my-ansible/
      root@server:~/my-ansible#
  • Configuring SSH authentication for Ansible access

      root@server:~/my-ansible# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
      Generating public/private rsa key pair.
      Your identification has been saved in /root/.ssh/id_rsa
      Your public key has been saved in /root/.ssh/id_rsa.pub
      The key fingerprint is:
      SHA256:6c84qKYYcxZjFJPTMg03FVUF3/2kTXqKh77YBKp2qx8 root@server
      The key's randomart image is:
      +---[RSA 3072]----+
      |  +=o.oo..oo.    |
      |  =+o.     . . . |
      |  .+        . . +|
      | .       .     *.|
      |  +     S .   o +|
      | . o   . . . o o |
      |o o    .E   + o  |
      | *  . o.o= = .   |
      |. .o.oo++o+ +.   |
      +----[SHA256]-----+
      root@server:~/my-ansible# for i in {1..3}; do sshpass -p 'qwe123' ssh-copy-id -o StrictHostKeyChecking=no root@tnode$i; done
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i hostname; echo; done
      >> tnode1 <<
      tnode1
      >> tnode2 <<
      tnode2
      >> tnode3 <<
      tnode3
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i python3 -V; echo; done
      >> tnode1 <<
      Python 3.12.3
    
      >> tnode2 <<
      Python 3.12.3
    
      >> tnode3 <<
      Python 3.9.21
  • Connecting with VS Code over SSH

      # Read more about SSH config files: https://linux.die.net/man/5/ssh_config
      Host ansible-server
          HostName 10.10.1.10
          User root   
    


Selecting Hosts

  • Managing automation-target nodes with an inventory

    An inventory file is a plain-text file that specifies the managed nodes Ansible targets for automation.

    It can be written in a variety of formats, including INI and YAML.

  • Ways to build an inventory file

    • Host definitions: list the servers to manage by IP address or hostname to build the base list.
    • Grouping: brackets ([group]) group hosts by role, e.g. web or DB, for easier management.
    • Flexible membership: a host can belong to several groups, allowing classification by location, environment (prod/dev), and so on.
    • Nested groups: the :children suffix defines a parent group that contains existing groups as subgroups.
    • Ranges: syntax like [1:10] expresses similarly named hosts in a single line.
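The techniques above can be combined in a single INI inventory; a hypothetical sketch (all hostnames are illustrative):

```
[web]
web[1:3].example.com        # range: expands to web1, web2, web3

[db]
db1.example.com

[seoul]
web1.example.com            # a host may belong to several groups
db1.example.com

[datacenter:children]       # nested group containing web and db
web
db
```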
  • Inventory groups for the lab

      root@server:~/my-ansible# cat <<EOT > inventory
      [web]
      tnode1
      tnode2
    
      [db]
      tnode3
    
      [all:children]
      web
      db
      EOT
      root@server:~/my-ansible# ansible-inventory -i ./inventory --list | jq
      {
        "_meta": {
          "hostvars": {},
          "profile": "inventory_legacy"
        },
        "all": {
          "children": [
            "ungrouped",
            "web",
            "db"
          ]
        },
        "db": {
          "hosts": [
            "tnode3"
          ]
        },
        "web": {
          "hosts": [
            "tnode1",
            "tnode2"
          ]
        }
      }
      root@server:~/my-ansible# 

    (Note) ansible.cfg precedence order:

    1. ANSIBLE_CONFIG (environment variable, if set)
    2. ansible.cfg (in the current directory)
    3. ~/.ansible.cfg (in the home directory)
    4. /etc/ansible/ansible.cfg
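
This precedence is easy to exercise: setting ANSIBLE_CONFIG wins over every file on the list (the path below is illustrative; `ansible --version` prints a `config file = …` line showing which file was actually loaded):

```shell
# Highest precedence: an explicit config path in the environment.
export ANSIBLE_CONFIG=/root/my-ansible/ansible.cfg
# While this is set, ./ansible.cfg, ~/.ansible.cfg and
# /etc/ansible/ansible.cfg are all ignored.
echo "active config: $ANSIBLE_CONFIG"
```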

Writing Playbooks

  • Why configure the Ansible environment

    • Structure: ansible.cfg is divided into [defaults] and [privilege_escalation] sections, which determine how Ansible behaves.
    • Connection defaults: [defaults] holds core connection settings such as the inventory path, the remote user account, and whether to prompt for an SSH password.
    • Privilege escalation: [privilege_escalation] configures how root privileges are obtained automatically after connecting, e.g. via sudo.
    • Efficiency: with a config file in place, you avoid retyping options on every run and get a consistent automation environment.
  • Ansible configuration for the lab

      root@server:~/my-ansible# cat <<EOT > ansible.cfg
      [defaults]
      inventory = ./inventory
      remote_user = root
      ask_pass = false
    
      [privilege_escalation]
      become = true
      become_method = sudo
      become_user = root
      become_ask_pass = false
      EOT
  • Writing a playbook

    • Write the playbook and check its syntax

      vi my-ansible/first-playbook.yml

      ---
      - hosts: all
        tasks:
        - name: Print message
          debug:
            msg: Hello CloudNet@ Ansible Study
root@server:~/my-ansible# ansible-playbook --syntax-check first-playbook.yml 

playbook: first-playbook.yml


root@server:~/my-ansible# ansible-playbook --syntax-check first-playbook-wth-error.yml 
[ERROR]: conflicting action statements: debug, msg
Origin: /root/my-ansible/first-playbook-wth-error.yml:4:7

2 - hosts: all
3   tasks:
4     - name: Print message
        ^ column 7

- Running the playbook

```bash
root@server:~/my-ansible# ansible-playbook first-playbook.yml 

PLAY [all] **************************************************************************************

TASK [Gathering Facts] **************************************************************************
ok: [tnode2]
ok: [tnode1]
ok: [tnode3]

TASK [Print message] ****************************************************************************
ok: [tnode1] => {
    "msg": "Hello CloudNet@ Ansible Study"
}
ok: [tnode2] => {
    "msg": "Hello CloudNet@ Ansible Study"
}
ok: [tnode3] => {
    "msg": "Hello CloudNet@ Ansible Study"
}

PLAY RECAP **************************************************************************************
tnode1                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode2                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode3                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0 
```

- state : the state value that guarantees idempotency.
    - **started** : `Start service httpd, if not started`
    - **stopped** : `Stop service httpd, if started`
    - **restarted** : `Restart service httpd, in all cases`
    - **reloaded** : `Reload service httpd, in all cases`
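
These state values map directly onto ansible.builtin.service parameters; a minimal sketch (rsyslog is just an example unit present on the lab nodes; the enabled parameter is an addition not used in the lab run below):

```
---
- hosts: all
  tasks:
    - name: Ensure rsyslog is running now and at boot
      ansible.builtin.service:
        name: rsyslog
        state: started   # idempotent: reports ok, not changed, if already running
        enabled: true    # also enable the unit at boot
```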
- Running a playbook that restarts sshd

```bash
vi my-ansible/restart-service.yml
---
- hosts: all
  tasks:
    - name: Restart sshd service
      ansible.builtin.service:
        name: ssh # sshd
        state: restarted

root@server:~/my-ansible# ansible-playbook --check restart-service.yml

PLAY [all] **************************************************************************************

TASK [Gathering Facts] **************************************************************************
ok: [tnode3]
ok: [tnode1]
ok: [tnode2]

TASK [Restart sshd service] *********************************************************************
changed: [tnode1]
changed: [tnode2]
[ERROR]: Task failed: Module failed: Could not find the requested service ssh: host
Origin: /root/my-ansible/restart-service.yml:4:7

2 - hosts: all
3   tasks:
4     - name: Restart sshd service
        ^ column 7

fatal: [tnode3]: FAILED! => {"changed": false, "msg": "Could not find the requested service ssh: host"}

PLAY RECAP **************************************************************************************
tnode1                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode2                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode3                     : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0 

 #Note: on Red Hat-family systems the service is named sshd, not ssh, so this task fails there.
```

Variables

  • Variable precedence: extra vars (parameters passed at run time) > play vars > host vars > group vars
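The precedence order can be seen in a single play; a minimal sketch (demo.yml is a hypothetical file name):

```
# demo.yml
---
- hosts: all
  vars:
    user: play_level          # play var (lower precedence)
  tasks:
    - name: Show which value wins
      ansible.builtin.debug:
        msg: "user = {{ user }}"

# ansible-playbook demo.yml                -> "user = play_level"
# ansible-playbook -e user=extra demo.yml  -> "user = extra" (extra var wins)
```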

  • Group variables lab

      #Group variables are declared at the bottom of the inventory file: define them in the inventory, then use them in the playbook.
    
      root@server:~/my-ansible# cat inventory 
      [web]
      tnode1
      tnode2
    
      [db]
      tnode3
    
      [all:children]
      web
      db
    
      [all:vars]
      user=ansible
      ##
      root@server:~/my-ansible# cat create-user.yml 
      ---
    
      - hosts: all
        tasks:
        - name: Create User {{ user }}
          ansible.builtin.user:
            name: "{{ user }}"
            state: present
    
    
root@server:~/my-ansible# ansible-playbook create-user.yml

PLAY [all] **************************************************************************************

TASK [Gathering Facts] **************************************************************************
ok: [tnode2]
ok: [tnode3]
ok: [tnode1]

TASK [Create User ansible] **********************************************************************
changed: [tnode1]
changed: [tnode2]
changed: [tnode3]

PLAY RECAP **************************************************************************************
tnode1                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode2                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
tnode3                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

##Idempotency check: a second run reports ok instead of changed
TASK [Create User ansible] **********************************************************************
ok: [tnode1]
ok: [tnode2]
ok: [tnode3]
  • Host variables lab

      #As the name implies, these variables apply only to that specific host.
    
      root@server:~/my-ansible# vi inventory 
    
      [web]
      tnode1 ansible_python_interpreter=/usr/bin/python3
      tnode2 ansible_python_interpreter=/usr/bin/python3
    
      [db]
      tnode3 ansible_python_interpreter=/usr/bin/python3 user=ansible1
    
      [all:children]
      web
      db
    
      [all:vars]
      user=ansible
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i tail -n 3 /etc/passwd; echo; done
      >> tnode1 <<
      vagrant:x:1000:1000:vagrant:/home/vagrant:/bin/bash
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
    
      >> tnode2 <<
      vagrant:x:1000:1000:vagrant:/home/vagrant:/bin/bash
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
    
      >> tnode3 <<
      vboxadd:x:991:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/bash
      ansible1:x:1002:1002::/home/ansible1:/bin/bash
  • Play variables

      #Variables declared inside the playbook itself
      root@server:~/my-ansible# cat create-user2.yml 
      ---
    
      - hosts: all
        vars:
          user: ansible2
    
        tasks:
        - name: Create User {{ user }}
          ansible.builtin.user:
            name: "{{ user }}"
            state: present
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i tail -n 3 /etc/passwd; echo; done
      >> tnode1 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
    
      >> tnode2 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
    
      >> tnode3 <<
      ansible:x:1001:1001::/home/ansible:/bin/bash
      ansible1:x:1002:1002::/home/ansible1:/bin/bash
      ansible2:x:1003:1003::/home/ansible2:/bin/bash
    
      #vars can also be managed in separate files, as shown below
      root@server:~/my-ansible# tree vars
      vars
      └── users.yml
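
Such a vars file is loaded with vars_files; a sketch assuming vars/users.yml defines `user` (the file contents shown are an assumption):

```
# vars/users.yml (assumed contents)
#   ---
#   user: ansible3

---
- hosts: all
  vars_files:
    - vars/users.yml          # pull play vars in from the external file
  tasks:
    - name: Create User {{ user }}
      ansible.builtin.user:
        name: "{{ user }}"
        state: present
```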
    
  • Extra variables

      #Variables supplied directly on the command line at run time
    
      root@server:~/my-ansible# ansible-playbook -e user=ansible4 create-user3.yml
    
      root@server:~/my-ansible# for i in {1..3}; do echo ">> tnode$i <<"; ssh tnode$i tail -n 5 /etc/passwd; echo; done
      >> tnode1 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
      ansible3:x:1003:1003::/home/ansible3:/bin/sh
      ansible4:x:1004:1004::/home/ansible4:/bin/sh
    
      >> tnode2 <<
      vboxadd:x:999:1::/var/run/vboxadd:/bin/false
      ansible:x:1001:1001::/home/ansible:/bin/sh
      ansible2:x:1002:1002::/home/ansible2:/bin/sh
      ansible3:x:1003:1003::/home/ansible3:/bin/sh
      ansible4:x:1004:1004::/home/ansible4:/bin/sh
    
      >> tnode3 <<
      ansible:x:1001:1001::/home/ansible:/bin/bash
      ansible1:x:1002:1002::/home/ansible1:/bin/bash
      ansible2:x:1003:1003::/home/ansible2:/bin/bash
      ansible3:x:1004:1004::/home/ansible3:/bin/bash
      ansible4:x:1005:1005::/home/ansible4:/bin/bash
  • Task variables and the ansible.builtin.debug module

      #register: result stores the task's output so the debug task below can print it
    
      root@server:~/my-ansible# ansible-playbook -e user=test create-user.yml 
    
      PLAY [db] ***************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
    
      TASK [Create User test] *************************************************************************
      changed: [tnode3]
    
      TASK [ansible.builtin.debug] ********************************************************************
      ok: [tnode3] => {
          "result": {
              "changed": true,
              "comment": "",
              "create_home": true,
              "failed": false,
              "group": 1006,
              "home": "/home/test",
              "name": "test",
              "shell": "/bin/bash",
              "state": "present",
              "system": false,
              "uid": 1006
          }
      }
    

Facts

Facts are variables that Ansible gathers from managed nodes; the collected data includes things like:

  • Hostname

  • Kernel version

  • Network interface names

  • Operating system version

  • Number of CPUs

  • Available memory

  • Storage device sizes and free space

  • and more…

  • Using facts

      root@server:~/my-ansible# ansible-playbook facts.yml 
    
      PLAY [db] ***************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
    
      TASK [Print all facts] **************************************************************************
      ok: [tnode3] => {
          "ansible_facts": {
              "all_ipv4_addresses": [
                  "10.0.2.15",
                  "10.10.1.13"
              ],
              "all_ipv6_addresses": [
                  "fd17:625c:f037:2:a00:27ff:fe6f:16b4",
                  "fe80::a00:27ff:fe6f:16b4",
                  "fe80::a00:27ff:feba:8927"
              ],
              "ansible_local": {},
              "apparmor": {
                  "status": "disabled"
              },
              "architecture": "x86_64",
              "bios_date": "12/01/2006",
              "bios_vendor": "innotek GmbH",
              "bios_version": "VirtualBox",
              "board_asset_tag": "NA",
              "board_name": "VirtualBox",
              "board_serial": "0",
              "board_vendor": "Oracle Corporation",
              "board_version": "1.2",
              "chassis_asset_tag": "NA",
              "chassis_serial": "NA",
              "chassis_vendor": "Oracle Corporation",
              "chassis_version": "NA",
    
       ######################(output truncated)#################################################
    • Printing specific facts only

        ---
      
        - hosts: db
      
          tasks:
          - name: Print all facts
            ansible.builtin.debug:
              msg: >
                The default IPv4 address of {{ ansible_facts.hostname }}
                is {{ ansible_facts.default_ipv4.address }}
      
        root@server:~/my-ansible# ansible-playbook facts1.yml 
      
        PLAY [db] ***************************************************************************************
      
        TASK [Gathering Facts] **************************************************************************
        ok: [tnode3]
      
        TASK [Print all facts] **************************************************************************
        ok: [tnode3] => {
            "msg": "The default IPv4 address of tnode3 is 10.0.2.15\n"
        }
      
        PLAY RECAP **************************************************************************************
        tnode3                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
    • Ansible facts available as variables (legacy notation omitted)

    | Fact | ansible_facts.* notation |
    | --- | --- |
    | Hostname | ansible_facts.**hostname** |
    | Fully qualified domain name | ansible_facts.**fqdn** |
    | Default IPv4 address | ansible_facts.**default_ipv4.address** |
    | List of network interface names | ansible_facts.**interfaces** |
    | Disk partition size | ansible_facts.**device.vda.partitions.vda1.size** |
    | DNS servers | ansible_facts.**dns.nameservers** |
    | Running kernel version | ansible_facts.**kernel** |
    | Operating system distribution | ansible_facts.**distribution** |
  • Disabling fact gathering

      ---
    
      - hosts: db
        gather_facts: no
    
        tasks:
        - name: Print message
          debug:
            msg: Hello Ansible World
    
      root@server:~/my-ansible# ansible-playbook facts3.yml
    
      PLAY [db] ***************************************************************************************
      ############No fact gathering#######################
      TASK [Print message] ****************************************************************************
      ok: [tnode3] => {
          "msg": "Hello Ansible World"
      }
    
      PLAY RECAP **************************************************************************************
      tnode3                     : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
  • Fact caching

      #Add the settings below to ansible.cfg; for persistence, facts can be cached to a file or a database
      [defaults]
      inventory = ./inventory
      remote_user = root
      ask_pass = false
      gathering = smart
      fact_caching = jsonfile
      fact_caching_connection = myfacts
    
      [privilege_escalation]
      become = true
      become_method = sudo
      become_user = root
      become_ask_pass = false
    
      #- DEFAULT_GATHERING (three gathering policies): implicit (default), explicit, smart
      #- fact_caching_connection : connection definition or cache path
  • Challenge task

      ---
      - hosts: all
        gather_facts: yes
    
        tasks:
          - name: Print kernel and distribution
            ansible.builtin.debug:
              msg:
                - "Kernel: {{ ansible_facts.kernel }}"
                - "Distribution: {{ ansible_facts.distribution }}"
    
      TASK [Print kernel and distribution] ************************************************************
      ok: [tnode1] => {
          "msg": [
              "Kernel: 6.8.0-86-generic",
              "Distribution: Ubuntu"
          ]
      }
      ok: [tnode2] => {
          "msg": [
              "Kernel: 6.8.0-86-generic",
              "Distribution: Ubuntu"
          ]
      }
      ok: [tnode3] => {
          "msg": [
              "Kernel: 5.14.0-570.52.1.el9_6.x86_64",
              "Distribution: Rocky"
          ]
      }

Loops

  • Simple loops

      ---
      - hosts: all
        tasks:
        - name: Check sshd and rsyslog state
          ansible.builtin.service:
            name: "{{ item }}"
            state: started
          loop:
            - vboxadd-service  # ssh
            - rsyslog
    
      root@server:~/my-ansible# ansible-playbook loop.yml 
    
      PLAY [all] **************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
      ok: [tnode2]
      ok: [tnode1]
    
      TASK [Check sshd and rsyslog state] *************************************************************
      ok: [tnode1] => (item=vboxadd-service)
      ok: [tnode2] => (item=vboxadd-service)
      ok: [tnode3] => (item=vboxadd-service)
      ok: [tnode1] => (item=rsyslog)
      ok: [tnode2] => (item=rsyslog)
      ok: [tnode3] => (item=rsyslog)
    
      PLAY RECAP **************************************************************************************
      tnode1                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode2                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode3                     : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
  • Loops over a list of dictionaries

    The ansible.builtin.file module is also among the most heavily used modules; it manages files and file attributes.

      ---
      - hosts: all
    
        tasks:
          - name: Create files
            ansible.builtin.file:
              path: "{{ item['log-path'] }}"
              mode: "{{ item['log-mode'] }}"
              state: touch
            loop:
              - log-path: /var/log/test1.log
                log-mode: '0644'
              - log-path: /var/log/test2.log
                log-mode: '0600'
    
      root@server:~/my-ansible# ansible-playbook make-file.yml 
    
      PLAY [all] **************************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [tnode3]
      ok: [tnode2]
      ok: [tnode1]
    
      TASK [Create files] *****************************************************************************
      changed: [tnode1] => (item={'log-path': '/var/log/test1.log', 'log-mode': '0644'})
      changed: [tnode2] => (item={'log-path': '/var/log/test1.log', 'log-mode': '0644'})
      changed: [tnode3] => (item={'log-path': '/var/log/test1.log', 'log-mode': '0644'})
      changed: [tnode1] => (item={'log-path': '/var/log/test2.log', 'log-mode': '0600'})
      changed: [tnode2] => (item={'log-path': '/var/log/test2.log', 'log-mode': '0600'})
      changed: [tnode3] => (item={'log-path': '/var/log/test2.log', 'log-mode': '0600'})
    
      PLAY RECAP **************************************************************************************
      tnode1                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode2                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
      tnode3                     : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
  • Loops with a register variable

    Use a register variable to capture a task's output.

      ---
      - hosts: localhost
        tasks:
          - name: Loop echo test
            ansible.builtin.shell: "echo 'I can speak {{ item }}'"
            loop:
              - Korean
              - English
            register: result
    
          - name: Show result
            ansible.builtin.debug:
              var: result
    
      root@server:~/my-ansible# ansible-playbook loop_register.yml 
    
      PLAY [localhost] ********************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [localhost]
    
      TASK [Loop echo test] ***************************************************************************
      changed: [localhost] => (item=Korean)
      changed: [localhost] => (item=English)
    
      TASK [Show result] ******************************************************************************
      ok: [localhost] => {
          "result": {
              "changed": true,
              "msg": "All items completed",
              "results": [
                  {
                      "ansible_loop_var": "item",
                      "changed": true,
                      "cmd": "echo 'I can speak Korean'",
                      "delta": "0:00:00.005160",
                      "end": "2026-01-14 23:37:54.886917",
                      "failed": false,
                      "invocation": {
                          "module_args": {
                              "_raw_params": "echo 'I can speak Korean'",
      ############################(output truncated)#########################################
                      ]
                  }
              ],
              "skipped": false
          }
      }
    
      PLAY RECAP **************************************************************************************
      localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0  
      ---
      - hosts: localhost
        tasks:
          - name: Loop echo test
            ansible.builtin.shell: "echo 'I can speak {{ item }}'"
            loop:
              - Korean
              - English
            register: result
    
          - name: Show result
            ansible.builtin.debug:
              msg: "Stdout: {{ item.stdout }}"
            loop: "{{ result.results }}"
    
      TASK [Show result] ******************************************************************************
      ok: [localhost] => (item={'changed': True, 'stdout': 'I can speak Korean', 'stderr': '', 'rc': 0, 'cmd': "echo 'I can speak Korean'", 'start': '2026-01-14 23:40:03.964047', 'end': '2026-01-14 23:40:03.969023', 'delta': '0:00:00.004976', 'msg': '', 'invocation': {'module_args': {'_raw_params': "echo 'I can speak Korean'", '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'cmd': None, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['I can speak Korean'], 'stderr_lines': [], 'failed': False, 'item': 'Korean', 'ansible_loop_var': 'item'}) => {
          "msg": "Stdout: I can speak Korean"
      }
      ok: [localhost] => (item={'changed': True, 'stdout': 'I can speak English', 'stderr': '', 'rc': 0, 'cmd': "echo 'I can speak English'", 'start': '2026-01-14 23:40:04.174039', 'end': '2026-01-14 23:40:04.179185', 'delta': '0:00:00.005146', 'msg': '', 'invocation': {'module_args': {'_raw_params': "echo 'I can speak English'", '_uses_shell': True, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'cmd': None, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['I can speak English'], 'stderr_lines': [], 'failed': False, 'item': 'English', 'ansible_loop_var': 'item'}) => {
          "msg": "Stdout: I can speak English"
      }
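
      The registered per-item dictionaries make the output above very verbose. Ansible's standard `loop_control` option can trim what gets printed per item; a sketch of the same task with a `label` added:

      ```yaml
      - name: Loop echo test
        ansible.builtin.shell: "echo 'I can speak {{ item }}'"
        loop:
          - Korean
          - English
        loop_control:
          label: "{{ item }}"   # print only the item, not the whole result dict
        register: result
      ```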
    • rc: the command's return code ('return code')

      Return code  Meaning
      0            Success
      1            General error
      2            Misuse / invalid argument
      126          Command found but not executable (no permission)
      127          Command not found
      130          Terminated by Ctrl+C (SIGINT)
      137          Killed (SIGKILL)
      139          Segmentation fault
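
      A few of these codes are easy to reproduce directly in a shell via `$?`; a minimal sketch (the temp file is created on the fly):

      ```shell
      #!/usr/bin/env bash
      # Observe some of the return codes from the table above.

      true
      echo "true            -> $?"   # 0: success

      no_such_command_xyz 2>/dev/null
      echo "missing command -> $?"   # 127: command not found

      tmp=$(mktemp)                  # a regular file without the execute bit
      "$tmp" 2>/dev/null
      echo "no exec bit     -> $?"   # 126: found but not executable
      rm -f "$tmp"
      ```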

Conditionals

  • Conditional task syntax

    The when keyword runs a task only when the given condition evaluates to true.

      ---
      - hosts: localhost
        vars:
          run_my_task: true
    
        tasks:
        - name: echo message
          ansible.builtin.shell: "echo test"
          when: run_my_task
          register: result
    
        - name: Show result
          ansible.builtin.debug:
            var: result
    
       root@server:~/my-ansible# ansible-playbook when_task.yml 
    
      PLAY [localhost] ********************************************************************************
    
      TASK [Gathering Facts] **************************************************************************
      ok: [localhost]
    
      TASK [echo message] *****************************************************************************
      changed: [localhost]
    
      TASK [Show result] ******************************************************************************
      ok: [localhost] => {
          "result": {
              "changed": true,
              "cmd": "echo test",
              "delta": "0:00:00.004151",
              "end": "2026-01-14 23:49:23.953038",
              "failed": false,
              "msg": "",
              "rc": 0,
              "start": "2026-01-14 23:49:23.948887",
              "stderr": "",
              "stderr_lines": [],
              "stdout": "test",
              "stdout_lines": [
                  "test"
              ]
          }
      }
    
      PLAY RECAP **************************************************************************************
      localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
      # With run_my_task: false, the echo message task is skipped:
    
      TASK [echo message] *****************************************************************************
      skipping: [localhost]
    
  • Conditional operators

    Besides boolean variables (true/false), the when clause also accepts comparison and test operators.

    Example expression                                      Meaning
    ansible_facts['machine'] == "x86_64"                    true if the machine fact equals "x86_64"
    max_memory == 512                                       true if max_memory equals 512
    min_memory < 128                                        true if min_memory is less than 128
    min_memory > 256                                        true if min_memory is greater than 256
    min_memory <= 256                                       true if min_memory is less than or equal to 256
    min_memory >= 512                                       true if min_memory is greater than or equal to 512
    min_memory != 512                                       true if min_memory is not equal to 512
    min_memory is defined                                   true if the variable min_memory exists
    min_memory is not defined                               true if the variable min_memory does not exist
    memory_available                                        true if the value is truthy (1, True, or yes)
    not memory_available                                    true if the value is falsy (0, False, or no)
    ansible_facts['distribution'] in supported_distros      true if the distribution value appears in supported_distros
      ---
      - hosts: all
        vars:
          supported_distros:
            - Ubuntu
            - CentOS
    
        tasks:
          - name: Print supported os
            ansible.builtin.debug:
              msg: "This {{ ansible_facts['distribution'] }} need to use apt"
            when: ansible_facts['distribution'] in supported_distros
    
    
TASK [print supported os] ***********************************************************************
ok: [tnode1] => {
    "msg": "This Ubuntu need to use apt"
}
ok: [tnode2] => {
    "msg": "This Ubuntu need to use apt"
}
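
    The `is defined` test from the table above is the usual guard against undefined-variable errors; a minimal sketch (the variable name is illustrative):

    ```yaml
    - name: Print max memory when the variable exists
      ansible.builtin.debug:
        msg: "max_memory is {{ max_memory }}"
      when: max_memory is defined   # skipped instead of failing when unset
    ```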
  • Multiple conditions

    when can combine several conditions with and/or, not just a single one.

      ---
      - hosts: all
    
        tasks:
          - name: Print os type
            ansible.builtin.debug:
              msg: >-
                   OS Type: {{ ansible_facts['distribution'] }}
                   OS Version: {{ ansible_facts['distribution_version'] }}
            when: > 
                ( ansible_facts['distribution'] == "Rocky" and
                  ansible_facts['distribution_version'] == "9.6" )
                or
                ( ansible_facts['distribution'] == "Ubuntu" and
                  ansible_facts['distribution_version'] == "24.04" )
    
      TASK [Print os type] ****************************************************************************
      ok: [tnode1] => {
          "msg": "OS Type: Ubuntu OS Version: 24.04"
      }
      ok: [tnode2] => {
          "msg": "OS Type: Ubuntu OS Version: 24.04"
      }
      ok: [tnode3] => {
          "msg": "OS Type: Rocky OS Version: 9.6"
      }
      root@server:~/my-ansible# ansible tnode1 -m ansible.builtin.setup | grep -iE 'os_family|ansible_distribution|fqdn'
              "ansible_distribution": "Ubuntu",
              "ansible_distribution_file_parsed": true,
              "ansible_distribution_file_path": "/etc/os-release",
              "ansible_distribution_file_variety": "Debian",
              "ansible_distribution_major_version": "24",
              "ansible_distribution_release": "noble",
              "ansible_distribution_version": "24.04",
              "ansible_fqdn": "tnode1",
              "ansible_os_family": "Debian",

Handlers and task failure handling

  • Ansible handlers

    • A handler runs only when a task explicitly triggers it with the notify keyword.


      ---
      - hosts: tnode2
        tasks:
          - name: restart rsyslog
            ansible.builtin.service:
              name: "rsyslog"
              state: restarted
            notify:
              - print msg
    
        handlers:
          - name: print msg
            ansible.builtin.debug:
              msg: "rsyslog is restarted"
RUNNING HANDLER [print msg] *********************************************************************
ok: [tnode2] => {
    "msg": "rsyslog is restarted"
}
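
Because a handler fires only when the notifying task reports "changed", the typical use is pairing a config deployment with a service restart. A sketch of that pattern (the config file name is assumed):

```yaml
---
- hosts: tnode2
  tasks:
    - name: deploy rsyslog config
      ansible.builtin.copy:
        src: rsyslog.conf            # assumed local file
        dest: /etc/rsyslog.conf
      notify:
        - restart rsyslog            # runs only if the copy reports "changed"

  handlers:
    - name: restart rsyslog
      ansible.builtin.service:
        name: rsyslog
        state: restarted
```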

  • Ignoring task failures

    • Ansible evaluates each task's return code during a play to decide whether the task succeeded. By default, when a task fails, Ansible skips all remaining tasks for that host.

    • The play can still continue past a failed task, however, using the ignore_errors keyword.

        # Without ignore_errors, the Print msg task below never runs.
        ---
        - hosts: tnode1
      
          tasks:
            - name: Install apache3
              ansible.builtin.apt:
                name: apache3
                state: latest
      
            - name: Print msg
              ansible.builtin.debug:
                msg: "Before task is ignored"
      
      
    root@server:~/my-ansible# ansible-playbook apache.yml 

    PLAY [tnode1] ***********************************************************************************

    TASK [Gathering Facts] **************************************************************************
    ok: [tnode1]

    TASK [Install apache3] **************************************************************************
    [ERROR]: Task failed: Module failed: No package matching 'apache3' is available
    Origin: /root/my-ansible/apache.yml:5:7

    3
    4   tasks:
    5     - name: Install apache3
            ^ column 7

    fatal: [tnode1]: FAILED! => {"changed": false, "msg": "No package matching 'apache3' is available"}

    PLAY RECAP **************************************************************************************
    tnode1                     : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

    ```bash
    # With the ignore_errors: yes keyword, Print msg runs even when the install fails.
    ---
    - hosts: tnode1

      tasks:
        - name: Install apache3
          ansible.builtin.apt:
            name: apache3
            state: latest
          ignore_errors: yes

        - name: Print msg
          ansible.builtin.debug:
            msg: "Before task is ignored"

    root@server:~/my-ansible# ansible-playbook apache.yml 

    PLAY [tnode1] ***********************************************************************************

    TASK [Gathering Facts] **************************************************************************
    ok: [tnode1]

    TASK [Install apache3] **************************************************************************
    [ERROR]: Task failed: Module failed: No package matching 'apache3' is available
    Origin: /root/my-ansible/apache.yml:5:7

    3
    4   tasks:
    5     - name: Install apache3
            ^ column 7

    fatal: [tnode1]: FAILED! => {"changed": false, "msg": "No package matching 'apache3' is available"}
    ...ignoring

    TASK [Print msg] ********************************************************************************
    ok: [tnode1] => {
        "msg": "Before task is ignored"
    }
    ```
  • Specifying failure conditions

    • Command-style modules such as shell make idempotency hard to guarantee, so prefer purpose-built modules over shell scripts; when a command is unavoidable, failed_when lets you define failure from the command's output instead of its return code.
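
    A sketch of such a custom failure condition (the script path is hypothetical): failed_when inspects the registered output, and changed_when: false keeps the report idempotent:

    ```yaml
    - name: Run health check
      ansible.builtin.shell: /usr/local/bin/check.sh   # hypothetical script
      register: result
      failed_when: "'CRITICAL' in result.stdout"       # fail on output, not rc
      changed_when: false                              # always report "ok"
    ```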
  • Ansible blocks and error handling

    • block (task grouping): bundles the main tasks into one logical unit

    • rescue (error recovery): runs only when a task inside the block fails

    • always (guaranteed run): runs last, regardless of success or failure


      ---
      - hosts: tnode2
        vars:
          logdir: /var/log/daily_log
          logfile: todays.log
    
        tasks:
          - name: Configure Log Env
            block:
              - name: Find Directory
                ansible.builtin.find:
                  paths: "{{ logdir }}"
                register: result
                failed_when: "'Not all paths' in result.msg"
    
            rescue:
              - name: Make Directory when Not found Directory
                ansible.builtin.file:
                  path: "{{ logdir }}"
                  state: directory
                  mode: '0755'
    
            always:
              - name: Create File
                ansible.builtin.file:
                  path: "{{ logdir }}/{{ logfile }}"
                  state: touch
                  mode: '0644'

      # First run: Find fails, so rescue and always run
      [ERROR]: Task failed: Action failed: Not all paths examined, check warnings for details
      Origin: /root/my-ansible/block.yml:10:11

      8 - name: Configure Log Env
      9 block:
      10 - name: Find Directory

             ^ column 11

      fatal: [tnode2]: FAILED! => {"changed": false, "examined": 0, "failed_when_result": true, "files": [], "matched": 0, "msg": "Not all paths examined, check warnings for details", "skipped_paths": {"/var/log/daily_log": "'/var/log/daily_log' is not a directory"}}

      # Second run
      root@server:~/my-ansible# ansible-playbook block.yml

      PLAY [tnode2] ***

      TASK [Gathering Facts] **
      ok: [tnode2]

      TASK [Find Directory] ***
      ok: [tnode2]

      TASK [Create File] **
      changed: [tnode2]

      PLAY RECAP **
      tnode2 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
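
      The block/rescue/always flow above maps loosely onto plain shell error handling. A minimal bash sketch of the same logic, using a /tmp path instead of /var/log:

      ```shell
      #!/usr/bin/env bash
      # block:  try to find the directory
      # rescue: create it when find fails
      # always: touch the log file regardless
      logdir=/tmp/daily_log_demo
      logfile=todays.log

      if ! find "$logdir" -maxdepth 0 >/dev/null 2>&1; then
        mkdir -p "$logdir"          # rescue
      fi
      touch "$logdir/$logfile"      # always
      ls "$logdir/$logfile"
      ```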

      
      

Role structure and usage

  • Reuse and modularity: splits playbook content into functional units that can be managed like parts and reused at any time

  • Easier management and collaboration: simplifies variable handling and large-project management, letting multiple developers work in parallel

  • Ecosystem: well-made roles can be shared with, or pulled from, users worldwide via Ansible Galaxy

  • Creating a role: ansible-galaxy role init <name> generates the standard skeleton

      root@server:~/my-ansible# ansible-galaxy role -h
      usage: ansible-galaxy role [-h] ROLE_ACTION ...
    
      positional arguments:
        ROLE_ACTION
          init       Initialize new role with the base structure of a role.
          remove     Delete roles from roles_path.
          delete     Removes the role from Galaxy. It does not remove or alter the actual GitHub
                     repository.
          list       Show the name and version of each role installed in the roles_path.
          search     Search the Galaxy database by tags, platforms, author and multiple keywords.
          import     Import a role into a galaxy server
          setup      Manage the integration between Galaxy and the given source.
          info       View more details about a specific role.
          install    Install role(s) from file(s), URL(s) or Ansible Galaxy
    
      options:
        -h, --help   show this help message and exit
    
       root@server:~/my-ansible# tree ./my-role/
      ./my-role/
      ├── defaults
      │   └── main.yml
      ├── files
      ├── handlers
      │   └── main.yml
      ├── meta
      │   └── main.yml
      ├── README.md
      ├── tasks
      │   └── main.yml
      ├── templates
      ├── tests
      │   ├── inventory
      │   └── test.yml
      └── vars
          └── main.yml
    
      9 directories, 8 files
  • Developing the playbook

      # Write the main tasks
      ~/my-ansible/my-role/tasks/main.yml
    
      ---
      # tasks file for my-role
    
      - name: install service {{ service_title }}
        ansible.builtin.apt:
          name: "{{ item }}"
          state: latest
        loop: "{{ httpd_packages }}"
        when: ansible_facts.distribution in supported_distros
    
      - name: copy conf file
        ansible.builtin.copy:
          src: "{{ src_file_path }}"
          dest: "{{ dest_file_path }}"
        notify: 
          - restart service
      # Create index.html
      ~/my-ansible/my-role/files/index.html
      root@server:~/my-ansible/my-role# echo "Study Let's go" > files/index.html
    
      # Write the handler
      ~/my-ansible/my-role/handlers/main.yml
      ---
      # handlers file for my-role
    
      - name: restart service
        ansible.builtin.service:
          name: "{{ service_name }}"
          state: restarted
    
      # defaults (variables intended to be overridden)
      ~/my-ansible/my-role/defaults/main.yml
    
      root@server:~/my-ansible/my-role# echo 'service_title: "Apache Web Server"' >> defaults/main.yml
    
      # vars (fixed, higher-precedence variables)
      ~/my-ansible/my-role/vars/main.yml
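
    The contents of vars/main.yml are not shown above. The tasks and handler reference these variables, so a plausible set of values, inferred from the later run output (apache2 packages on Ubuntu) rather than taken from the original, would be:

    ```yaml
    # Assumed values, consistent with the role's tasks and the run output
    service_name: apache2
    httpd_packages:
      - apache2
      - apache2-doc
    supported_distros:
      - Ubuntu
    src_file_path: files/index.html           # assumed source path
    dest_file_path: /var/www/html/index.html  # assumed destination
    ```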
    
  • Adding the role to a playbook

      ~/my-ansible/role-example.yml
    
      ---
      - hosts: tnode1
    
        tasks:
          - name: Print start play
            ansible.builtin.debug:
              msg: "Let's start role play"
    
          - name: Install Service by role
            ansible.builtin.import_role:
              name: my-role
    
    
## Run
root@server:~/my-ansible# ansible-playbook role-example.yml 

PLAY [tnode1] ***********************************************************************************

TASK [Gathering Facts] **************************************************************************
ok: [tnode1]

TASK [Print start play] *************************************************************************
ok: [tnode1] => {
    "msg": "Let's start role play"
}

TASK [my-role : install service Apache Web Server] **********************************************
changed: [tnode1] => (item=apache2)
changed: [tnode1] => (item=apache2-doc)

TASK [my-role : copy conf file] *****************************************************************
changed: [tnode1]

RUNNING HANDLER [my-role : restart service] *****************************************************
changed: [tnode1]

PLAY RECAP **************************************************************************************
tnode1                     : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0


Ansible Galaxy

  • Official sharing hub: a repository where users worldwide freely share and discover roles and collections
  • Fast setup: the ansible-galaxy command pulls down proven code instantly, dramatically cutting automation build time
  • Standardized layout: the init subcommand generates the directory structure Ansible recommends, easing maintenance
  • Community vetting: ratings and download counts make it easy to pick reliable, high-quality automation code
  • Version and dependency management: specific versions can be pinned at install time, keeping projects consistent and stable
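
  Version pinning is usually done through a requirements file; a sketch (the role name and version are illustrative examples, not from the original):

  ```yaml
  # requirements.yml -- install with: ansible-galaxy role install -r requirements.yml
  roles:
    - name: geerlingguy.apache   # example community role
      version: "4.0.0"           # pin a specific version
  ```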


1. Bootstrap Kubernetes the hard way

00. Install Kind K8s

  • Done on WSL2, using an already-installed kind for the lab.

      (⎈|N/A:N/A) zosys@4:~$ kind version
      kind v0.30.0 go1.24.6 linux/amd64
      (⎈|N/A:N/A) zosys@4:~$ helm version
      version.BuildInfo{Version:"v3.19.0", GitCommit:"3d8990f0836691f0229297773f3524598f46bda6", GitTreeState:"clean", GoVersion:"go1.24.7"}
    
  • Deploy the kind k8s cluster for the week-1 lab (WSL2)

      (⎈|N/A:N/A) zosys@4:~$ kind create cluster --name myk8s --image kindest/node:v1.32.8 --config - <<EOF
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
        extraPortMappings:
        - containerPort: 30000
          hostPort: 30000
        - containerPort: 30001
          hostPort: 30001
      - role: worker
      EOF
      Creating cluster "myk8s" ...
       ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
       ✓ Preparing nodes 📦 📦
       ✓ Writing configuration 📜
       ✓ Starting control-plane 🕹️
       ✓ Installing CNI 🔌
       ✓ Installing StorageClass 💾
       ✓ Joining worker nodes 🚜
      Set kubectl context to "kind-myk8s"
      You can now use your cluster with:
    
      (⎈|kind-myk8s:N/A) zosys@4:~$ kind get nodes --name myk8s
      myk8s-control-plane
      myk8s-worker
    
      (⎈|kind-myk8s:default) zosys@4:~$ kubectl get node -o wide
      NAME                  STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
      myk8s-control-plane   Ready    control-plane   2m30s   v1.32.8   172.18.0.3    <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   containerd://2.1.3
      myk8s-worker          Ready    <none>          2m14s   v1.32.8   172.18.0.2    <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   containerd://2.1.3
      (⎈|kind-myk8s:default) zosys@4:~$ docker exec -it myk8s-control-plane ss -tnlp
      State   Recv-Q  Send-Q   Local Address:Port    Peer Address:Port Process
      LISTEN  0       4096         127.0.0.1:45859        0.0.0.0:*     users:(("containerd",pid=106,fd=11))
      LISTEN  0       4096        172.18.0.3:2379         0.0.0.0:*     users:(("etcd",pid=645,fd=9))
      LISTEN  0       4096        172.18.0.3:2380         0.0.0.0:*     users:(("etcd",pid=645,fd=7))
      LISTEN  0       4096        127.0.0.11:46007        0.0.0.0:*
      LISTEN  0       4096         127.0.0.1:10248        0.0.0.0:*     users:(("kubelet",pid=706,fd=20))
      LISTEN  0       4096         127.0.0.1:10249        0.0.0.0:*     users:(("kube-proxy",pid=899,fd=17))
      LISTEN  0       4096         127.0.0.1:10259        0.0.0.0:*     users:(("kube-scheduler",pid=514,fd=3))
      LISTEN  0       4096         127.0.0.1:10257        0.0.0.0:*     users:(("kube-controller",pid=554,fd=3))
      LISTEN  0       4096         127.0.0.1:2379         0.0.0.0:*     users:(("etcd",pid=645,fd=8))
      LISTEN  0       4096         127.0.0.1:2381         0.0.0.0:*     users:(("etcd",pid=645,fd=16))
      LISTEN  0       4096                 *:6443               *:*     users:(("kube-apiserver",pid=566,fd=3))
      LISTEN  0       4096                 *:10256              *:*     users:(("kube-proxy",pid=899,fd=16))
      LISTEN  0       4096                 *:10250              *:*     users:(("kubelet",pid=706,fd=22))
    

01. Pre requisites

  • The Vagrant lab is deployed on a different PC because of resource constraints.

    
      PS C:\Users\bom\Desktop\스터디\onpremisk8s> dir
    
          디렉터리: C:\Users\bom\Desktop\스터디\onpremisk8s
    
      Mode                 LastWriteTime         Length Name
      ----                 -------------         ------ ----
      -a----      2026-01-05  오후 11:09           1172 init_cfg.sh
      -a----      2026-01-05  오후 11:08           3234 Vagrantfile
    
      PS C:\Users\bom\Desktop\스터디\onpremisk8s> vagrant.exe up
      Bringing machine 'jumpbox' up with 'virtualbox' provider...
      Bringing machine 'server' up with 'virtualbox' provider...
      Bringing machine 'node-0' up with 'virtualbox' provider...
      Bringing machine 'node-1' up with 'virtualbox' provider...
      ==> jumpbox: Box 'bento/debian-12' could not be found. Attempting to find and install...
          jumpbox: Box Provider: virtualbox
          jumpbox: Box Version: 202510.26.0
      ==> jumpbox: Loading metadata for box 'bento/debian-12'
          jumpbox: URL: https://vagrantcloud.com/api/v2/vagrant/bento/debian-12
      ==> jumpbox: Adding box 'bento/debian-12' (v202510.26.0) for provider: virtualbox (amd64)
          jumpbox: Downloading: https://vagrantcloud.com/bento/boxes/debian-12/versions/202510.26.0/providers/virtualbox/amd64/vagrant.box
    
      ############################ (output omitted) ############################
    
      # Check the deployed VMs
      PS C:\Users\bom\Desktop\스터디\onpremisk8s> vagrant status
      Current machine states:
    
      jumpbox                   running (virtualbox)
      server                    running (virtualbox)
      node-0                    running (virtualbox)
      node-1                    running (virtualbox) 
  • Connect to the jumpbox VM: vagrant ssh jumpbox

      PS C:\Users\bom\Desktop\스터디\onpremisk8s> vagrant ssh jumpbox
      Linux jumpbox 6.1.0-40-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.153-1 (2025-09-20) x86_64
    
      This system is built by the Bento project by Chef Software
      More information can be found at https://github.com/chef/bento
    
      Use of this system is acceptance of the OS vendor EULA and License Agreements.
    
      The programs included with the Debian GNU/Linux system are free software;
      the exact distribution terms for each program are described in the
      individual files in /usr/share/doc/*/copyright.
    
      Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
      permitted by applicable law.
      root@jumpbox:~# whoami
      root
      root@jumpbox:~# pwd
      /root
      root@jumpbox:~#

02 - Set up The jumpbox

  • vagrant ssh jumpbox

      root@jumpbox:~# whoami
      root
      root@jumpbox:~# pwd
      /root
      root@jumpbox:~# cat /home/vagrant/.bashrc | tail -n 1
      sudo su -
      root@jumpbox:~# git clone --depth 1 https://github.com/kelseyhightower/kubernetes-the-hard-way.git
      Cloning into 'kubernetes-the-hard-way'...
      remote: Enumerating objects: 41, done.
      remote: Counting objects: 100% (41/41), done.
      remote: Compressing objects: 100% (40/40), done.
      remote: Total 41 (delta 3), reused 14 (delta 1), pack-reused 0 (from 0)
      Receiving objects: 100% (41/41), 29.27 KiB | 14.63 MiB/s, done.
      Resolving deltas: 100% (3/3), done.
      root@jumpbox:~# cd kubernetes-the-hard-way/
      root@jumpbox:~/kubernetes-the-hard-way# tree
      .
      ├── ca.conf
      ├── configs
      │   ├── 10-bridge.conf
      │   ├── 99-loopback.conf
      │   ├── containerd-config.toml
      │   ├── encryption-config.yaml
      │   ├── kube-apiserver-to-kubelet.yaml
      │   ├── kubelet-config.yaml
      │   ├── kube-proxy-config.yaml
      │   └── kube-scheduler.yaml
      ├── CONTRIBUTING.md
      ├── COPYRIGHT.md
      ├── docs
      │   ├── 01-prerequisites.md
      │   ├── 02-jumpbox.md
      │   ├── 03-compute-resources.md
      │   ├── 04-certificate-authority.md
      │   ├── 05-kubernetes-configuration-files.md
      │   ├── 06-data-encryption-keys.md
      │   ├── 07-bootstrapping-etcd.md
      │   ├── 08-bootstrapping-kubernetes-controllers.md
      │   ├── 09-bootstrapping-kubernetes-workers.md
      │   ├── 10-configuring-kubectl.md
      │   ├── 11-pod-network-routes.md
      │   ├── 12-smoke-test.md
      │   └── 13-cleanup.md
      ├── downloads-amd64.txt
      ├── downloads-arm64.txt
      ├── LICENSE
      ├── README.md
      └── units
          ├── containerd.service
          ├── etcd.service
          ├── kube-apiserver.service
          ├── kube-controller-manager.service
          ├── kubelet.service
          ├── kube-proxy.service
          └── kube-scheduler.service
    
      4 directories, 35 files
    
      # Check the CPU architecture
    
      root@jumpbox:~/kubernetes-the-hard-way# dpkg --print-architecture
      amd64
      root@jumpbox:~/kubernetes-the-hard-way# ls -l downloads-*
      -rw-r--r-- 1 root root 839 Jan  5 23:25 downloads-amd64.txt
      -rw-r--r-- 1 root root 839 Jan  5 23:25 downloads-arm64.txt
    
      root@jumpbox:~/kubernetes-the-hard-way# cat downloads-$(dpkg --print-architecture).txt
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubectl
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-apiserver
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-controller-manager
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-scheduler
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kube-proxy
      https://dl.k8s.io/v1.32.3/bin/linux/amd64/kubelet
      https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.32.0/crictl-v1.32.0-linux-amd64.tar.gz
      https://github.com/opencontainers/runc/releases/download/v1.3.0-rc.1/runc.amd64
      https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz
      https://github.com/containerd/containerd/releases/download/v2.1.0-beta.0/containerd-2.1.0-beta.0-linux-amd64.tar.gz
      https://github.com/etcd-io/etcd/releases/download/v3.6.0-rc.3/etcd-v3.6.0-rc.3-linux-amd64.tar.gz
      root@jumpbox:~/kubernetes-the-hard-way# wget -q --show-progress \
        --https-only \
        --timestamping \
        -P downloads \
        -i downloads-$(dpkg --print-architecture).txt
      kubectl                       100%[=================================================>]  54.67M  64.4MB/s    in 0.8s
      kube-apiserver                100%[=================================================>]  88.94M  28.8MB/s    in 3.1s
      kube-controller-manager       100%[=================================================>]  82.00M  17.7MB/s    in 4.6s
      kube-scheduler                100%[=================================================>]  62.79M  22.0MB/s    in 2.9s
      kube-proxy                    100%[=================================================>]  63.75M  25.1MB/s    in 2.5s
      kubelet                       100%[=================================================>]  73.82M  12.8MB/s    in 6.5s
      crictl-v1.32.0-linux-amd64.ta 100%[=================================================>]  18.21M  55.1MB/s    in 0.3s
      runc.amd64                    100%[=================================================>]  11.30M  66.3MB/s    in 0.2s
      cni-plugins-linux-amd64-v1.6. 100%[=================================================>]  50.35M  36.3MB/s    in 1.4s
      containerd-2.1.0-beta.0-linux 100%[=================================================>]  37.01M  56.7MB/s    in 0.7s
      etcd-v3.6.0-rc.3-linux-amd64. 100%[=================================================>]  22.48M  47.2MB/s    in 0.5s
    
      root@jumpbox:~/kubernetes-the-hard-way# ls -oh downloads
      total 566M
      -rw-r--r-- 1 root 51M Jan  7  2025 cni-plugins-linux-amd64-v1.6.2.tgz
      -rw-r--r-- 1 root 38M Mar 18  2025 containerd-2.1.0-beta.0-linux-amd64.tar.gz
      -rw-r--r-- 1 root 19M Dec  9  2024 crictl-v1.32.0-linux-amd64.tar.gz
      -rw-r--r-- 1 root 23M Mar 28  2025 etcd-v3.6.0-rc.3-linux-amd64.tar.gz
      -rw-r--r-- 1 root 89M Mar 12  2025 kube-apiserver
      -rw-r--r-- 1 root 83M Mar 12  2025 kube-controller-manager
      -rw-r--r-- 1 root 55M Mar 12  2025 kubectl
      -rw-r--r-- 1 root 74M Mar 12  2025 kubelet
      -rw-r--r-- 1 root 64M Mar 12  2025 kube-proxy
      -rw-r--r-- 1 root 63M Mar 12  2025 kube-scheduler
      -rw-r--r-- 1 root 12M Mar  4  2025 runc.amd64
    
      root@jumpbox:~/kubernetes-the-hard-way# mkdir -p downloads/{client,cni-plugins,controller,worker}
      root@jumpbox:~/kubernetes-the-hard-way# tree -d downloads
      downloads
      ├── client
      ├── cni-plugins
      ├── controller
      └── worker
    
      5 directories
    
      # Extract the archives (ARCH holds the output of dpkg --print-architecture, amd64 here)
      root@jumpbox:~/kubernetes-the-hard-way# tar -xvf downloads/crictl-v1.32.0-linux-${ARCH}.tar.gz \
        -C downloads/worker/ && tree -ug downloads
      crictl
      [root     root    ]  downloads
      ├── [root     root    ]  client
      ├── [root     root    ]  cni-plugins
      ├── [root     root    ]  cni-plugins-linux-amd64-v1.6.2.tgz
      ├── [root     root    ]  containerd-2.1.0-beta.0-linux-amd64.tar.gz
      ├── [root     root    ]  controller
      ├── [root     root    ]  crictl-v1.32.0-linux-amd64.tar.gz
      ├── [root     root    ]  etcd-v3.6.0-rc.3-linux-amd64.tar.gz
      ├── [root     root    ]  kube-apiserver
      ├── [root     root    ]  kube-controller-manager
      ├── [root     root    ]  kubectl
      ├── [root     root    ]  kubelet
      ├── [root     root    ]  kube-proxy
      ├── [root     root    ]  kube-scheduler
      ├── [root     root    ]  runc.amd64
      └── [root     root    ]  worker
          └── [1001     127     ]  crictl
    
      5 directories, 12 files
    
      ################ (remaining extractions omitted) ################
    
      # Verify after extraction
      root@jumpbox:~/kubernetes-the-hard-way# tree downloads/worker/
      downloads/worker/
      ├── containerd
      ├── containerd-shim-runc-v2
      ├── containerd-stress
      ├── crictl
      └── ctr
    
      1 directory, 5 files
      root@jumpbox:~/kubernetes-the-hard-way# tree downloads/cni-plugins
      downloads/cni-plugins
      ├── bandwidth
      ├── bridge
      ├── dhcp
      ├── dummy
      ├── firewall
      ├── host-device
      ├── host-local
      ├── ipvlan
      ├── LICENSE
      ├── loopback
      ├── macvlan
      ├── portmap
      ├── ptp
      ├── README.md
      ├── sbr
      ├── static
      ├── tap
      ├── tuning
      ├── vlan
      └── vrf
    
      1 directory, 20 files
      # Move files into place and verify
      root@jumpbox:~/kubernetes-the-hard-way# mv downloads/{etcdctl,kubectl} downloads/client/
      mv downloads/{etcd,kube-apiserver,kube-controller-manager,kube-scheduler} downloads/controller/
      mv downloads/{kubelet,kube-proxy} downloads/worker/
      mv downloads/runc.${ARCH} downloads/worker/runc
    
      root@jumpbox:~/kubernetes-the-hard-way# tree downloads/client/
      tree downloads/controller/
      tree downloads/worker/
      downloads/client/
      ├── etcdctl
      └── kubectl
    
      1 directory, 2 files
      downloads/controller/
      ├── etcd
      ├── kube-apiserver
      ├── kube-controller-manager
      └── kube-scheduler
    
      1 directory, 4 files
      downloads/worker/
      ├── containerd
      ├── containerd-shim-runc-v2
      ├── containerd-stress
      ├── crictl
      ├── ctr
      ├── kubelet
      ├── kube-proxy
      └── runc
    
      1 directory, 8 files
      # After the remaining steps, verify the kubectl client
      root@jumpbox:~/kubernetes-the-hard-way# kubectl version --client
      Client Version: v1.32.3
      Kustomize Version: v5.5.0

03 - Provisioning Compute Resources

  • Setting up SSH access

      root@jumpbox:~/kubernetes-the-hard-way# cat <<EOF > machines.txt
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
      192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
      EOF
      root@jumpbox:~/kubernetes-the-hard-way# cat machines.txt
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
      192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
    
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        echo "${IP} ${FQDN} ${HOST} ${SUBNET}"
      done < machines.txt
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
      192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
    
      root@jumpbox:~/kubernetes-the-hard-way# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
      Generating public/private rsa key pair.
      Your identification has been saved in /root/.ssh/id_rsa
      Your public key has been saved in /root/.ssh/id_rsa.pub
      The key fingerprint is:
      SHA256:------------------------------------ root@jumpbox
      The key's randomart image is:
      +---[RSA 3072]----+
      |      .+o.B=*++++|
      |     oooEo Xo=oB |
      |      ooo =.+o*..|
      |     . o..o . .=.|
      |      ..So .   .o|
      |      . . .      |
      |       + o       |
      |      + +        |
      |     . .         |
      +----[SHA256]-----+
    
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        sshpass -p 'qwe123' ssh-copy-id -o StrictHostKeyChecking=no root@${IP}
      done < machines.txt
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    
      Number of key(s) added: 1
    
      Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'root@192.168.10.100'"
      and check to make sure that only the key(s) you wanted were added.
    
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    
      Number of key(s) added: 1
    
      Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'root@192.168.10.101'"
      and check to make sure that only the key(s) you wanted were added.
    
      /usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
      /usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
      /usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    
      Number of key(s) added: 1
    
      Now try logging into the machine, with:   "ssh -o 'StrictHostKeyChecking=no' 'root@192.168.10.102'"
      and check to make sure that only the key(s) you wanted were added.
    
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        ssh -n root@${IP} cat /root/.ssh/authorized_keys
      done < machines.txt
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUoQb+fRLPJN03IsqZiCa0nxOuMA7Mo5VpVml+8XsCPa0JCoGxlHn5C8+xmPbp5qg4VhNVRVBIyt3/ipgURJbhgxw/Yo+3Tcq0C5BcmxzrDZGfK8mUAGXPWrHPDtECgZP4+kRyFcGMOwJCJvjTlbFXc/cDypp9RpDtbAXsBR/P+M9gYHtcAI2VRJMjHS0yTvFzf01WwoCYBWg5QG7NgIKVS2qMS75kAdnveBT+nu5E5TN2TmCi5vaD64LC1uuhg3NHDdUw14U0wAENNphQleERk0Y0jvcnFsf5XT6+KNYCfZjgkBPvBCcJRq2szo8Df740lGVoe8vWttAg79DkCB/QZV3UT+k+UN89gskAFtnWKv3MnSsAsjcevxSMsky3eEbZJ5lrN+NC32bTUkovb0paQUhDUf/gHsPQtwD/8FIolNWXopB2cZkwFZIkqs8JIV1N+C5fDczVsioJRgJmJQ53P/lliQXn82hlfe1/ZB+ZxO4mTHHHWC765b8dV0CVn2k= root@jumpbox
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUoQb+fRLPJN03IsqZiCa0nxOuMA7Mo5VpVml+8XsCPa0JCoGxlHn5C8+xmPbp5qg4VhNVRVBIyt3/ipgURJbhgxw/Yo+3Tcq0C5BcmxzrDZGfK8mUAGXPWrHPDtECgZP4+kRyFcGMOwJCJvjTlbFXc/cDypp9RpDtbAXsBR/P+M9gYHtcAI2VRJMjHS0yTvFzf01WwoCYBWg5QG7NgIKVS2qMS75kAdnveBT+nu5E5TN2TmCi5vaD64LC1uuhg3NHDdUw14U0wAENNphQleERk0Y0jvcnFsf5XT6+KNYCfZjgkBPvBCcJRq2szo8Df740lGVoe8vWttAg79DkCB/QZV3UT+k+UN89gskAFtnWKv3MnSsAsjcevxSMsky3eEbZJ5lrN+NC32bTUkovb0paQUhDUf/gHsPQtwD/8FIolNWXopB2cZkwFZIkqs8JIV1N+C5fDczVsioJRgJmJQ53P/lliQXn82hlfe1/ZB+ZxO4mTHHHWC765b8dV0CVn2k= root@jumpbox
      ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUoQb+fRLPJN03IsqZiCa0nxOuMA7Mo5VpVml+8XsCPa0JCoGxlHn5C8+xmPbp5qg4VhNVRVBIyt3/ipgURJbhgxw/Yo+3Tcq0C5BcmxzrDZGfK8mUAGXPWrHPDtECgZP4+kRyFcGMOwJCJvjTlbFXc/cDypp9RpDtbAXsBR/P+M9gYHtcAI2VRJMjHS0yTvFzf01WwoCYBWg5QG7NgIKVS2qMS75kAdnveBT+nu5E5TN2TmCi5vaD64LC1uuhg3NHDdUw14U0wAENNphQleERk0Y0jvcnFsf5XT6+KNYCfZjgkBPvBCcJRq2szo8Df740lGVoe8vWttAg79DkCB/QZV3UT+k+UN89gskAFtnWKv3MnSsAsjcevxSMsky3eEbZJ5lrN+NC32bTUkovb0paQUhDUf/gHsPQtwD/8FIolNWXopB2cZkwFZIkqs8JIV1N+C5fDczVsioJRgJmJQ53P/lliQXn82hlfe1/ZB+ZxO4mTHHHWC765b8dV0CVn2k= root@jumpbox
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        ssh -n root@${IP} hostname
      done < machines.txt
      server
      node-0
      node-1
    
      # Verify FQDN resolution
      root@jumpbox:~/kubernetes-the-hard-way# while read IP FQDN HOST SUBNET; do
        ssh -n root@${IP} hostname --fqdn
      done < machines.txt
      server.kubernetes.local
      node-0.kubernetes.local
      node-1.kubernetes.local
    
      root@jumpbox:~/kubernetes-the-hard-way# cat /etc/hosts
      while read IP FQDN HOST SUBNET; do
        sshpass -p 'qwe123' ssh -n -o StrictHostKeyChecking=no root@${HOST} hostname
      done < machines.txt
      127.0.0.1       localhost
    
      # The following lines are desirable for IPv6 capable hosts
      ::1     localhost ip6-localhost ip6-loopback
      ff02::1 ip6-allnodes
      ff02::2 ip6-allrouters
      192.168.10.10  jumpbox
      192.168.10.100 server.kubernetes.local server
      192.168.10.101 node-0.kubernetes.local node-0
      192.168.10.102 node-1.kubernetes.local node-1
      Warning: Permanently added 'server' (ED25519) to the list of known hosts.
      server
      Warning: Permanently added 'node-0' (ED25519) to the list of known hosts.
      node-0
      Warning: Permanently added 'node-1' (ED25519) to the list of known hosts.
      node-1
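
  • The `while read IP FQDN HOST SUBNET` pattern above is also what builds the `/etc/hosts` entries seen on the jumpbox. A minimal local sketch of that parsing (the scratch directory under `/tmp` is illustrative, not the real `/etc/hosts`; the entries mirror the lab values):

```bash
# Build host-file entries from machines.txt with the same while-read
# field splitting used in this section. All paths are throwaway.
workdir=/tmp/kthw-hosts-demo
mkdir -p "$workdir"
cat > "$workdir/machines.txt" <<'EOF'
192.168.10.100 server.kubernetes.local server
192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24
192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24
EOF

# truncate, then emit one "IP FQDN HOST" line per machine
: > "$workdir/hosts.fragment"
while read -r IP FQDN HOST SUBNET; do
  printf '%s %s %s\n' "${IP}" "${FQDN}" "${HOST}" >> "$workdir/hosts.fragment"
done < "$workdir/machines.txt"

cat "$workdir/hosts.fragment"
```

Appending a fragment like this to each machine's `/etc/hosts` is what lets the short names (`ssh node-0`) and FQDNs resolve in the transcripts above.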

04 - Provisioning a CA and Generating TLS Certificates

  • Communication between components uses mutual TLS (mTLS), so a certificate must be generated for every communicating component. On a typical Kubernetes cluster, kubeadm generates all of these automatically; here they are created by hand. The certificates to generate:
|  | Private key | CSR | Certificate | Notes |
| --- | --- | --- | --- | --- |
| Root CA | ca.key | X | ca.crt |  |
| admin | admin.key | admin.csr | admin.crt | CN = admin, O = system:masters |
| node-0 | node-0.key | node-0.csr | node-0.crt | CN = system:node:node-0, O = system:nodes |
| node-1 | node-1.key | node-1.csr | node-1.crt | CN = system:node:node-1, O = system:nodes |
| kube-proxy | kube-proxy.key | kube-proxy.csr | kube-proxy.crt | CN = system:kube-proxy, O = system:node-proxier |
| kube-scheduler | kube-scheduler.key | kube-scheduler.csr | kube-scheduler.crt | CN = system:kube-scheduler, O = system:kube-scheduler |
| kube-controller-manager | kube-controller-manager.key | kube-controller-manager.csr | kube-controller-manager.crt | CN = system:kube-controller-manager, O = system:kube-controller-manager |
| kube-api-server | kube-api-server.key | kube-api-server.csr | kube-api-server.crt | CN = kubernetes, SAN: IP(127.0.0.1, **10.32.0.1**), DNS(kubernetes,..) |
| service-accounts | service-accounts.key | service-accounts.csr | service-accounts.crt | CN = service-accounts |
- Checking certificates in a kind environment

    ```bash
    (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane kubeadm certs check-expiration
    [check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
    [check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.

    CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
    admin.conf                 Jan 05, 2027 13:52 UTC   364d            ca                      no
    apiserver                  Jan 05, 2027 13:52 UTC   364d            ca                      no
    apiserver-etcd-client      Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    apiserver-kubelet-client   Jan 05, 2027 13:52 UTC   364d            ca                      no
    controller-manager.conf    Jan 05, 2027 13:52 UTC   364d            ca                      no
    etcd-healthcheck-client    Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    etcd-peer                  Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    etcd-server                Jan 05, 2027 13:52 UTC   364d            etcd-ca                 no
    front-proxy-client         Jan 05, 2027 13:52 UTC   364d            front-proxy-ca          no
    scheduler.conf             Jan 05, 2027 13:52 UTC   364d            ca                      no
    super-admin.conf           Jan 05, 2027 13:52 UTC   364d            ca                      no

    CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
    ca                      Jan 03, 2036 13:52 UTC   9y              no
    etcd-ca                 Jan 03, 2036 13:52 UTC   9y              no
    front-proxy-ca          Jan 03, 2036 13:52 UTC   9y              no

    ```
  • Creating the CA

      root@jumpbox:~/kubernetes-the-hard-way# openssl genrsa -out ca.key 4096
      root@jumpbox:~/kubernetes-the-hard-way# ls -l ca.key
      -rw------- 1 root root 3272 Jan  6 00:05 ca.key
      root@jumpbox:~/kubernetes-the-hard-way# openssl rsa -in ca.key -text -noout
      Private-Key: (4096 bit, 2 primes)
      modulus:
          00:bb:3b:8f:cf:85:b5:3e:48:2e:9a:ff:ce:c7:99:
          5b:3a:de:7b:28:47:1e:92:a5:a0:9b:b5:d1:fd:9e:
          10:af:49:b1:6b:12:2d:cb:78:cb:e0:a4:c5:9d:d4:
          b8:52:60:46:38:37:bb:f8:c7:a9:1e:d7:d5:55:82:
          30:8c:80:ae:66:8d:83:25:18:2a:21:d4:66:8c:db:
          3c:c2:4c:d0:e0:15:a4:b2:d0:1b:9d:ae:9d:9e:bd:
          2b:57:f1:b6:b8:f0:ad:9b:06:90:43:7e:8c:58:5d:
    
          ##### (truncated) #####
    
      root@jumpbox:~/kubernetes-the-hard-way# openssl req -x509 -new -sha512 -noenc \
        -key ca.key -days 3653 \
        -config ca.conf \
        -out ca.crt
      root@jumpbox:~/kubernetes-the-hard-way# ls -l ca.crt
      -rw-r--r-- 1 root root 1899 Jan  6 00:06 ca.crt
    
      # Inspect the full certificate contents
      root@jumpbox:~/kubernetes-the-hard-way# openssl x509 -in ca.crt -text -noout | more
      Certificate:
          Data:
              Version: 3 (0x2)
              Serial Number:
                  21:a5:b4:64:7c:a3:3d:a7:a4:3f:40:14:05:e9:25:b9:21:a9:7e:be
              Signature Algorithm: sha512WithRSAEncryption
              Issuer: C = US, ST = Washington, L = Seattle, CN = CA
              Validity
                  Not Before: Jan  5 15:06:27 2026 GMT
                  Not After : Jan  6 15:06:27 2036 GMT
              Subject: C = US, ST = Washington, L = Seattle, CN = CA
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      Public-Key: (4096 bit)
      ##### (truncated) #####
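
    • A quick sanity check after creating the CA is that `ca.key` and `ca.crt` actually belong together, by comparing the public key each one carries. A self-contained sketch (a throwaway 2048-bit CA under `/tmp/kthw-ca-demo` stands in for the lab's `ca.key`/`ca.crt`):

```bash
# Generate a throwaway CA, then compare the public key extracted from
# the private key with the public key embedded in the certificate.
dir=/tmp/kthw-ca-demo
mkdir -p "$dir"
openssl genrsa -out "$dir/ca.key" 2048 2>/dev/null
openssl req -x509 -new -sha256 -key "$dir/ca.key" -days 365 \
  -subj "/CN=CA" -out "$dir/ca.crt"

key_pub=$(openssl pkey -in "$dir/ca.key" -pubout 2>/dev/null | sha256sum)
crt_pub=$(openssl x509 -in "$dir/ca.crt" -noout -pubkey | sha256sum)
[ "$key_pub" = "$crt_pub" ] && echo "ca.key and ca.crt match"
```

The same two commands, pointed at the real `ca.key`/`ca.crt`, catch the common mistake of signing with a regenerated key that no longer matches the distributed certificate.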
    • (Reference) Inspecting ca.crt in the kind environment

      (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane cat /etc/kubernetes/pki/ca.crt | openssl x509 -text -noout
      Certificate:
        Data:
            Version: 3 (0x2)
            Serial Number: 8519439439972151590 (0x763b210860fe3126)
            Signature Algorithm: sha256WithRSAEncryption
            Issuer: CN = kubernetes
            Validity
                Not Before: Jan  5 13:47:29 2026 GMT
                Not After : Jan  3 13:52:29 2036 GMT
            Subject: CN = kubernetes
            Subject Public Key Info:
                Public Key Algorithm: rsaEncryption
                    Public-Key: (2048 bit)
                    Modulus:
                        00:d4:a5:b1:57:06:bc:c7:ff:85:ba:d6:44:74:8c:
                        0a:a4:3e:3c:c7:e7:15:09:f9:31:f1:a0:8e:92:d0:
                        90:f8:19:52:98:8e:29:92:17:37:8d:9d:fe:10:fa:
                        d7:0f:1e:41:89:53:62:18:d9:b5:c4:b7:43:50:e6:
                        5c:fd:2c:87:46:a7:54:b0:ea:ea:6d:f6:d8:93:cc:
                        1f:2f:7a:18:28:6e:17:e7:e2:e8:3b:26:c8:ae:3d:
      
      ##### (truncated) #####
  • Creating the admin certificate

      root@jumpbox:~/kubernetes-the-hard-way# openssl genrsa -out admin.key 4096
      root@jumpbox:~/kubernetes-the-hard-way# openssl req -new -key admin.key -sha256 \
        -config ca.conf -section admin \
        -out admin.csr
      root@jumpbox:~/kubernetes-the-hard-way# ls -l admin.csr
      -rw-r--r-- 1 root root 1830 Jan  6 00:14 admin.csr
    
      root@jumpbox:~/kubernetes-the-hard-way# openssl req -in admin.csr -text -noout | more
      Certificate Request:
          Data:
              Version: 1 (0x0)
              Subject: CN = admin, O = system:masters
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      Public-Key: (4096 bit)
                      Modulus:
                          00:93:3b:e0:7a:87:5d:80:69:b1:ad:6a:38:74:55:
                          c0:a5:d4:5f:f6:cc:9d:b7:3c:6a:dc:7c:f1:fd:cc:
                          c1:dd:00:27:5e:6c:7f:2d:77:81:02:7c:ce:55:36:
                          79:75:dc:83:bf:60:23:67:df:32:53:cc:19:15:0e:
                          04:23:7f:d7:79:da:43:79:a9:57:9d:8e:dd:96:d2:
        ##### (truncated) #####
    
      root@jumpbox:~/kubernetes-the-hard-way# openssl x509 -req -days 3653 -in admin.csr \
        -copy_extensions copyall \
        -sha256 -CA ca.crt \
        -CAkey ca.key \
        -CAcreateserial \
        -out admin.crt
      Certificate request self-signature ok
      subject=CN = admin, O = system:masters
      root@jumpbox:~/kubernetes-the-hard-way# ls -l admin.crt
      openssl x509 -in admin.crt -text -noout | more
      -rw-r--r-- 1 root root 2021 Jan  6 00:16 admin.crt
      Certificate:
          Data:
              Version: 3 (0x2)
              Serial Number:
                  36:ea:c4:ee:d2:b3:c2:0c:0d:4c:4a:26:a3:56:3b:e1:a2:22:fa:c7
              Signature Algorithm: sha256WithRSAEncryption
              Issuer: C = US, ST = Washington, L = Seattle, CN = CA
              Validity
                  Not Before: Jan  5 15:16:49 2026 GMT
                  Not After : Jan  6 15:16:49 2036 GMT
              Subject: CN = admin, O = system:masters
              Subject Public Key Info:
                  Public Key Algorithm: rsaEncryption
                      Public-Key: (4096 bit)
                      Modulus:
                          00:93:3b:e0:7a:87:5d:80:69:b1:ad:6a:38:74:55:
                          c0:a5:d4:5f:f6:cc:9d:b7:3c:6a:dc:7c:f1:fd:cc:
                          c1:dd:00:27:5e:6c:7f:2d:77:81:02:7c:ce:55:36:
                          79:75:dc:83:bf:60:23:67:df:32:53:cc:19:15:0e:
                          04:23:7f:d7:79:da:43:79:a9:57:9d:8e:dd:96:d2:
                          2f:74:8a:83:41:80:01:21:f3:96:63:87:b6:27:65:
                          b0:e4:aa:5d:1d:31:a4:15:01:c5:98:66:ec:5a:9e:
                          27:78:2d:e8:2f:d1:e6:20:d6:15:d4:1e:43:2a:77:
          ##### (truncated) #####
    
    • (Reference) Inspecting the admin certificate in the kind environment

        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane cat /etc/kubernetes/super-admin.conf | grep client-certificate-data | cut -d ':' -f2 | tr -d ' ' | base64 -d | openssl x509 -text -noout
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 1007749187623256237 (0xdfc3e8bfef900ad)
                Signature Algorithm: sha256WithRSAEncryption
                Issuer: CN = kubernetes
                Validity
                    Not Before: Jan  5 13:47:29 2026 GMT
                    Not After : Jan  5 13:52:29 2027 GMT
                Subject: O = system:masters, CN = kubernetes-super-admin
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus:
                            00:a4:12:67:9f:3d:22:5b:a0:f8:0c:05:5c:d0:11:
                            2c:cb:98:55:7e:d8:84:a9:cc:39:6d:89:c0:c2:12:
                            60:e1:32:ed:28:a4:33:2d:67:89:20:0e:f9:c1:d6:
                            bb:08:a7:9e:ec:f5:0a:de:9c:ca:ea:ed:82:da:50:
                            35:d7:92:2c:85:f0:df:2c:e3:d1:7f:ca:e0:52:32:
      
        # Using the krew rbac-tool plugin
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl rbac-tool lookup system:masters
          SUBJECT        | SUBJECT TYPE | SCOPE       | NAMESPACE | ROLE          | BINDING
        -----------------+--------------+-------------+-----------+---------------+----------------
          system:masters | Group        | ClusterRole |           | cluster-admin | cluster-admin
      
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe clusterroles cluster-admin
        Name:         cluster-admin
        Labels:       kubernetes.io/bootstrapping=rbac-defaults
        Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
        PolicyRule:
          Resources  Non-Resource URLs  Resource Names  Verbs
          ---------  -----------------  --------------  -----
          *.*        []                 []              [*]
                     [*]                []              [*]
      
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe clusterrolebindings cluster-admin
        Name:         cluster-admin
        Labels:       kubernetes.io/bootstrapping=rbac-defaults
        Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
        Role:
          Kind:  ClusterRole
          Name:  cluster-admin
        Subjects:
          Kind   Name            Namespace
          ----   ----            ---------
          Group  system:masters
      
  • Creating the remaining certificates

      # Fix the typo in ca.conf before generating certificates
      root@jumpbox:~/kubernetes-the-hard-way# sed -i 's/system:system:kube-scheduler/system:kube-scheduler/' ca.conf
      root@jumpbox:~/kubernetes-the-hard-way# cat ca.conf | grep system:kube-scheduler
      CN = system:kube-scheduler
      O  = system:kube-scheduler
    
      root@jumpbox:~/kubernetes-the-hard-way# certs=(
        "node-0" "node-1"
        "kube-proxy" "kube-scheduler"
        "kube-controller-manager"
        "kube-api-server"
        "service-accounts"
      )
      root@jumpbox:~/kubernetes-the-hard-way# echo ${certs[*]}
      node-0 node-1 kube-proxy kube-scheduler kube-controller-manager kube-api-server service-accounts
    
      root@jumpbox:~/kubernetes-the-hard-way# for i in ${certs[*]}; do
        openssl genrsa -out "${i}.key" 4096
    
        openssl req -new -key "${i}.key" -sha256 \
          -config "ca.conf" -section ${i} \
          -out "${i}.csr"
    
        openssl x509 -req -days 3653 -in "${i}.csr" \
          -copy_extensions copyall \
          -sha256 -CA "ca.crt" \
          -CAkey "ca.key" \
          -CAcreateserial \
          -out "${i}.crt"
      done
      Certificate request self-signature ok
      subject=CN = system:node:node-0, O = system:nodes, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:node:node-1, O = system:nodes, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:kube-proxy, O = system:node-proxier, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:kube-scheduler, O = system:kube-scheduler, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = system:kube-controller-manager, O = system:kube-controller-manager, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = kubernetes, C = US, ST = Washington, L = Seattle
      Certificate request self-signature ok
      subject=CN = service-accounts
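
    • After batch generation, each issued certificate can be checked against the CA with `openssl verify -CAfile ca.crt`. Reproduced end-to-end below with a throwaway CA and two of the names (illustrative `/tmp` paths; 2048-bit keys for speed where the lab uses 4096):

```bash
# Throwaway CA plus two leaf certs signed the same way as the loop
# above, then chain verification against the CA.
dir=/tmp/kthw-verify-demo
mkdir -p "$dir" && cd "$dir"
openssl genrsa -out ca.key 2048 2>/dev/null
openssl req -x509 -new -sha256 -key ca.key -days 365 -subj "/CN=CA" -out ca.crt

for i in kube-proxy kube-scheduler; do
  openssl genrsa -out "${i}.key" 2048 2>/dev/null
  openssl req -new -key "${i}.key" -subj "/CN=system:${i}" -out "${i}.csr"
  openssl x509 -req -days 365 -in "${i}.csr" -sha256 \
    -CA ca.crt -CAkey ca.key -CAcreateserial -out "${i}.crt" 2>/dev/null
done

# each line should end in ": OK"
openssl verify -CAfile ca.crt kube-proxy.crt kube-scheduler.crt
```

Running `openssl verify -CAfile ca.crt *.crt` in the real working directory is a cheap guard before distributing the certificates in the next step.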
    • (Reference) Viewing certificates in a kind cluster

        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -it myk8s-control-plane ls -al /etc/kubernetes
        total 60
        drwxr-xr-x 1 root root 4096 Jan  5 13:52 .
        drwxr-xr-x 1 root root 4096 Jan  5 13:52 ..
        -rw------- 1 root root 5643 Jan  5 13:52 admin.conf
        -rw------- 1 root root 5658 Jan  5 13:52 controller-manager.conf
        -rw------- 1 root root 2007 Jan  5 13:52 kubelet.conf
        drwxr-xr-x 1 root root 4096 Jan  5 13:52 manifests
        drwxr-xr-x 3 root root 4096 Jan  5 13:52 pki
        -rw------- 1 root root 5602 Jan  5 13:52 scheduler.conf
        -rw------- 1 root root 5663 Jan  5 13:52 super-admin.conf
        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -it myk8s-control-plane ls -al /var/lib/kubelet/pki
        total 20
        drwxr-xr-x 2 root root 4096 Jan  5 13:52 .
        drwx------ 9 root root 4096 Jan  5 13:52 ..
        -rw------- 1 root root 2839 Jan  5 13:52 kubelet-client-2026-01-05-13-52-31.pem
        lrwxrwxrwx 1 root root   59 Jan  5 13:52 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2026-01-05-13-52-31.pem
        -rw-r--r-- 1 root root 2343 Jan  5 13:52 kubelet.crt
        -rw------- 1 root root 1679 Jan  5 13:52 kubelet.key
      
        ## Check the Subject Alternative Names on the API server certificate
        (⎈|kind-myk8s:default) zosys@4:~$ docker exec -i myk8s-control-plane cat /etc/kubernetes/pki/apiserver.crt | openssl x509 -text -noout
        Certificate:
            Data:
                Version: 3 (0x2)
                Serial Number: 8699350173748144468 (0x78ba4ce452fda954)
                Signature Algorithm: sha256WithRSAEncryption
                Issuer: CN = kubernetes
                Validity
                    Not Before: Jan  5 13:47:29 2026 GMT
                    Not After : Jan  5 13:52:29 2027 GMT
                Subject: CN = kube-apiserver
                Subject Public Key Info:
                    Public Key Algorithm: rsaEncryption
                        Public-Key: (2048 bit)
                        Modulus:
                            00:a5:26:b2:7b:33:e3:a8:c6:01:d5:ba:26:ba:e9:
                            2b:70:58:c1:0b:e3:35:a3:96:d1:de:c5:7e:80:44:
                            0b:61:af:12:1a:dd:e5:83:ed:88:bb:be:c7:c0:6f:
                            05:71:9b:4b:82:49:12:23:a0:46:44:91:ef:68:49:
                            12:45:26:2a:07:28:38:bd:33:c0:76:61:cb:51:af:
                            18:c9:4c:96:a6:db:98:e0:8c:82:50:2f:8a:3e:ed:
                            79:f3:d7:b6:89:45:9e:d2:fb:2c:0a:b2:1f:14:fa:
                            fa:f1:29:cb:5c:2b:d2:26:81:50:e7:0f:98:57:9c:
                            20:90:89:d3:d1:7b:d7:2f:c7:a6:a3:aa:b0:9b:f8:
                            78:c4:57:73:fb:82:a8:9d:1f:c6:c6:38:67:24:49:
                            4f:0f:cb:d7:61:f6:5d:0c:89:cf:b8:01:c6:af:af:
                            51:91:12:b8:57:e0:ab:13:30:c7:a5:1f:a8:24:49:
                            85:1e:e1:8c:d1:19:f8:68:2f:be:b3:eb:37:79:e5:
                            5f:b1:85:78:9e:05:a3:dd:b2:c2:92:03:1f:e1:a3:
                            39:f8:b5:9f:23:b2:b2:1a:c4:05:3a:3e:6c:17:3f:
                            86:94:47:b6:a3:36:87:3e:59:3c:40:06:25:11:a3:
                            26:8f:02:da:cd:c7:00:d0:ca:db:71:75:41:a6:f3:
                            5f:03
                        Exponent: 65537 (0x10001)
                X509v3 extensions:
                    X509v3 Key Usage: critical
                        Digital Signature, Key Encipherment
                    X509v3 Extended Key Usage:
                        TLS Web Server Authentication
                    X509v3 Basic Constraints: critical
                        CA:FALSE
                    X509v3 Authority Key Identifier:
                        F3:A5:5A:DF:0E:70:F9:F5:ED:2C:DC:76:A8:34:22:CF:A4:3A:64:F1
                    X509v3 Subject Alternative Name:
                        DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:localhost, DNS:myk8s-control-plane, IP Address:10.96.0.1, IP Address:172.18.0.3, IP Address:127.0.0.1
            Signature Algorithm: sha256WithRSAEncryption
            Signature Value:
                5c:9e:b9:6f:e7:42:51:84:81:e0:f2:82:be:f8:d9:07:62:8b:
                51:23:0c:56:8f:f0:7a:43:e5:d7:93:b6:2a:0b:ba:98:55:9b:
                81:fd:2a:52:0a:e1:7d:7a:ec:bb:02:dd:d1:72:64:28:ba:d0:
                5a:50:4c:ca:f0:c4:3b:13:c7:9f:04:df:d5:5d:6f:9b:d7:bf:
                18:c5:b4:a3:7c:af:b5:bb:ae:ad:b3:c9:88:ca:6d:25:6c:86:
                5f:c8:d6:cb:ae:fa:2a:d1:ba:43:04:68:7f:78:78:75:9e:9a:
                54:cb:1d:00:f8:f8:91:9e:4b:2c:cb:bd:b7:15:51:1c:c5:80:
      
        (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe pod -n kube-system kube-apiserver-myk8s-control-plane
        Name:                 kube-apiserver-myk8s-control-plane
        Namespace:            kube-system
        Priority:             2000001000
        ##### (truncated; certificate-related flags shown below) #####
              --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
              --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
              --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
              --etcd-servers=https://127.0.0.1:2379
              --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
              --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
              --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
              --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
              --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
              --requestheader-allowed-names=front-proxy-client
              --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
      
  • Distribute the Client and Server Certificates

      root@jumpbox:~/kubernetes-the-hard-way# for host in node-0 node-1; do
        ssh root@${host} mkdir /var/lib/kubelet/
    
        scp ca.crt root@${host}:/var/lib/kubelet/
    
        scp ${host}.crt \
          root@${host}:/var/lib/kubelet/kubelet.crt
    
        scp ${host}.key \
          root@${host}:/var/lib/kubelet/kubelet.key
      done
      ca.crt                                                                                100% 1899     1.3MB/s   00:00
      node-0.crt                                                                            100% 2147     1.1MB/s   00:00
      node-0.key                                                                            100% 3272     1.9MB/s   00:00
      ca.crt                                                                                100% 1899     1.4MB/s   00:00
      node-1.crt                                                                            100% 2147     1.1MB/s   00:00
      node-1.key                                                                            100% 3268     2.0MB/s   00:00
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /var/lib/kubelet
      ssh node-1 ls -l /var/lib/kubelet
      total 12
      -rw-r--r-- 1 root root 1899 Jan  6 00:37 ca.crt
      -rw-r--r-- 1 root root 2147 Jan  6 00:37 kubelet.crt
      -rw------- 1 root root 3272 Jan  6 00:37 kubelet.key
      total 12
      -rw-r--r-- 1 root root 1899 Jan  6 00:37 ca.crt
      -rw-r--r-- 1 root root 2147 Jan  6 00:37 kubelet.crt
      -rw------- 1 root root 3268 Jan  6 00:37 kubelet.key
    
      root@jumpbox:~/kubernetes-the-hard-way# scp \
        ca.key ca.crt \
        kube-api-server.key kube-api-server.crt \
        service-accounts.key service-accounts.crt \
        root@server:~/
      ca.key                                                                                100% 3272     1.3MB/s   00:00
      ca.crt                                                                                100% 1899     1.4MB/s   00:00
      kube-api-server.key                                                                   100% 3272     2.4MB/s   00:00
      kube-api-server.crt                                                                   100% 2354     1.8MB/s   00:00
      service-accounts.key                                                                  100% 3268     2.2MB/s   00:00
      service-accounts.crt                                                                  100% 2004     1.4MB/s   00:00
      root@jumpbox:~/kubernetes-the-hard-way# ssh server ls -l /root
      total 24
      -rw-r--r-- 1 root root 1899 Jan  6 00:38 ca.crt
      -rw------- 1 root root 3272 Jan  6 00:38 ca.key
      -rw-r--r-- 1 root root 2354 Jan  6 00:38 kube-api-server.crt
      -rw------- 1 root root 3272 Jan  6 00:38 kube-api-server.key
      -rw-r--r-- 1 root root 2004 Jan  6 00:38 service-accounts.crt
      -rw------- 1 root root 3268 Jan  6 00:38 service-accounts.key
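
  • In the listings above, private keys are mode 0600 (`-rw-------`) while public certificates are 0644 (`-rw-r--r--`). A local reproduction of that convention (illustrative paths under `/tmp`, not the real `/var/lib/kubelet`):

```bash
# Recreate the key/cert permission split seen in the transcripts.
dir=/tmp/kthw-perm-demo
mkdir -p "$dir"
touch "$dir/kubelet.crt" "$dir/kubelet.key"
chmod 0644 "$dir/kubelet.crt"   # cert: world-readable
chmod 0600 "$dir/kubelet.key"   # key: owner-only
stat -c '%a %n' "$dir/kubelet.crt" "$dir/kubelet.key"
```

`openssl genrsa` writes keys 0600 by default, so `scp` preserves the split automatically; the check above is only worth running if files were moved by other means.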

05 - Generating Kubernetes Configuration Files for Authentication

  • The kubelet Kubernetes Configuration File
    Checking the kubelet configuration in the kind environment

      (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe pod -n kube-system kube-apiserver-myk8s-control-plane
      Name:                 kube-apiserver-myk8s-control-plane
      Namespace:            kube-system
      Priority:             2000001000
      Priority Class Name:  system-node-critical
      Node:                 myk8s-control-plane/172.18.0.3
      Start Time:           Mon, 05 Jan 2026 22:52:38 +0900
      Labels:               component=kube-apiserver
                            tier=control-plane
      Annotations:          kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 172.18.0.3:6443
                            kubernetes.io/config.hash: 6dadff8c64cc62e7b3846733d2478bdd
                            kubernetes.io/config.mirror: 6dadff8c64cc62e7b3846733d2478bdd
                            kubernetes.io/config.seen: 2026-01-05T13:52:38.459844636Z
                            kubernetes.io/config.source: file
      Status:               Running
      SeccompProfile:       RuntimeDefault
      IP:                   172.18.0.3
      IPs:
        IP:           172.18.0.3
      Controlled By:  Node/myk8s-control-plane
      Containers:
        kube-apiserver:
          Container ID:  containerd://872e95b24c42f80855f00ffda199192af35f4b24ca9ee16587cfa03e13c692fe
          Image:         registry.k8s.io/kube-apiserver:v1.32.8
          Image ID:      sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1
          Port:          <none>
          Host Port:     <none>
          Command:
            kube-apiserver
            --advertise-address=172.18.0.3
            --allow-privileged=true
            --authorization-mode=Node,RBAC
    
      # Create and verify kubeconfig files
    
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=ca.crt \
        --embed-certs=true \
        --server=https://server.kubernetes.local:6443 \
        --kubeconfig=node-0.kubeconfig && ls -l node-0.kubeconfig && cat node-0.kubeconfig
      Cluster "kubernetes-the-hard-way" set.
    
      ##### (truncated) #####
    
      root@jumpbox:~/kubernetes-the-hard-way# ls -l *.kubeconfig
      -rw------- 1 root root 10161 Jan  7 23:02 node-0.kubeconfig
      -rw------- 1 root root 10157 Jan  7 23:03 node-1.kubeconfig
    
      # Check the kind kubeconfig settings
    
      (⎈|kind-myk8s:default) zosys@4:~$ cat .kube/config
      apiVersion: v1
      clusters:
          - cluster:
              certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJZGpzaENHRCtNU1l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TmpBeE1EVXhNelEzTWpsYUZ3MHpOakF4TURNeE16VXlNamxhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUURVcGJGWEJyekgvNFc2MWtSMGpBcWtQanpINXhVSitUSHhvSTZTMEpENEdWS1lqaW1TRnplTm5mNFEKK3RjUEhrR0pVMklZMmJYRXQwTlE1bHo5TElkR3AxU3c2dXB0OXRpVHpCOHZlaGdvYmhmbjR1ZzdKc2l1UFNldwpYY2xnSkNjc0YvNDM5dGtrRVlETXVpSDFFVTZ6QXJ3YkJjSkVHWHk1Z3JrOHpQOG56N245U1UrSlNVOTR2S0RCCjFKb0hqS1doQnhtT3RiczJkSTByenFUcUVxRkRHWGo2b09HejdheGU4YUhBSDlVQ1IySUdFRnVyNHY3c1pTN2gKQ1AydllESEdCT1BXRS9vV20zSVJMZ0xVRnRzbVdSd0hqSTZWVmFyTlVqSXh2dzF0dFBlVDJSYUdRb0dJcExIUQpTdWFoT2NUV1VESkdnd2pCd3owSk80R2xGdnY1QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUenBWcmZEbkQ1OWUwczNIYW9OQ0xQcERwazhUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQmlEMWRkZDkwUQpaS2UvU2pNbTQwYVVtZlU2YVhFNG9DMndJT1RwMlkxNjBobGMrUXF6aUZEdktGY3k2MjVzWEpRcDRmb2lBajFtCmxkWGdLcTZTVDM1aVFqdGU0OGhYNG95bzNXQ1lYYXYvWkJDVmZoZ2pBTmlKelAwUTAzc2lTdVk2RlNxTDBHTysKZktncEVGS3luclFKdmZ6ckVmU3gzTWJqRTdYOEk5QVZmUTUxLzhFNEVyb1JQUzBVRXZGbGdiUCtwWWQveTZsVgpUTm8rMjFCN0V0OThhU2Y5V3lrQnZpMTZZeUhGckJPOUkwRGdtNlFxSVl0QXd6S1N5dmRCTitXVk12UitHSFBmCkVwY3VONUcyQ2x6dUd5aTJNU0VCRzhnRDNnL29uSjQwL3p3MHdTMU1JT1lweU5FMnhtNExhYll3c3dqOWVBZkQKQ0hkbndaOFdLMU15Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
              server: https://127.0.0.1:34991
            name: kind-myk8s
    
      # (Reference) kind's system:node ClusterRole
      (⎈|kind-myk8s:default) zosys@4:~$ kubectl describe clusterroles system:node
      Name:         system:node
      Labels:       kubernetes.io/bootstrapping=rbac-defaults
      Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
      PolicyRule:
        Resources                                       Non-Resource URLs  Resource Names  Verbs
        ---------                                       -----------------  --------------  -----
        leases.coordination.k8s.io                      []                 []              [create delete get patch update]
        csinodes.storage.k8s.io                         []                 []              [create delete get patch update]
        nodes                                           []                 []              [create get list watch patch update]
        certificatesigningrequests.certificates.k8s.io  []                 []              [create get list watch]
        events                                          []                 []              [create patch update]
        pods/eviction                                   []                 []              [create]
        serviceaccounts/token                           []                 []              [create]
        tokenreviews.authentication.k8s.io              []                 []              [create]
        localsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
        subjectaccessreviews.authorization.k8s.io       []                 []              [create]
        pods                                            []                 []              [get list watch create delete]
        configmaps                                      []                 []              [get list watch]
        secrets                                         []                 []              [get list watch]
        services                                        []                 []              [get list watch]
        runtimeclasses.node.k8s.io                      []                 []              [get list watch]
        csidrivers.storage.k8s.io                       []                 []              [get list watch]
        persistentvolumeclaims/status                   []                 []              [get patch update]
        endpoints                                       []                 []              [get]
        persistentvolumeclaims                          []                 []              [get]
        persistentvolumes                               []                 []              [get]
        volumeattachments.storage.k8s.io                []                 []              [get]
        nodes/status                                    []                 []              [patch update]
        pods/status                                     []                 []              [patch update]
    
      ##################################### (omitted) ###########################################
    
  • Generate the kube-proxy kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat kube-proxy.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL
          ###### (omitted) ######
  • Generate the kube-controller-manager kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat kube-controller-manager.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0URS0
          ###### (omitted) ######
  • Generate the kube-scheduler kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat kube-scheduler.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZURENDQXpTZ0F3SUJBZ0lVSWFXMFpIeWpQYWVrUDBBVUJla2x1U0dwZnI0d0RRWUpLb1pJaHZjTkFRRU4KQlFBd1FURUxNQWtHQTFVRUJoTUNWVk14RXpBUkJnTlZCQWdNQ2xkaGMyaHBibWQwYjI0eEVEQU9CZ05WQkFjTQpCMU5sWVhSMGJHVXhDekFKQmdOVkJBTU1Ba05CTUI0WERUSTJNREV3TlRFMU1EWXlOMW9YRFRNMk1ERXdOakUxCk1EWXlOMW93UVRFTE1Ba0dBMVVFQmhNQ1ZWTXhFekFSQmdOVkJBZ01DbGRoYzJocGJtZDBiMjR4RURBT0JnTl
    
          ###### (omitted) ######
  • Generate the admin kubeconfig

      root@jumpbox:~/kubernetes-the-hard-way# cat admin.kubeconfig
      apiVersion: v1
      clusters:
      - cluster:
          certificate-authority-data: LS0tLS1CRUdJTiBDRV
    
          ###### (omitted) ######
  • Distribute the Kubernetes Configuration Files

      root@jumpbox:~/kubernetes-the-hard-way# ls -l *.kubeconfig
      -rw------- 1 root root  9953 Jan  7 23:12 admin.kubeconfig
      -rw------- 1 root root 10305 Jan  7 23:09 kube-controller-manager.kubeconfig
      -rw------- 1 root root 10187 Jan  7 23:08 kube-proxy.kubeconfig
      -rw------- 1 root root 10215 Jan  7 23:10 kube-scheduler.kubeconfig
      -rw------- 1 root root 10161 Jan  7 23:02 node-0.kubeconfig
      -rw------- 1 root root 10157 Jan  7 23:03 node-1.kubeconfig
    
      # Verify on the nodes
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /var/lib/*/kubeconfig
      ssh node-1 ls -l /var/lib/*/kubeconfig
      -rw------- 1 root root 10161 Jan  7 23:13 /var/lib/kubelet/kubeconfig
      -rw------- 1 root root 10187 Jan  7 23:13 /var/lib/kube-proxy/kubeconfig
      -rw------- 1 root root 10157 Jan  7 23:13 /var/lib/kubelet/kubeconfig
      -rw------- 1 root root 10187 Jan  7 23:13 /var/lib/kube-proxy/kubeconfig
      ssh server ls -l /root/*.kubeconfig
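
A quick sketch of where each kubeconfig lands (an assumption mirroring the tutorial's scp/rename step, demonstrated here with a local scratch directory instead of real hosts): each node-specific file is renamed to a fixed name, `kubeconfig`, inside the owning component's state directory, which is why `/var/lib/kubelet/kubeconfig` on node-0 and node-1 have different sizes above.

```bash
# Sketch: per-node kubeconfigs are renamed to a fixed filename, "kubeconfig",
# under each component's state directory. Demonstrated with a local scratch
# dir instead of scp/ssh.
workdir=$(mktemp -d)
touch "${workdir}/node-0.kubeconfig" "${workdir}/kube-proxy.kubeconfig"

# On a real node this is done over the network, e.g.:
#   scp node-0.kubeconfig root@node-0:/var/lib/kubelet/kubeconfig
mkdir -p "${workdir}/var/lib/kubelet" "${workdir}/var/lib/kube-proxy"
cp "${workdir}/node-0.kubeconfig" "${workdir}/var/lib/kubelet/kubeconfig"
cp "${workdir}/kube-proxy.kubeconfig" "${workdir}/var/lib/kube-proxy/kubeconfig"
ls -l "${workdir}/var/lib/kubelet" "${workdir}/var/lib/kube-proxy"
```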

06 - Generating the Data Encryption Config and Key

root@jumpbox:~/kubernetes-the-hard-way# cat configs/encryption-config.yaml
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}


root@jumpbox:~/kubernetes-the-hard-way# envsubst < configs/encryption-config.yaml > encryption-config.yaml
root@jumpbox:~/kubernetes-the-hard-way# cat encryption-config.yaml
kind: EncryptionConfiguration
apiVersion: apiserver.config.k8s.io/v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret:
      - identity: {}    

root@jumpbox:~/kubernetes-the-hard-way# scp encryption-config.yaml root@server:~/
encryption-config.yaml                                                                                  100%  227   183.1KB/s   00:00
root@jumpbox:~/kubernetes-the-hard-way# ssh server ls -l /root/encryption-config.yaml
-rw-r--r-- 1 root root 227 Jan  7 23:26 /root/encryption-config.yaml
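
Note that the rendered `secret:` field above came out empty, which is what happens when `ENCRYPTION_KEY` is not exported in the shell that runs `envsubst`. A sketch of generating the key beforehand (the upstream tutorial uses a random 32-byte, base64-encoded key for the aescbc provider):

```bash
# The aescbc provider expects a base64-encoded 32-byte (AES-256) key.
# It must be exported so envsubst can substitute ${ENCRYPTION_KEY}.
export ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64 | tr -d '\n')

# Sanity check: decoding the key must yield exactly 32 bytes.
keylen=$(printf '%s' "$ENCRYPTION_KEY" | base64 -d | wc -c)
echo "decoded key length: ${keylen} bytes"
```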

07 - Bootstrapping the etcd Cluster

## Start etcd
root@jumpbox:~/kubernetes-the-hard-way# cat units/etcd.service | grep controller
  --name controller \
  --initial-cluster controller=http://127.0.0.1:2380 \

root@jumpbox:~/kubernetes-the-hard-way# ETCD_NAME=server
cat > units/etcd.service <<EOF
[Unit]
Description=etcd
Documentation=https://github.com/etcd-io/etcd

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --initial-advertise-peer-urls http://127.0.0.1:2380 \\
  --listen-peer-urls http://127.0.0.1:2380 \\
  --listen-client-urls http://127.0.0.1:2379 \\
  --advertise-client-urls http://127.0.0.1:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster ${ETCD_NAME}=http://127.0.0.1:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
root@jumpbox:~/kubernetes-the-hard-way# cat units/etcd.service | grep server
  --name server \
  --initial-cluster server=http://127.0.0.1:2380 \
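
The heredoc above rewrites the unit in place because its delimiter is unquoted: the shell expands `${ETCD_NAME}` and collapses `\\` into a single `\`, which is why the grep shows `--name server \`. A minimal demo of that behavior:

```bash
# With an unquoted heredoc delimiter (<<EOF, not <<'EOF'), the shell
# expands ${ETCD_NAME} and turns "\\" into "\" while writing the file.
ETCD_NAME=server
cat > /tmp/etcd-demo.service <<EOF
  --name ${ETCD_NAME} \\
  --initial-cluster ${ETCD_NAME}=http://127.0.0.1:2380 \\
EOF
cat /tmp/etcd-demo.service
```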

root@jumpbox:~/kubernetes-the-hard-way# scp \
  downloads/controller/etcd \
  downloads/client/etcdctl \
  units/etcd.service \
  root@server:~/
etcd                                                                                                    100%   24MB  47.3MB/s   00:00
etcdctl                                                                                                 100%   16MB  63.4MB/s   00:00
etcd.service 
root@jumpbox:~/kubernetes-the-hard-way# ssh root@server

root@server:~# mv etcd etcdctl /usr/local/bin/
root@server:~# mkdir -p /etc/etcd /var/lib/etcd
root@server:~# chmod 700 /var/lib/etcd
root@server:~# cp ca.crt kube-api-server.key kube-api-server.crt /etc/etcd/
root@server:~# systemctl status etcd --no-pager
● etcd.service - etcd
     Loaded: loaded (/etc/systemd/system/etcd.service; enabled; preset: enabled)
     Active: active (running) since Wed 2026-01-07 23:29:51 KST; 4s ago
       Docs: https://github.com/etcd-io/etcd
   Main PID: 2607 (etcd)
      Tasks: 8 (limit: 2297)
     Memory: 10.4M
        CPU: 97ms
####### (omitted) ###########
root@server:~# etcdctl member list
6702b0a34e2cfd39, started, server, http://127.0.0.1:2380, http://127.0.0.1:2379, false
root@server:~# etcdctl member list -w table
+------------------+---------+--------+-----------------------+-----------------------+------------+
|        ID        | STATUS  |  NAME  |      PEER ADDRS       |     CLIENT ADDRS      | IS LEARNER |
+------------------+---------+--------+-----------------------+-----------------------+------------+
| 6702b0a34e2cfd39 | started | server | http://127.0.0.1:2380 | http://127.0.0.1:2379 |      false |
+------------------+---------+--------+-----------------------+-----------------------+------------+
root@server:~# etcdctl endpoint status -w table
+----------------+------------------+------------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
|    ENDPOINT    |        ID        |  VERSION   | STORAGE VERSION | DB SIZE | IN USE | PERCENTAGE NOT IN USE | QUOTA | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS | DOWNGRADE TARGET VERSION | DOWNGRADE ENABLED |
+----------------+------------------+------------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
| 127.0.0.1:2379 | 6702b0a34e2cfd39 | 3.6.0-rc.3 |           3.6.0 |   20 kB |  16 kB |                   20% |   0 B |      true |      false |         2 |          4 |                  4 |        |                          |             false |
+----------------+------------------+------------+-----------------+---------+--------+-----------------------+-------+-----------+------------+-----------+------------+--------------------+--------+--------------------------+-------------------+
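
The default (non-table) `member list` output shown above is simple comma-separated text, so it is easy to script against. A sketch parsing the captured line (fed in as a string here; on the server you would pipe `etcdctl` itself):

```bash
# Parse the comma-separated "etcdctl member list" line captured above:
# fields are ID, status, name, peer URL, client URL, is-learner.
line='6702b0a34e2cfd39, started, server, http://127.0.0.1:2380, http://127.0.0.1:2379, false'
name=$(printf '%s\n' "$line" | awk -F', ' '{print $3}')
peer=$(printf '%s\n' "$line" | awk -F', ' '{print $4}')
echo "member ${name} peers on ${peer}"
```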

08 - Bootstrapping the Kubernetes Control Plane

  • Write the config files and send them to the server

      root@jumpbox:~/kubernetes-the-hard-way# cat ca.conf | grep '\[kube-api-server_alt_names' -A2
      [kube-api-server_alt_names]
      IP.0  = 127.0.0.1
      IP.1  = 10.32.0.1
    
      root@jumpbox:~/kubernetes-the-hard-way# cat units/kube-apiserver.service
      [Unit]
      Description=Kubernetes API Server
      Documentation=https://github.com/kubernetes/kubernetes
    
      [Service]
      ExecStart=/usr/local/bin/kube-apiserver \
        --allow-privileged=true \
        --apiserver-count=1 \
        --audit-log-maxage=30 \
        --audit-log-maxbackup=3 \
        --audit-log-maxsize=100 \
        --audit-log-path=/var/log/audit.log \
        --authorization-mode=Node,RBAC \
    
        ########## (omitted) ################
    
        root@jumpbox:~/kubernetes-the-hard-way# cat configs/kube-apiserver-to-kubelet.yaml ; echo
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        annotations:
          rbac.authorization.kubernetes.io/autoupdate: "true"
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
        name: system:kube-apiserver-to-kubelet
      rules:
        - apiGroups:
            - ""
    
      # Check the apiserver certificate subject CN
      root@jumpbox:~/kubernetes-the-hard-way# openssl x509 -in kube-api-server.crt -text -noout
      Certificate:
          Data:
              Version: 3 (0x2)
              Serial Number:
                  36:ea:c4:ee:d2:b3:c2:0c:0d:4c:4a:26:a3:56:3b:e1:a2:22:fa:cd
              Signature Algorithm: sha256WithRSAEncryption
              Issuer: C = US, ST = Washington, L = Seattle, CN = CA
              Validity
                  Not Before: Jan  5 15:25:45 2026 GMT
                  Not After : Jan  6 15:25:45 2036 GMT
              **Subject: CN = kubernetes, C = US, ST = Washington, L = Seattle**
      #### (omitted) #####
    
      # kube-scheduler
      root@jumpbox:~/kubernetes-the-hard-way# cat units/kube-scheduler.service ; echo
      [Unit]
      Description=Kubernetes Scheduler
      Documentation=https://github.com/kubernetes/kubernetes
    
      [Service]
      ExecStart=/usr/local/bin/kube-scheduler \
        --config=/etc/kubernetes/config/kube-scheduler.yaml \
        --v=2
      Restart=on-failure
      RestartSec=5
    
      [Install]
      WantedBy=multi-user.target
      root@jumpbox:~/kubernetes-the-hard-way# cat configs/kube-scheduler.yaml ; echo
      apiVersion: kubescheduler.config.k8s.io/v1
      kind: KubeSchedulerConfiguration
      clientConnection:
        kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
      leaderElection:
        leaderElect: true
    
        root@jumpbox:~/kubernetes-the-hard-way# cat units/kube-controller-manager.service ; echo
      [Unit]
      Description=Kubernetes Controller Manager
      Documentation=https://github.com/kubernetes/kubernetes
    
      [Service]
      ExecStart=/usr/local/bin/kube-controller-manager \
        --bind-address=0.0.0.0 \
        --cluster-cidr=10.200.0.0/16 \
        --cluster-name=kubernetes \
        --cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
        --cluster-signing-key-file=/var/lib/kubernetes/ca.key \
        --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
        --root-ca-file=/var/lib/kubernetes/ca.crt \
        --service-account-private-key-file=/var/lib/kubernetes/service-accounts.key \
        --service-cluster-ip-range=10.32.0.0/24 \
        --use-service-account-credentials=true \
        --v=2
      Restart=on-failure
      RestartSec=5
    
      [Install]
      WantedBy=multi-user.target
      # Verify after the scp transfer
      root@jumpbox:~/kubernetes-the-hard-way# ssh server ls -l /root
      total 295436
      -rw------- 1 root root     9953 Jan  7 23:14 admin.kubeconfig
      -rw-r--r-- 1 root root     1899 Jan  6 00:38 ca.crt
      -rw------- 1 root root     3272 Jan  6 00:38 ca.key
      -rw-r--r-- 1 root root      227 Jan  7 23:26 encryption-config.yaml
      -rwxr-xr-x 1 root root 93261976 Jan  7 23:38 kube-apiserver
      -rw-r--r-- 1 root root     2354 Jan  6 00:38 kube-api-server.crt
      -rw------- 1 root root     3272 Jan  6 00:38 kube-api-server.key
      -rw-r--r-- 1 root root     1442 Jan  7 23:38 kube-apiserver.service
      -rw-r--r-- 1 root root      727 Jan  7 23:38 kube-apiserver-to-kubelet.yaml
      -rwxr-xr-x 1 root root 85987480 Jan  7 23:38 kube-controller-manager
      -rw------- 1 root root    10305 Jan  7 23:14 kube-controller-manager.kubeconfig
      -rw-r--r-- 1 root root      735 Jan  7 23:38 kube-controller-manager.service
      -rwxr-xr-x 1 root root 57323672 Jan  7 23:38 kubectl
      -rwxr-xr-x 1 root root 65843352 Jan  7 23:38 kube-scheduler
      -rw------- 1 root root    10215 Jan  7 23:14 kube-scheduler.kubeconfig
      -rw-r--r-- 1 root root      281 Jan  7 23:38 kube-scheduler.service
      -rw-r--r-- 1 root root      191 Jan  7 23:38 kube-scheduler.yaml
      -rw-r--r-- 1 root root     2004 Jan  6 00:38 service-accounts.crt
      -rw------- 1 root root     3268 Jan  6 00:38 service-accounts.key
  • Provision the Kubernetes Control Plane: verify with kubectl

      root@server:~# mkdir -p /etc/kubernetes/config
      root@server:~# mv kube-apiserver \
        kube-controller-manager \
        kube-scheduler kubectl \
        /usr/local/bin/
      root@server:~# ls -l /usr/local/bin/kube-*
      -rwxr-xr-x 1 root root 93261976 Jan  7 23:38 /usr/local/bin/kube-apiserver
      -rwxr-xr-x 1 root root 85987480 Jan  7 23:38 /usr/local/bin/kube-controller-manager
      -rwxr-xr-x 1 root root 65843352 Jan  7 23:38 /usr/local/bin/kube-scheduler
    
      # Verify: Configure the Kubernetes API Server
    
      root@server:~# mkdir -p /var/lib/kubernetes/
      root@server:~# mv ca.crt ca.key \
        kube-api-server.key kube-api-server.crt \
        service-accounts.key service-accounts.crt \
        encryption-config.yaml \
        /var/lib/kubernetes/
      root@server:~# ls -l /var/lib/kubernetes/
      total 28
      -rw-r--r-- 1 root root 1899 Jan  6 00:38 ca.crt
      -rw------- 1 root root 3272 Jan  6 00:38 ca.key
      -rw-r--r-- 1 root root  227 Jan  7 23:26 encryption-config.yaml
      -rw-r--r-- 1 root root 2354 Jan  6 00:38 kube-api-server.crt
      -rw------- 1 root root 3272 Jan  6 00:38 kube-api-server.key
      -rw-r--r-- 1 root root 2004 Jan  6 00:38 service-accounts.crt
      -rw------- 1 root root 3268 Jan  6 00:38 service-accounts.key
    
      # Move the files into place and start the services
      root@server:~# mv kube-controller-manager.kubeconfig /var/lib/kubernetes/
      root@server:~# mv kube-controller-manager.service /etc/systemd/system/
      root@server:~# mv kube-scheduler.kubeconfig /var/lib/kubernetes/
      root@server:~# mv kube-scheduler.yaml /etc/kubernetes/config/
      root@server:~# mv kube-scheduler.service /etc/systemd/system/
      root@server:~# systemctl daemon-reload
      root@server:~# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
      Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /etc/systemd/system/kube-apiserver.service.
      Created symlink /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service → /etc/systemd/system/kube-controller-manager.service.
      Created symlink /etc/systemd/system/multi-user.target.wants/kube-scheduler.service → /etc/systemd/system/kube-scheduler.service.
      root@server:~# systemctl start  kube-apiserver kube-controller-manager kube-scheduler
    
      root@server:~# ss -tlp | grep kube
      LISTEN 0      4096               *:6443              *:*    users:(("kube-apiserver",pid=4929,fd=3))
      LISTEN 0      4096               *:10259             *:*    users:(("kube-scheduler",pid=2752,fd=3))
      LISTEN 0      4096               *:10257             *:*    users:(("kube-controller",pid=2751,fd=3))
    
      root@server:~# kubectl get clusterroles --kubeconfig admin.kubeconfig
      NAME                                                                   CREATED AT
      admin                                                                  2026-01-07T15:07:34Z
      cluster-admin                                                          2026-01-07T15:07:34Z
      edit                                                                   2026-01-07T15:07:34Z
      system:aggregate-to-admin                                              2026-01-07T15:07:34Z
      system:aggregate-to-edit                                               2026-01-07T15:07:34Z
      system:aggregate-to-view                                               2026-01-07T15:07:34Z
      system:auth-delegator                                                  2026-01-07T15:07:34Z
      system:basic-user                                                      2026-01-07T15:07:34Z
      system:certificates.k8s.io:certificatesigningrequests:nodeclient       2026-01-07T15:07:34Z
      system:certificates.k8s.io:certificatesigningrequests:selfnodeclient   2026-01-07T15:07:34Z
      system:certificates.k8s.io:kube-apiserver-client-approver              2026-01-07T15:07:34Z
  • RBAC for Kubelet Authorization

      root@server:~# cat kube-apiserver-to-kubelet.yaml
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        annotations:
          rbac.authorization.kubernetes.io/autoupdate: "true"
        labels:
          kubernetes.io/bootstrapping: rbac-defaults
        name: system:kube-apiserver-to-kubelet
      rules:
        - apiGroups:
            - ""
          resources:
            - nodes/proxy
            - nodes/stats
            - nodes/log
            - nodes/spec
            - nodes/metrics
          verbs:
            - "*"
    
    
root@server:~# kubectl get clusterroles system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
NAME                               CREATED AT
system:kube-apiserver-to-kubelet   2026-01-07T15:10:03Z
root@server:~# kubectl get clusterrolebindings system:kube-apiserver --kubeconfig admin.kubeconfig

NAME                    ROLE                                           AGE
system:kube-apiserver   ClusterRole/system:kube-apiserver-to-kubelet   18s      
```
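
The check above confirms the `system:kube-apiserver` binding exists, but the manifest that created it is not shown. A hedged sketch of what it presumably looks like (an assumption based on the upstream kubernetes-the-hard-way manifest; the subject `kubernetes` matches the CN of the kube-api-server certificate shown earlier):

```bash
# Sketch of the ClusterRoleBinding behind the check above. Written to a
# file here; on the server it would be applied with:
#   kubectl apply -f kube-apiserver-binding.yaml --kubeconfig admin.kubeconfig
cat > /tmp/kube-apiserver-binding.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
cat /tmp/kube-apiserver-binding.yaml
```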

- Verify the control plane from the jumpbox server

```bash
root@jumpbox:~/kubernetes-the-hard-way# curl -s -k --cacert ca.crt https://server.kubernetes.local:6443/version | jq
{
  "major": "1",
  "minor": "32",
  "gitVersion": "v1.32.3",
  "gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
  "gitTreeState": "clean",
  "buildDate": "2025-03-11T19:52:21Z",
  "goVersion": "go1.23.6",
  "compiler": "gc",
  "platform": "linux/amd64"
}
```

09 - Bootstrapping the Kubernetes Worker Nodes

  • Preparation for the worker node setup

      root@jumpbox:~/kubernetes-the-hard-way# cat configs/10-bridge.conf | jq
      {
        "cniVersion": "1.0.0",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [
            [
              {
                "subnet": "SUBNET"
              }
            ]
          ],
          "routes": [
            {
              "dst": "0.0.0.0/0"
            }
          ]
        }
      }
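
The bridge template above carries a literal `SUBNET` placeholder; per node it is replaced with that node's pod CIDR (with `sed`, as the tutorial does) before the file is copied to `/etc/cni/net.d/`. Demonstrated on a minimal stand-in file:

```bash
# Minimal stand-in for configs/10-bridge.conf with the SUBNET placeholder.
printf '{"ipam": {"ranges": [[{"subnet": "SUBNET"}]]}}\n' > /tmp/10-bridge.conf

SUBNET=10.200.0.0/24   # node-0's pod subnet, matching the outputs above
sed "s|SUBNET|${SUBNET}|g" /tmp/10-bridge.conf > /tmp/10-bridge.node-0.conf
cat /tmp/10-bridge.node-0.conf
```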
    
      root@jumpbox:~/kubernetes-the-hard-way# cat configs/kubelet-config.yaml | yq
      {
        "kind": "KubeletConfiguration",
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "address": "0.0.0.0",
        "authentication": {
          "anonymous": {
            "enabled": false
          },
          "webhook": {
            "enabled": true
          },
      ####### (omitted) #############
    
      # Transfer the files
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /root
      ssh node-1 ls -l /root
      total 8
      -rw-r--r-- 1 root root 265 Jan  8 00:18 10-bridge.conf
      -rw-r--r-- 1 root root 610 Jan  8 00:18 kubelet-config.yaml
      total 8
      -rw-r--r-- 1 root root 265 Jan  8 00:18 10-bridge.conf
      -rw-r--r-- 1 root root 610 Jan  8 00:18 kubelet-config.yaml
    
      root@jumpbox:~/kubernetes-the-hard-way# cat configs/99-loopback.conf ; echo
      cat configs/containerd-config.toml ; echo
      cat configs/kube-proxy-config.yaml ; echo
      {
        "cniVersion": "1.1.0",
        "name": "lo",
        "type": "loopback"
      }
      version = 2
    
      [plugins."io.containerd.grpc.v1.cri"]
        [plugins."io.containerd.grpc.v1.cri".containerd]
          snapshotter = "overlayfs"
          default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
      kind: KubeProxyConfiguration
      apiVersion: kubeproxy.config.k8s.io/v1alpha1
      clientConnection:
        kubeconfig: "/var/lib/kube-proxy/kubeconfig"
      mode: "iptables"
      clusterCIDR: "10.200.0.0/16"
    
      # Transfer the files and verify
      root@jumpbox:~/kubernetes-the-hard-way# ssh node-0 ls -l /root
      ssh node-1 ls -l /root
      ssh node-0 ls -l /root/cni-plugins
      ssh node-1 ls -l /root/cni-plugins
      total 358584
      -rw-r--r-- 1 root root      265 Jan  8 00:18 10-bridge.conf
      -rw-r--r-- 1 root root       65 Jan  8 00:18 99-loopback.conf
      drwxr-xr-x 2 root root     4096 Jan  8 00:19 cni-plugins
      -rwxr-xr-x 1 root root 58584656 Jan  8 00:18 containerd
      -rw-r--r-- 1 root root      470 Jan  8 00:18 containerd-config.toml
    
      ######### (omitted) ##########
    
  • Set up node-0

      root@jumpbox:~/kubernetes-the-hard-way# ssh root@node-0
    
      root@node-0:~# mkdir -p \
        /etc/cni/net.d \
        /opt/cni/bin \
        /var/lib/kubelet \
        /var/lib/kube-proxy \
        /var/lib/kubernetes \
        /var/run/kubernetes
    
      root@node-0:~# mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
      cat /etc/cni/net.d/10-bridge.conf
      {
        "cniVersion": "1.0.0",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [
            [{"subnet": "10.200.0.0/24"}]
          ],
          "routes": [{"dst": "0.0.0.0/0"}]
        }
      }
      root@node-0:~# lsmod | grep netfilter
      modprobe br-netfilter
      echo "br-netfilter" >> /etc/modules-load.d/modules.conf
      lsmod | grep netfilter
      br_netfilter           36864  0
      bridge                319488  1 br_netfilter
    
      root@node-0:~# mkdir -p /etc/containerd/
      mv containerd-config.toml /etc/containerd/config.toml
      mv containerd.service /etc/systemd/system/
      cat /etc/containerd/config.toml ; echo
      version = 2
    
      [plugins."io.containerd.grpc.v1.cri"]
        [plugins."io.containerd.grpc.v1.cri".containerd]
          snapshotter = "overlayfs"
          default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"
    
      ############### (omitted) ############################
    
      # Verify the node-0 setup
      root@jumpbox:~/kubernetes-the-hard-way# ssh server "kubectl get nodes -owide --kubeconfig admin.kubeconfig"
      NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
      node-0   Ready    <none>   19s   v1.32.3   192.168.10.101   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-40-amd64   containerd://2.1.0-beta.0
  • Set up node-1 (identical to node-0, so only the result is shown and the setup steps are omitted)

      root@jumpbox:~/kubernetes-the-hard-way# ssh server "kubectl get nodes -owide --kubeconfig admin.kubeconfig"
      NAME     STATUS   ROLES    AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
      node-0   Ready    <none>   2m7s   v1.32.3   192.168.10.101   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-40-amd64   containerd://2.1.0-beta.0
      node-1   Ready    <none>   7s     v1.32.3   192.168.10.102   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-40-amd64   containerd://2.1.0-beta.0

10 - Configuring kubectl for Remote Access

  • Configure admin credentials on the jumpbox (so kubectl can be used)

      root@jumpbox:~/kubernetes-the-hard-way# curl -s --cacert ca.crt https://server.kubernetes.local:6443/version | jq
      {
        "major": "1",
        "minor": "32",
        "gitVersion": "v1.32.3",
        "gitCommit": "32cc146f75aad04beaaa245a7157eb35063a9f99",
        "gitTreeState": "clean",
        "buildDate": "2025-03-11T19:52:21Z",
        "goVersion": "go1.23.6",
        "compiler": "gc",
        "platform": "linux/amd64"
      }
    
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-cluster kubernetes-the-hard-way \
        --certificate-authority=ca.crt \
        --embed-certs=true \
        --server=https://server.kubernetes.local:6443
      Cluster "kubernetes-the-hard-way" set.
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-credentials admin \
        --client-certificate=admin.crt \
        --client-key=admin.key
      User "admin" set.
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config set-context kubernetes-the-hard-way \
        --cluster=kubernetes-the-hard-way \
        --user=admin
      Context "kubernetes-the-hard-way" created.
      root@jumpbox:~/kubernetes-the-hard-way# kubectl config use-context kubernetes-the-hard-way
      Switched to context "kubernetes-the-hard-way".
    
      root@jumpbox:~/kubernetes-the-hard-way# kubectl version
      Client Version: v1.32.3
      Kustomize Version: v5.5.0
      Server Version: v1.32.3

11 - Provisioning Pod Network Routes

  • Manually configure routes for pod-to-pod traffic between node-0 and node-1

      root@jumpbox:~/kubernetes-the-hard-way# SERVER_IP=$(grep server machines.txt | cut -d " " -f 1)
      NODE_0_IP=$(grep node-0 machines.txt | cut -d " " -f 1)
      NODE_0_SUBNET=$(grep node-0 machines.txt | cut -d " " -f 4)
      NODE_1_IP=$(grep node-1 machines.txt | cut -d " " -f 1)
      NODE_1_SUBNET=$(grep node-1 machines.txt | cut -d " " -f 4)
      echo $SERVER_IP $NODE_0_IP $NODE_0_SUBNET $NODE_1_IP $NODE_1_SUBNET
      192.168.10.100 192.168.10.101 10.200.0.0/24 192.168.10.102 10.200.1.0/24
    
      root@jumpbox:~/kubernetes-the-hard-way# ssh server ip -c route
      default via 10.0.2.2 dev eth0
      10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15
      192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.100
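
The variables above feed the route commands the tutorial adds next. A sketch building them (assuming the `machines.txt` layout `IP FQDN HOST SUBNET` implied by the `cut` fields above); shown as strings here, while on a real run each would be executed over ssh on the opposite machines:

```bash
# Stand-in machines.txt matching the IPs/subnets echoed above
# (assumed layout: IP FQDN HOST SUBNET).
printf '%s\n' \
  '192.168.10.100 server.kubernetes.local server' \
  '192.168.10.101 node-0.kubernetes.local node-0 10.200.0.0/24' \
  '192.168.10.102 node-1.kubernetes.local node-1 10.200.1.0/24' > /tmp/machines.txt

NODE_0_IP=$(grep node-0 /tmp/machines.txt | cut -d " " -f 1)
NODE_0_SUBNET=$(grep node-0 /tmp/machines.txt | cut -d " " -f 4)
route_cmd="ip route add ${NODE_0_SUBNET} via ${NODE_0_IP}"
echo "on server and node-1: ${route_cmd}"
```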

12 - Smoke Test

  • Data encryption

      root@jumpbox:~/kubernetes-the-hard-way# kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
      secret/kubernetes-the-hard-way created
    
      # Verify it was applied correctly
      root@jumpbox:~/kubernetes-the-hard-way# kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' ; echo
      bXlkYXRh
    
      root@jumpbox:~/kubernetes-the-hard-way# ssh root@server \
          'etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C'
      00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
      00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
      00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
      00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
      00000040  3a 76 31 3a 6b 65 79 31  3a 96 7c 14 e2 52 45 50  |:v1:key1:.|..REP|
      00000050  f0 6a a5 66 a0 ac 73 62  d2 d4 63 e0 96 7a d5 55  |.j.f..sb..c..z.U|
      00000060  ff a5 3d 3f 47 fc 53 c7  f3 90 de c2 bb e4 ae 60  |..=?G.S........`|
      00000070  d0 0f 1e 34 a5 a0 59 11  d4 f2 8a 32 21 85 5f f5  |...4..Y....2!._.|
      00000080  1c 01 f5 58 96 dc e1 cc  e4 1a 47 c7 48 26 f6 53  |...X......G.H&.S|
      00000090  b0 12 b4 8e f5 eb 8a 01  c6 a7 7c 67 78 57 9c e0  |..........|gxW..|
      000000a0  a1 06 84 67 8c 57 c4 a0  23 1c 6a d0 a2 62 8b 78  |...g.W..#.j..b.x|
      000000b0  8c dc fe 68 60 f8 8d 38  14 90 46 97 bc ae 4d d5  |...h`..8..F...M.|
      000000c0  37 76 8f a9 fd 74 b8 85  f0 09 8d d7 0c 61 3e e3  |7v...t.......a>.|
      000000d0  04 1a a8 99 80 15 45 7d  a5 41 b7 75 54 a6 e0 dc  |......E}.A.uT...|
      000000e0  0e 57 ae e7 3b 8b bd 1b  43 25 39 2e 04 4b 90 be  |.W..;...C%9..K..|
      000000f0  ab 3d d2 0c e7 9c 97 cf  2d 3d 2f 91 b9 f3 05 f6  |.=......-=/.....|
      00000100  3f 47 93 3a a8 dd e3 54  55 15 42 8a 39 45 cb 2b  |?G.:...TU.B.9E.+|
      00000110  c3 cb 2d bf df 5f 2b c4  12 58 38 11 73 6a c6 f8  |..-.._+..X8.sj..|
      00000120  f7 97 1b bd d3 e3 95 e1  f5 ef d1 fb 5e 4b 1b ab  |............^K..|
      00000130  36 22 7c 3d d0 e8 80 b4  4d 85 20 05 9f d4 c2 10  |6"|=....M. .....|
      00000140  96 23 c0 88 e3 a1 22 f7  61 cd 70 00 86 18 5c 24  |.#....".a.p...\$|
      00000150  9a a4 14 e3 4b 39 39 57  ee 0a                    |....K99W..|
      0000015a
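The jsonpath value above (`bXlkYXRh`) is plain base64, while the copy stored in etcd carries the `k8s:enc:aescbc:v1:key1` prefix, showing it is encrypted at rest. Decoding the base64 recovers the original value — a quick check using Python's standard library:

```python
import base64

# kubectl prints Secret data base64-encoded; decoding recovers the original value
encoded = "bXlkYXRh"
decoded = base64.b64decode(encoded).decode()
print(decoded)  # → mydata
```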
  • Deployments, Port Forwarding, Logs, Exec, Service (NodePort)

      root@jumpbox:~/kubernetes-the-hard-way# kubectl get pod
      kubectl create deployment nginx --image=nginx:latest
      kubectl scale deployment nginx --replicas=2
      kubectl get pod -owide
      No resources found in default namespace.
      deployment.apps/nginx created
      deployment.apps/nginx scaled
      NAME                     READY   STATUS              RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
      nginx-54c98b4f84-dkn2w   0/1     ContainerCreating   0          0s    <none>   node-1   <none>           <none>
      nginx-54c98b4f84-kws4j   0/1     ContainerCreating   0          0s    <none>   node-0   <none>           <none>
      root@jumpbox:~/kubernetes-the-hard-way# kubectl get pod -owide
      NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
      nginx-54c98b4f84-dkn2w   1/1     Running   0          44s   10.200.1.2   node-1   <none>           <none>
      nginx-54c98b4f84-kws4j   1/1     Running   0          44s   10.200.0.2   node-0   <none>           <none>
    
      # From the server node, confirm each pod responds on its IP
      root@jumpbox:~/kubernetes-the-hard-way# ssh server curl -s 10.200.1.2 | grep title
      <title>Welcome to nginx!</title>
      root@jumpbox:~/kubernetes-the-hard-way# ssh server curl -s 10.200.0.2 | grep title
      <title>Welcome to nginx!</title>
    
      root@jumpbox:~/kubernetes-the-hard-way# echo $POD_NAME
      nginx-54c98b4f84-dkn2w
      root@jumpbox:~/kubernetes-the-hard-way# kubectl port-forward $POD_NAME 8080:80 &
      [1] 3130
      root@jumpbox:~/kubernetes-the-hard-way# Forwarding from 127.0.0.1:8080 -> 80
      Forwarding from [::1]:8080 -> 80
    
  • Cleaning up the lab environment

      C:\Users\bom\Desktop\스터디\onpremisk8s>vagrant destroy -f
      ==> node-1: Forcing shutdown of VM...
      ==> node-1: Destroying VM and associated drives...
      ==> node-0: Forcing shutdown of VM...
      ==> node-0: Destroying VM and associated drives...
      ==> server: Forcing shutdown of VM...
      ==> server: Destroying VM and associated drives...
      ==> jumpbox: Forcing shutdown of VM...
      ==> jumpbox: Destroying VM and associated drives...
    
      C:\Users\bom\Desktop\스터디\onpremisk8s>rmdir /s /q .vagrant


Taints + Tolerations + Node Affinity

Concept summary

  • Taints: apply a "taint" to a node so that only Pods meeting certain conditions can be scheduled onto it.
  • Tolerations: configure a Pod to "tolerate" a specific taint, allowing it to be scheduled onto the tainted node.
  • Node Affinity: use labels set on nodes to specify which nodes a Pod prefers, or is required, to be placed on.

Limitations

  1. Limitations of Taints + Tolerations
    • A Pod without a matching toleration will not be scheduled onto a tainted node, but a Pod with the toleration can still land on any ordinary, untainted node.
      • ❗That is, even if you want the Pod only on a specific node, it may still be placed on another node.
      • Example: a Pod with toleration color=blue can be scheduled onto a node tainted color=blue, but also onto a node with no taint at all.
  2. Limitations of Node Affinity
    • Node Affinity can steer a Pod toward nodes with matching labels,
    • but Pods without any affinity rules can still be scheduled onto those nodes (it blocks nothing).
    • Also, preferredDuringScheduling is only a preference, with no enforcement.

Achieving full control

  • Only by combining the three can you guarantee that a specific Pod is scheduled only onto a specific node:

  Mechanism             Role
  Taint (node)          Marks the node to reject Pods that do not match (NoSchedule, PreferNoSchedule, etc.)
  Toleration (Pod)      Allows only Pods that tolerate the taint to be scheduled onto the node
  Node Affinity (Pod)   Restricts the Pod to nodes carrying a specific label

✅ Example configuration

  • Node setup:
    • node1 → Label: node-type=blue, Taint: color=blue:NoSchedule
  • Pod setup:
    • Toleration: key=color, value=blue, effect=NoSchedule
    • Node Affinity: requiredDuringSchedulingIgnoredDuringExecution → node-type=blue

→ With this configuration:

  • the Pod's affinity means it can only be scheduled onto nodes labeled node-type=blue,
  • the color=blue taint keeps Pods without the matching toleration off that node,
  • so the Pod lands only on nodes that satisfy all of the conditions.
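The example configuration above can be sketched as a manifest. The node name, label, and taint values follow the hypothetical node1/blue example; the field names are the standard Kubernetes Pod spec:

```yaml
# Label and taint the node first (kubectl equivalents):
#   kubectl label node node1 node-type=blue
#   kubectl taint node node1 color=blue:NoSchedule
apiVersion: v1
kind: Pod
metadata:
  name: blue-pod
spec:
  containers:
    - name: app
      image: nginx:latest
  # Toleration: lets this Pod onto nodes tainted color=blue:NoSchedule
  tolerations:
    - key: color
      operator: Equal
      value: blue
      effect: NoSchedule
  # Node Affinity: restricts this Pod to nodes labeled node-type=blue
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-type
                operator: In
                values: ["blue"]
```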

🔚 Conclusion

  • Taints + Tolerations are a mechanism that blocks or allows access to a node
  • Node Affinity is a mechanism that specifies where a Pod should be placed
  • The two complement each other and must be used together for complete scheduling control

Lecture range: #5.4 Inheritance ~ #5.7 Code Challenge

1. Inheritance

Inheritance lets classes share methods with one another, removing repeated work; it is one of the key features of object-oriented programming. When several classes would repeat the same properties, inheriting them removes the duplication. As shown below, you name the parent class when declaring the child class.

class ClassName(ParentClassName):
    ...

 

If a subclass does not define its own __init__ method, the parent class's __init__ is called automatically.

When the subclass does define __init__, it can call the parent class's __init__ through super(), as below:

class Dog:
    def __init__(self, name, breed, age):
        self.name = name
        self.breed = breed
        self.age = age

class GuardDog(Dog):
    def __init__(self, name, breed):
        super().__init__(name, breed, 5)

    def rrrrr(self):
        print("stay away!")

class Puppy(Dog):
    def __init__(self, name, breed):
        super().__init__(name, breed, 0.1)

    def woofwoof(self):
        print("woof woof!")

ruffus = Puppy(
    name="ruffus",
    breed="beagle",
)

bibi = GuardDog(
    name="bibi",
    breed="dodog",
)

ruffus.woofwoof()

 

Puppy and GuardDog both inherit from the Dog class, receiving name, breed, and age. Each one calls super().__init__() so that age is set to its own fixed value.
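A quick check of the classes above (re-declared here so the snippet runs on its own) confirms that the age passed through super().__init__() ends up on each instance, and that both subclasses are also Dogs:

```python
class Dog:
    def __init__(self, name, breed, age):
        self.name = name
        self.breed = breed
        self.age = age

class GuardDog(Dog):
    def __init__(self, name, breed):
        super().__init__(name, breed, 5)    # guard dogs get a fixed age of 5

class Puppy(Dog):
    def __init__(self, name, breed):
        super().__init__(name, breed, 0.1)  # puppies get a fixed age of 0.1

bibi = GuardDog(name="bibi", breed="dodog")
ruffus = Puppy(name="ruffus", breed="beagle")

print(bibi.age)                 # → 5
print(ruffus.age)               # → 0.1
print(isinstance(ruffus, Dog))  # → True
```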


Lecture range: #5.0 Introduction ~ #5.3 Methods

 

1. OOP (object-oriented programming)

OOP lets you encapsulate data together with the functions that operate on it.

With OOP you can organize data more deliberately than by keeping all of your different kinds of data and functions in one file; each unit can be thought of as a box, an object, or a bubble.

Rather than leaving functions and data structures (lists, tuples, dictionaries) loosely connected, each object tells us how its data should be structured and which functions should be used to modify it, drawing a clear boundary.

 

 

2. Class

A class bundles data together with the functions that process that data, helping you structure your data.

Declaring one is as simple as:

class Puppy:
    ...

 

3. Methods

Put simply: a function defined outside a class is a function; defined inside a class, it is a method.

The most important rule about methods is this:

Every method in a class automatically receives the instance itself (self) as its first argument.

For example:

class Puppy:

  def __init__(self, name, breed):
    self.name = name
    self.breed = breed
    
ruffus = Puppy("ruffus","beagle")

print(ruffus.name,ruffus.breed)

The output is:

ruffus beagle

In other words, when ruffus is created the Puppy class is called, and self, the first argument of the method, is ruffus itself.
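This can be made explicit: calling a method on an instance is equivalent to calling it through the class with the instance passed as self. A small sketch, using a hypothetical bark method not in the lecture:

```python
class Puppy:
    def __init__(self, name, breed):
        self.name = name
        self.breed = breed

    def bark(self):
        # self is the instance the method was called on
        return f"{self.name} says woof!"

ruffus = Puppy("ruffus", "beagle")

# These two calls are equivalent: Python passes ruffus as self automatically.
print(ruffus.bark())       # → ruffus says woof!
print(Puppy.bark(ruffus))  # → ruffus says woof!
```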

 

 

 
