What is RKE2?
RKE2 is an enterprise-grade Kubernetes distribution developed by Rancher,
a security-focused Kubernetes designed in particular to meet U.S. federal government levels of security and compliance.
- Security and compliance
- Passes the CIS Kubernetes Benchmark v1.7 / v1.8 with the default configuration
- FIPS 140-2 compliance support
- Regular Trivy-based CVE scanning in the build pipeline
- Architecture
- Control-plane components run as static Pods managed by the kubelet
- containerd is the default container runtime
- A hardened Kubernetes with unnecessary components removed
- Operating modes
- Can run standalone
- Can be operated integrated with the Rancher platform
- Components
- K8s
- API Server, Controller Manager, Scheduler
- Proxy, Kubelet
- etcd
- runc
- containerd/cri
- CNI: Canal
- CoreDNS
- Ingress NGINX Controller and/or Traefik
- Metrics Server
- Helm
RKE2 Hands-on
Deploying the lab environment
PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> pwd
Path
----
C:\Users\bom\Desktop\스터디\7week\k8s-rke2
PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> vagrant up
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
==> k8s-node1: Preparing master VM for linked clones...
k8s-node1: This is a one time operation. Once the master VM is prepared,
k8s-node1: it will be used as a base for linked clones, making the creation
k8s-node1: of new VMs take milliseconds on a modern system.
==> k8s-node1: Importing base box 'bento/rockylinux-9'...
################### (output omitted) ####################
PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> vagrant status
Current machine states:
k8s-node1 running (virtualbox)
k8s-node2 running (virtualbox)
Installation
Installing the RKE2 server node
```bash
PS C:\Users\bom\Desktop\스터디\7week\k8s-rke2> vagrant ssh k8s-node1
This system is built by the Bento project by Chef Software
More information can be found at https://github.com/chef/bento
Use of this system is acceptance of the OS vendor EULA and License Agreements.
[root@k8s-node1 ~]# vi install.sh
[root@k8s-node1 ~]# ll
total 28
-rw-r--r--. 1 root root 25291 Feb 18 14:14 install.sh
[root@k8s-node1 ~]# chmod +x install.sh
[root@k8s-node1 ~]# INSTALL_RKE2_CHANNEL=v1.33 ./install.sh
[root@k8s-node1 ~]# rke2 --version
rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
go version go1.24.12 X:boringcrypto
[root@k8s-node1 ~]# dnf repolist
repo id                      repo name
appstream                    Rocky Linux 9 - AppStream
baseos                       Rocky Linux 9 - BaseOS
extras                       Rocky Linux 9 - Extras
rancher-rke2-1.33-stable     Rancher RKE2 1.33 (v1.33)
rancher-rke2-common-stable   Rancher RKE2 Common (v1.33)
[root@k8s-node1 ~]# cat /etc/yum.repos.d/rancher-rke2.repo
[rancher-rke2-common-stable]
name=Rancher RKE2 Common (v1.33)
baseurl=https://rpm.rancher.io/rke2/stable/common/centos/9/noarch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
[rancher-rke2-1.33-stable]
name=Rancher RKE2 1.33 (v1.33)
baseurl=https://rpm.rancher.io/rke2/stable/1.33/centos/9/x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
[root@k8s-node1 ~]# cat /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
debug: true
cni: canal
bind-address: 192.168.10.11
advertise-address: 192.168.10.11
node-ip: 192.168.10.11
disable-cloud-controller: true
disable:
- servicelb
- rke2-coredns-autoscaler
- rke2-ingress-nginx
- rke2-snapshot-controller
- rke2-snapshot-controller-crd
- rke2-snapshot-validation-webhook
```
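Because `write-kubeconfig-mode: "0644"` is set, the generated kubeconfig at `/etc/rancher/rke2/rke2.yaml` is readable right away. A small sketch of shell-profile lines to make the bundled tooling convenient on the server node (paths are RKE2's defaults; the `k` alias matches the shorthand used in the sessions below):

```shell
# Point kubectl at the RKE2-generated kubeconfig and expose the bundled binaries
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
export PATH=$PATH:/var/lib/rancher/rke2/bin
alias k=kubectl   # shorthand used later in this lab
```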
# Monitoring
Every 2.0s: pstree -a k8s-node1: Wed Feb 18 14:18:15 2026
systemd --switched-root --system --deserialize 31
|-NetworkManager --no-daemon
| `-2*[{NetworkManager}]
|-VBoxDRMClient
| `-4*[{VBoxDRMClient}]
|-VBoxService --pidfile /var/run/vboxadd-service.sh
| `-8*[{VBoxService}]
|-agetty -o -p -- \\u --noclear - linux
[root@k8s-node1 ~]# journalctl -u rke2-server -f
Feb 18 14:17:57 k8s-node1 systemd[1]: Starting Rancher Kubernetes Engine v2 (server)...
Feb 18 14:17:57 k8s-node1 rke2[6907]: time="2026-02-18T14:17:57+09:00" level=warning msg="not running in CIS mode"
Feb 18 14:17:57 k8s-node1 rke2[6907]: time="2026-02-18T14:17:57+09:00" level=info msg="Applying Pod Security Admission Configuration"
Feb 18 14:17:57 k8s-node1 rke2[6907]: time="2026-02-18T14:17:57+09:00" level=info msg="Starting rke2 v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)"
[root@k8s-node1 ~]# systemctl status rke2-server --no-pager
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
Loaded: loaded (/usr/lib/systemd/system/rke2-server.service; enabled; preset: disabled)
Active: active (running) since Wed 2026-02-18 14:21:54 KST; 3s ago
Docs: https://github.com/rancher/rke2#readme
Process: 6905 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 6906 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
[root@k8s-node1 ~]# k get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-node1 1/1 Running 0 4m27s
kube-system helm-install-rke2-canal-4sndc 0/1 Completed 0 2m2s
kube-system helm-install-rke2-coredns-ctxlg 0/1 Completed 0 2m2s
kube-system helm-install-rke2-metrics-server-hklqz 0/1 Completed 0 2m2s
kube-system helm-install-rke2-runtimeclasses-57qwm 0/1 Completed 0 2m2s
kube-system kube-apiserver-k8s-node1 1/1 Running 0 4m27s
kube-system kube-controller-manager-k8s-node1 1/1 Running 0 3m49s
kube-system kube-proxy-k8s-node1 1/1 Running 0 4m27s
kube-system kube-scheduler-k8s-node1 1/1 Running 0 3m49s
kube-system rke2-canal-vwnxr 1/2 CrashLoopBackOff 3 (39s ago) 112s
kube-system rke2-coredns-rke2-coredns-559595db99-sgwf7 1/1 Running 0 114s
kube-system rke2-metrics-server-fdcdf575d-6tb9x 1/1 Running 0 83s
The rke2 directory: addon Helm charts and certificates
```bash
[root@k8s-node1 ~]# tree /var/lib/rancher/rke2 -L 1
/var/lib/rancher/rke2
├── agent
├── bin -> /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b2872361ec5/bin
├── data
└── server
4 directories, 0 files
[root@k8s-node1 ~]# tree /var/lib/rancher/rke2/server/ -L 1
/var/lib/rancher/rke2/server/
├── agent-token -> /var/lib/rancher/rke2/server/token
├── cred
├── db
├── etc
├── manifests
├── node-token -> /var/lib/rancher/rke2/server/token
├── tls
└── token
5 directories, 3 files
[root@k8s-node1 ~]# cat /var/lib/rancher/rke2/server/token
K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
[root@k8s-node1 ~]# kubectl get crd | grep -E 'helm|addon'
addons.k3s.cattle.io              2026-02-18T05:20:08Z
helmchartconfigs.helm.cattle.io   2026-02-18T05:20:08Z
helmcharts.helm.cattle.io         2026-02-18T05:20:08Z
[root@k8s-node1 ~]# cat /var/lib/rancher/rke2/agent/etc/crictl.yaml
runtime-endpoint: unix:///run/k3s/containerd/containerd.sock
[root@k8s-node1 ~]# crictl images
IMAGE                                           TAG                            IMAGE ID        SIZE
docker.io/rancher/hardened-calico               v3.31.3-build20260206          952c7453d7729   233MB
docker.io/rancher/hardened-coredns              v1.14.1-build20260206          ad73f5d41fead   30.3MB
docker.io/rancher/hardened-etcd                 v3.5.26-k3s1-build20260126     4b6a4b1e13f3e   18.7MB
docker.io/rancher/hardened-flannel              v0.28.1-build20260206          170c73d2937d4   23MB
docker.io/rancher/hardened-k8s-metrics-server   v0.8.1-build20260206           7a4190a831781   21.4MB
docker.io/rancher/hardened-kubernetes           v1.33.8-rke2r1-build20260210   ceea5f6055309   201MB
docker.io/rancher/klipper-helm                  v0.9.14-build20260210          2e2c8cbfb79b8   64.1MB
docker.io/rancher/mirrored-pause                3.6                            6270bb605e12e   301kB
docker.io/rancher/rke2-runtime                  v1.33.8-rke2r1                 b1d89cfc44e65   100MB

# Check the security-related defaults
[root@k8s-node1 ~]# cat /var/lib/rancher/rke2/agent/etc/kubelet.conf.d/00-rke2-defaults.conf
address: 192.168.10.11
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /var/lib/rancher/rke2/agent/client-ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
clusterDNS:
- 10.43.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: unix:///run/k3s/containerd/containerd.sock
cpuManagerReconcilePeriod: 10s
crashLoopBackOff: {}
failSwapOn: false                # upstream Kubernetes refuses to start with swap enabled; RKE2 ships with swap allowed
nodeStatusUpdateFrequency: 10s   # the kubelet reports "I'm alive" to the control plane every 10 seconds
serializeImagePulls: false       # allow parallel image pulls
# ...(truncated)...
```
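These kubelet defaults can be overridden declaratively through RKE2's own config file rather than by editing the generated file. A sketch assuming the documented `kubelet-arg` key (the value shown is illustrative, not from this lab):

```yaml
# /etc/rancher/rke2/config.yaml (sketch) — override a kubelet default shown above
kubelet-arg:
  - "node-status-update-frequency=5s"   # illustrative value
```

The service has to be restarted (`systemctl restart rke2-server` or `rke2-agent`) for the change to take effect.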
Node management
Adding a worker node
Check the token
```bash
[root@k8s-node1 ~]# cat /var/lib/rancher/rke2/server/node-token
K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
[root@k8s-node1 ~]# ss -tnlp | grep 9345
LISTEN 0 4096     127.0.0.1:9345   0.0.0.0:*   users:(("rke2",pid=6907,fd=7))
LISTEN 0 4096 192.168.10.11:9345   0.0.0.0:*   users:(("rke2",pid=6907,fd=6))
LISTEN 0 4096         [::1]:9345        [::]:*  users:(("rke2",pid=6907,fd=8))
```
Join the worker node
```bash
[root@k8s-node2 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -
[INFO] using stable RPM repositories
[INFO] using 1.33 series from channel stable
Rancher RKE2 Common (v1.33)   2.1 kB/s | 659 B  00:00
Rancher RKE2 Common (v1.33)   8.0 kB/s | 2.4 kB 00:00
Importing GPG key 0xE257814A:
 Userid     : "Rancher (CI) <ci@rancher.com>"
 Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
 From       : https://rpm.rancher.io/public.key
Rancher RKE2 Common (v1.33)   3.6 kB/s | 2.6 kB 00:00
Rancher RKE2 1.33 (v1.33)     2.3 kB/s | 659 B  00:00
Rancher RKE2 1.33 (v1.33)      19 kB/s | 2.4 kB 00:00
Importing GPG key 0xE257814A:
 Userid     : "Rancher (CI) <ci@rancher.com>"
 Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
 From       : https://rpm.rancher.io/public.key
Rancher RKE2 1.33 (v1.33)     8.4 kB/s | 5.9 kB 00:00
Dependencies resolved.
==========================================================================================
 Package        Architecture   Version               Repository                   Size
==========================================================================================
Installing:
 rke2-agent     x86_64         1.33.8~rke2r1-0.el9   rancher-rke2-1.33-stable     8.3 k
Installing dependencies:
 rke2-common    x86_64         1.33.8~rke2r1-0.el9   rancher-rke2-1.33-stable      27 M
 rke2-selinux   noarch         0.22-1.el9            rancher-rke2-common-stable    22 k
Transaction Summary
==========================================================================================
Install  3 Packages
[root@k8s-node2 ~]# TOKEN=K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
[root@k8s-node2 ~]# mkdir -p /etc/rancher/rke2/
[root@k8s-node2 ~]# cat << EOF > /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: $TOKEN
EOF
[root@k8s-node2 ~]# systemctl enable --now rke2-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /usr/lib/systemd/system/rke2-agent.service.
```
Check the worker node
```bash
[k8s-node1]
[root@k8s-node1 ~]# kubectl get node -owide
NAME        STATUS   ROLES                       AGE     VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node1   Ready    control-plane,etcd,master   19m     v1.33.8+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
k8s-node2   Ready    <none>                      7m17s   v1.33.8+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
[root@k8s-node1 ~]# kubectl get pod -n kube-system -owide | grep k8s-node2
kube-proxy-k8s-node2   1/1   Running            0             7m48s   192.168.10.12   k8s-node2   <none>   <none>
rke2-canal-wr4d8       1/2   CrashLoopBackOff   6 (60s ago)   7m48s   192.168.10.12   k8s-node2   <none>   <none>

[k8s-node2]
[root@k8s-node2 agent]# tree /var/lib/rancher/rke2 -L 1
/var/lib/rancher/rke2
├── agent
├── bin -> /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b2872361ec5/bin
├── data
└── server
4 directories, 0 files
[root@k8s-node2 agent]# systemctl status rke2-agent.service --no-pager
● rke2-agent.service - Rancher Kubernetes Engine v2 (agent)
     Loaded: loaded (/usr/lib/systemd/system/rke2-agent.service; enabled; preset: disabled)
     Active: active (running) since Wed 2026-02-18 14:31:50 KST; 8min ago
       Docs: https://github.com/rancher/rke2#readme
    Process: 6711 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
    Process: 6712 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
   Main PID: 6713 (rke2)
      Tasks: 61
     Memory: 2.3G
        CPU: 53.113s
[root@k8s-node2 agent]# ln -s /var/lib/rancher/rke2/bin/containerd /usr/local/bin/containerd
[root@k8s-node2 agent]# ln -s /var/lib/rancher/rke2/bin/crictl /usr/local/bin/crictl
[root@k8s-node2 agent]# ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
```
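The node-token used for the join is not opaque: it follows the K3s/RKE2 secure-token layout `K10<hash of the server CA>::<username>:<password>`, which is how the agent pins the server CA it expects. A minimal shell sketch of splitting it (placeholder values, not the lab's real token):

```shell
# Split an RKE2 join token into its components (placeholder token, not the real one)
token='K10deadbeef::server:s3cret'
ca_hash="${token%%::*}"   # "K10deadbeef" — hash of the server CA bundle the agent will verify
creds="${token#*::}"      # "server:s3cret" — the credential part
echo "$ca_hash"
echo "$creds"
```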
Deleting a worker node → re-adding the worker node
Drain the node before removing it
```bash
[root@k8s-node1 ~]# kubectl drain k8s-node2 --ignore-daemonsets --delete-emptydir-data
node/k8s-node2 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/rke2-canal-wr4d8
node/k8s-node2 drained
[root@k8s-node1 ~]# kubectl delete node k8s-node2
node "k8s-node2" deleted
```
Remove the worker node
```bash
[root@k8s-node2 agent]# systemctl stop rke2-agent
[root@k8s-node2 agent]# ls -l /usr/bin/rke2*
-rwxr-xr-x. 1 root root 124432768 Feb 14 04:11 /usr/bin/rke2
-rwxr-xr-x. 1 root root      3373 Feb 18 02:48 /usr/bin/rke2-killall.sh
-rwxr-xr-x. 1 root root      5606 Feb 18 02:48 /usr/bin/rke2-uninstall.sh
[root@k8s-node2 agent]# cat /usr/bin/rke2-uninstall.sh
#!/bin/sh
set -ex
# helper function for timestamped logging
log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $*"
}
# helper function for logging error and exiting with a message
error() {
    log "ERROR: $*" >&2
    exit 1
}
[root@k8s-node2 agent]# tree /etc/rancher
/etc/rancher  [error opening dir]
0 directories, 0 files
[root@k8s-node2 agent]# tree /var/lib/rancher
/var/lib/rancher  [error opening dir]
```
Re-add the worker node
```bash
[root@k8s-node2 agent]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
[INFO] using stable RPM repositories
[INFO] using 1.33 series from channel stable
Rancher RKE2 Common (v1.33)   2.3 kB/s | 659 B 00:00
Rancher RKE2 1.33 (v1.33)     1.5 kB/s | 659 B 00:00
Dependencies resolved.
==========================================================================================
 Package        Architecture   Version               Repository                   Size
==========================================================================================
Installing:
 rke2-agent     x86_64         1.33.8~rke2r1-0.el9   rancher-rke2-1.33-stable     8.3 k
Installing dependencies:
 rke2-common    x86_64         1.33.8~rke2r1-0.el9   rancher-rke2-1.33-stable      27 M
 rke2-selinux   noarch         0.22-1.el9            rancher-rke2-common-stable    22 k
Transaction Summary
==========================================================================================
Install  3 Packages
[root@k8s-node2 agent]# TOKEN=K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
[root@k8s-node2 agent]# echo $TOKEN
K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
[root@k8s-node2 agent]# cat << EOF > /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: $TOKEN
EOF
[root@k8s-node2 agent]# cat /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: K106e21a8fb999718d131eb1dce5e2a55218ca993338cdff4de2c72137794588cb2::server:56d69e65a53934b4010ecd46aafe8722
[root@k8s-node2 agent]# systemctl enable --now rke2-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /usr/lib/systemd/system/rke2-agent.service.
```
```bash
[root@k8s-node2 agent]# journalctl -u rke2-agent -f
```
Deploying a sample app
```bash
[root@k8s-node1 ~]# kubectl get deploy,pod,svc,ep -owide
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES           SELECTOR
deployment.apps/webpod   0/2     2            0           3s    webpod       traefik/whoami   app=webpod

NAME                          READY   STATUS              RESTARTS   AGE   IP       NODE        NOMINATED NODE   READINESS GATES
pod/webpod-697b545f57-4nnh7   0/1     ContainerCreating   0          3s    <none>   k8s-node2   <none>           <none>
pod/webpod-697b545f57-tftg7   0/1     Pending             0          3s    <none>   k8s-node1   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes   ClusterIP   10.43.0.1      <none>        443/TCP        27m   <none>
service/webpod       NodePort    10.43.182.94   <none>        80:30000/TCP   3s    app=webpod

NAME                   ENDPOINTS            AGE
endpoints/kubernetes   192.168.10.11:6443   27m
endpoints/webpod       <none>               3s
```
Upgrades
Certificate management and manual renewal
Certificate management summary
Client/server certificate validity: 365 days from the issue date
Auto-renewal conditions:
the certificate has already expired, or
it has less than 120 days of validity remaining
→ the certificate is renewed automatically when RKE2 restarts
Auto-renewal method: the existing key is reused and only the validity period is extended
To replace the certificate together with a new key:
manual rotation with the rotate command is required
Once a certificate enters the 120-day window, a Kubernetes event
CertificateExpirationWarning is emitted
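Given a certificate's NotAfter date, the opening of the 120-day auto-renewal window is simple date arithmetic. A small sketch, assuming GNU `date` (the expiry value mirrors the `Feb 18, 2027` dates shown in the check output below):

```shell
# When does the 120-day auto-renewal window open for a cert expiring 2027-02-18?
not_after="2027-02-18"
renew_from=$(date -u -d "$not_after 120 days ago" +%Y-%m-%d)
echo "auto-renews on any RKE2 restart from: $renew_from"   # → 2026-10-21
```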
Checking node certificates and expiry dates
```bash
[k8s-node1]
[root@k8s-node1 ~]# rke2 certificate check --output table
INFO[0000] Server detected, checking agent and server certificates
FILENAME                      SUBJECT                          USAGES       EXPIRES                  RESIDUAL TIME   STATUS
client-controller.crt         system:kube-controller-manager   ClientAuth   Feb 18, 2027 05:17 UTC   1 year          OK
client-controller.crt         rke2-client-ca@1771391877        CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
kube-controller-manager.crt   kube-controller-manager          ServerAuth   Feb 18, 2027 05:17 UTC   1 year          OK
kube-controller-manager.crt   rke2-server-ca@1771391877        CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
client-scheduler.crt          system:kube-scheduler            ClientAuth   Feb 18, 2027 05:17 UTC   1 year          OK

[k8s-node2]
[root@k8s-node2 agent]# rke2 certificate check --output table
INFO[0000] Server detected, checking agent and server certificates
FILENAME   SUBJECT   USAGES   EXPIRES   RESIDUAL TIME   STATUS
client-kube-proxy.crt system:kube-proxy ClientAuth Feb 18, 2027 05:46 UTC 1 year OK
client-kube-proxy.crt rke2-client-ca@1771391877 CertSign Feb 16, 2036 05:17 UTC 10 years OK
client-kubelet.crt system:node:k8s-node2 ClientAuth Feb 18, 2027 05:46 UTC 1 year OK
client-kubelet.crt rke2-client-ca@1771391877 CertSign Feb 16, 2036 05:17 UTC 10 years OK
serving-kubelet.crt k8s-node2 ServerAuth Feb 18, 2027 05:46 UTC 1 year OK
serving-kubelet.crt rke2-server-ca@1771391877 CertSign Feb 16, 2036 05:17 UTC 10 years OK
client-rke2-controller.crt system:rke2-controller ClientAuth Feb 18, 2027 05:46 UTC 1 year OK
client-rke2-controller.crt rke2-client-ca@1771391877 CertSign Feb 16, 2036 05:17 UTC 10 years OK
```

Manual certificate rotation: use the rke2 certificate rotate command.
```bash
[root@k8s-node1 ~]# systemctl stop rke2-server
[root@k8s-node1 ~]# rke2 certificate rotate
INFO[0000] Server detected, rotating agent and server certificates
INFO[0000] Rotating dynamic listener certificate
INFO[0000] Rotating certificates for rke2-controller
INFO[0000] Rotating certificates for api-server
INFO[0000] Rotating certificates for admin
INFO[0000] Rotating certificates for auth-proxy
INFO[0000] Rotating certificates for cloud-controller
INFO[0000] Rotating certificates for etcd
INFO[0000] Rotating certificates for scheduler
INFO[0000] Rotating certificates for supervisor
INFO[0000] Rotating certificates for kube-proxy
INFO[0000] Rotating certificates for controller-manager
INFO[0000] Rotating certificates for kubelet
INFO[0000] Successfully backed up certificates to /var/lib/rancher/rke2/server/tls-1771393856, please restart rke2 server or agent to rotate certificates
[root@k8s-node1 ~]# systemctl start rke2-server
[root@k8s-node1 ~]# rke2 certificate check --output table
INFO[0000] Server detected, checking agent and server certificates
FILENAME                           SUBJECT                             USAGES       EXPIRES                  RESIDUAL TIME   STATUS
--------                           -------                             ------       -------                  -------------   ------
client-auth-proxy.crt              system:auth-proxy                   ClientAuth   Feb 18, 2027 05:52 UTC   1 year          OK
client-auth-proxy.crt              rke2-request-header-ca@1771391877   CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
client-rke2-cloud-controller.crt   rke2-cloud-controller-manager       ClientAuth   Feb 18, 2027 05:52 UTC   1 year          OK
client-rke2-cloud-controller.crt   rke2-client-ca@1771391877           CertSign     Feb 16, 2036 05:17 UTC   10 years        OK
[root@k8s-node1 ~]# diff /etc/rancher/rke2/rke2.yaml ~/.kube/config
18,19c18,19
< client-certificate-data: LS0tLS1CRUdJTiBDRVJUS   # ...(truncated)...
> client-certificate-data: LS0tLS1CRUdJTiBDRVJUS   # ...(truncated)...
```
Upgrade overview
Upgrade policy
The Kubernetes Version Skew Policy applies
RKE2 upgrades follow the Kubernetes version skew policy
Upgrades that skip a minor version are not supported
Example
v1.26 → v1.28 cannot be upgraded directly
v1.26 → v1.27 → v1.28 must be upgraded sequentially
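The skew rule above can be sketched as a tiny check (a hypothetical helper, not part of RKE2 itself):

```shell
# Does an upgrade path skip a minor version? (hypothetical helper illustrating the skew policy)
minor() { echo "$1" | cut -d. -f2; }                              # "v1.26" -> "26"
can_upgrade() { [ $(( $(minor "$2") - $(minor "$1") )) -le 1 ]; }  # at most one minor step
can_upgrade v1.26 v1.27 && echo "v1.26 -> v1.27: allowed"
can_upgrade v1.26 v1.28 || echo "v1.26 -> v1.28: blocked, go via v1.27"
```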
Release Channels
When the install script or the automated upgrade feature is used, the version is selected from a release channel.
stable (default)
Recommended for production
Validated by the community
Compatible with the latest Rancher release
latest
For trying out the newest features
May not yet be fully community-validated
May be incompatible with Rancher
Specific Kubernetes minor-version channels (e.g. v1.34)
Provide the latest patch release of that minor series
Not necessarily a stable release
Key install script variables
INSTALL_RKE2_VERSION
Specifies the RKE2 version to install
Default: stable
INSTALL_RKE2_TYPE
Specifies the type of systemd service to create
server (default)
agent
INSTALL_RKE2_CHANNEL_URL
URL from which release channel metadata is fetched
Default: https://update.rke2.io/v1-release/channels
INSTALL_RKE2_CHANNEL
- Specifies the release channel to use
- Default: stable
- Also selectable: latest, testing
INSTALL_RKE2_METHOD
- Specifies the installation method
- rpm: default on RPM-based systems
- tar: default on other systems
Manual upgrade
Manual upgrade procedure: upgrade the control-plane nodes one at a time → then upgrade the worker nodes
[rke2-server] Upgrading the control-plane node
```bash
[root@k8s-node1 ~]# kubectl get node
NAME        STATUS   ROLES                       AGE   VERSION
k8s-node1   Ready    control-plane,etcd,master   41m   v1.33.8+rke2r1
k8s-node2   Ready    <none>                      13m   v1.33.8+rke2r1
[root@k8s-node1 ~]# rke2 --version
rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
go version go1.24.12 X:boringcrypto
[root@k8s-node1 ~]# curl -s https://update.rke2.io/v1-release/channels | jq .data
{
  "id": "v1.34",
  "type": "channel",
  "links": {
    "self": "https://update.rke2.io/v1-release/channels/v1.34"
  },
  "name": "v1.34",
  "latest": "v1.34.4+rke2r1",
  "latestRegexp": "v1\\.34\\..*",
  "excludeRegexp": "^[^+]+-"
}
[root@k8s-node1 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.34 sh -
[INFO] using stable RPM repositories
[INFO] using 1.34 series from channel stable
Importing GPG key 0xE257814A:
 Userid     : "Rancher (CI) <ci@rancher.com>"
 Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
 From       : https://rpm.rancher.io/public.key
Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
[root@k8s-node1 ~]# rke2 --version
rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
go version go1.24.12 X:boringcrypto
[root@k8s-node1 ~]# kubectl get pod -n kube-system --sort-by=.metadata.creationTimestamp | tac
helm-install-rke2-canal-db69m                0/1   Completed          0              26s
helm-install-rke2-coredns-6bgr5              0/1   Completed          0              26s
helm-install-rke2-metrics-server-km25q       1/1   Running            0              26s
helm-install-rke2-runtimeclasses-qcq8j       1/1   Running            0              26s
kube-proxy-k8s-node1                         1/1   Running            0              42s
kube-controller-manager-k8s-node1            1/1   Running            0              44s
kube-scheduler-k8s-node1                     1/1   Running            0              45s
etcd-k8s-node1                               1/1   Running            0              49s
kube-apiserver-k8s-node1                     1/1   Running            0              79s
rke2-canal-jfzmt                             1/2   CrashLoopBackOff   8 (23s ago)    17m
kube-proxy-k8s-node2                         1/1   Running            0              17m
rke2-metrics-server-fdcdf575d-6tb9x          1/1   Running            0              41m
rke2-canal-vwnxr                             1/2   CrashLoopBackOff   17 (19s ago)   42m
rke2-coredns-rke2-coredns-559595db99-sgwf7   1/1   Running            0              42m
[root@k8s-node1 ~]# kubectl get node -owide
NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node1   Ready    control-plane,etcd,master   45m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
k8s-node2   Ready    <none>                      18m   v1.33.8+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
```
[rke2-agent] Upgrading the worker node
```bash
[root@k8s-node2 agent]# rke2 --version
rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
go version go1.24.12 X:boringcrypto
[root@k8s-node2 agent]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent INSTALL_RKE2_CHANNEL=v1.34 sh -
shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory
[INFO] using stable RPM repositories
[INFO] using 1.34 series from channel stable
Importing GPG key 0xE257814A:
 Userid     : "Rancher (CI) <ci@rancher.com>"
 Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
 From       : https://rpm.rancher.io/public.key
Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
Importing GPG key 0xE257814A:
[root@k8s-node2 agent]# rke2 --version
rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
go version go1.24.12 X:boringcrypto
[root@k8s-node2 agent]# dnf repolist
repo id                      repo name
appstream                    Rocky Linux 9 - AppStream
baseos                       Rocky Linux 9 - BaseOS
extras                       Rocky Linux 9 - Extras
rancher-rke2-1.34-stable     Rancher RKE2 1.34 (v1.34)
rancher-rke2-common-stable   Rancher RKE2 Common (v1.34)
[root@k8s-node2 agent]# systemctl restart rke2-agent
```
Check from the server node
```bash
[root@k8s-node1 ~]# kubectl get node -owide
NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node1   Ready    control-plane,etcd,master   47m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
k8s-node2   Ready    <none>                      20m   v1.34.4+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
[root@k8s-node1 ~]# kubectl get pod -n kube-system --sort-by=.metadata.creationTimestamp | tac
kube-proxy-k8s-node2                         1/1   Running            0              33s
kube-proxy-k8s-node1                         1/1   Running            0              2m38s
helm-install-rke2-canal-db69m                0/1   Completed          0              3m45s
helm-install-rke2-coredns-6bgr5              0/1   Completed          0              3m45s
helm-install-rke2-metrics-server-km25q       1/1   Running            2 (97s ago)    3m45s
helm-install-rke2-runtimeclasses-qcq8j       1/1   Running            2 (97s ago)    3m45s
kube-controller-manager-k8s-node1            1/1   Running            0              4m3s
kube-scheduler-k8s-node1                     1/1   Running            0              4m4s
etcd-k8s-node1                               1/1   Running            0              4m8s
kube-apiserver-k8s-node1                     1/1   Running            0              4m38s
rke2-canal-jfzmt                             1/2   Error              11 (31s ago)   20m
rke2-metrics-server-fdcdf575d-6tb9x          1/1   Running            0              44m
rke2-canal-vwnxr                             1/2   CrashLoopBackOff   22 (76s ago)   45m
rke2-coredns-rke2-coredns-559595db99-sgwf7   1/1   Running            0              45m
```
Automated upgrade overview
- Rancher's System Upgrade Controller can be used to manage RKE2 cluster upgrades
- The upgrade mechanism works in a Kubernetes-native way
- Upgrades are defined declaratively through the Plan custom resource (CRD)
- A Plan can specify:
- the nodes to upgrade
- the RKE2 version to apply
- upgrade policies and requirements
- A label selector designates the target nodes
- selection by role (control-plane, worker)
- selection by environment (dev, prod)
- selection by node group
- The controller continuously monitors Plan resources
- it automatically selects the nodes matching the criteria
- it creates a Kubernetes Job on each selected node to perform the upgrade
- upgrade scheduling and execution are managed through Jobs
- When an upgrade Job completes successfully:
- a label marking completion is automatically added to the node
- this prevents duplicate upgrades and enables status tracking
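Putting those points together, a Plan for the server nodes might look like the sketch below. The field layout follows the upstream Plan CRD; the name and label selector are assumptions, while the service account, image, and version match the fragment used in this lab:

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan            # assumed name
  namespace: system-upgrade
spec:
  concurrency: 1               # one node at a time
  cordon: true                 # cordon each node before its upgrade Job runs
  nodeSelector:                # which nodes: control-plane role label (assumed selector)
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values: ["true"]
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/rke2-upgrade   # what to apply
  version: v1.35.1+rke2r1         # pinned RKE2 version
```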
Running an automated upgrade (v1.34 → v1.35)
Installing the system-upgrade-controller
```bash
[root@k8s-node1 ~]# kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
customresourcedefinition.apiextensions.k8s.io/plans.upgrade.cattle.io created
namespace/system-upgrade created
serviceaccount/system-upgrade created
role.rbac.authorization.k8s.io/system-upgrade-controller created
clusterrole.rbac.authorization.k8s.io/system-upgrade-controller created
clusterrole.rbac.authorization.k8s.io/system-upgrade-controller-drainer created
rolebinding.rbac.authorization.k8s.io/system-upgrade created
clusterrolebinding.rbac.authorization.k8s.io/system-upgrade created
clusterrolebinding.rbac.authorization.k8s.io/system-upgrade-drainer created
configmap/default-controller-env created
deployment.apps/system-upgrade-controller created
[root@k8s-node1 ~]# kubectl get deploy,pod,cm -n system-upgrade
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/system-upgrade-controller   1/1     1            1           21s
NAME                                             READY   STATUS    RESTARTS   AGE
pod/system-upgrade-controller-6f9f9b8cf4-lwxwr   1/1     Running   0          21s
NAME                               DATA   AGE
configmap/default-controller-env   10     21s
configmap/kube-root-ca.crt         1      22s
[root@k8s-node1 ~]# kubectl get crd | grep upgrade
plans.upgrade.cattle.io   2026-02-18T06:12:32Z
[root@k8s-node1 ~]# kubectl logs -n system-upgrade -l app.kubernetes.io/name=system-upgrade-controller -f
W0218 06:12:38.134112  1 client_config.go:667] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
time="2026-02-18T06:12:38Z" level=info msg="Updating embedded CRD plans.upgrade.cattle.io"
I0218 06:12:39.607690  1 leaderelection.go:257] attempting to acquire leader lease system-upgrade/system-upgrade-controller...
I0218 06:12:39.607884  1 event.go:389] "Event occurred" object="k8s-node1" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="Starting" message="system-upgrade-controller v0.18.0 (8271c14) starting leader election for system-upgrade/system-upgrade-controller"
I0218 06:12:39.625094  1 leaderelection.go:271] successfully acquired lease system-upgrade/system-upgrade-controller
time="2026-02-18T06:12:39Z" level=info msg="Starting /v1, Kind=Node controller"
time="2026-02-18T06:12:39Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2026-02-18T06:12:39Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2026-02-18T06:12:39Z" level=info msg="Starting upgrade.cattle.io/v1, Kind=Plan controller"
I0218 06:12:39.743665  1 event.go:389] "Event occurred" object="k8s-node1" fieldPath="" kind="Node" apiVersion="" type="Normal" reason="Started" message="system-upgrade-controller v0.18.0 (8271c14) running as system-upgrade/system-upgrade-controller"
```
Write the Plan, apply it, and verify
# While writing the Plan, the latest channel did not come up properly, so the version was pinned directly:
```yaml
serviceAccountName: system-upgrade
upgrade:
  image: rancher/rke2-upgrade
version: v1.35.1+rke2r1
```
```bash
# before the upgrade Jobs run
[root@k8s-node1 ~]# kubectl get node -owide
NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node1   Ready    control-plane,etcd,master   54m   v1.34.4+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
k8s-node2   Ready    <none>                      27m   v1.34.4+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1

# after the upgrade Jobs complete
[root@k8s-node1 ~]# kubectl get node -owide
NAME        STATUS   ROLES                       AGE   VERSION          INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                      KERNEL-VERSION                 CONTAINER-RUNTIME
k8s-node1   Ready    control-plane,etcd,master   70m   v1.35.1+rke2r1   192.168.10.11   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
k8s-node2   Ready    <none>                      43m   v1.35.1+rke2r1   192.168.10.12   <none>        Rocky Linux 9.6 (Blue Onyx)   5.14.0-570.52.1.el9_6.x86_64   containerd://2.1.5-k3s1
```
Cluster API Hands-on
Installing a management cluster on kind k8s
```bash
(⎈|kind-myk8s:default) zosys@4:~$ mkdir capi-docker
(⎈|kind-myk8s:default) zosys@4:~$ cd capi-docker/
(⎈|N/A:N/A) zosys@4:~/capi-docker$ kind create cluster --name myk8s --image kindest/node:v1.35.0 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/gasida/.orbstack/run/docker.sock  # most setups use /var/run/docker.sock here
    containerPath: /var/run/docker.sock
  extraPortMappings:
  - containerPort: 30000  # sample
    hostPort: 30000
  - containerPort: 30001  # kube-ops-view
    hostPort: 30001
EOF
Creating cluster "myk8s" ...
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.12.2/clusterctl-linux-amd64 -o clusterctl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 68.2M  100 68.2M    0     0  7307k      0  0:00:09  0:00:09 --:--:-- 10.5M
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ sudo install -o root -g root -m 0755 clusterctl /usr/local/bin/clusterctl
[sudo] password for zosys:
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl version -o json | jq
{
  "clusterctl": {
    "major": "1",
    "minor": "12",
    "gitVersion": "v1.12.2",
    "gitCommit": "9ee80d1ff529c48ae0b8e022ec01d70ac496e8e5",
    "gitTreeState": "clean",
    "buildDate": "2026-01-20T16:48:19Z",
    "goVersion": "go1.24.12",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get crd | grep x-k8s
clusterclasses.cluster.x-k8s.io                      2026-02-18T06:49:12Z
clusterresourcesetbindings.addons.cluster.x-k8s.io   2026-02-18T06:49:12Z
clusterresourcesets.addons.cluster.x-k8s.io          2026-02-18T06:49:12Z
clusters.cluster.x-k8s.io                            2026-02-18T06:49:12Z
devclusters.infrastructure.cluster.x-k8s.io          2026-02-18T06:49:14Z
```
Checking the management K8s cluster
```
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl describe -n capi-system deployment.apps/capi-controller-manager | grep feature-gates
      --feature-gates=MachinePool=true,ClusterTopology=true,RuntimeSDK=false,MachineSetPreflightChecks=true,MachineWaitForVolumeDetachConsiderVolumeAttachments=true,PriorityQueue=false,ReconcilerRateLimiting=false,InPlaceUpdates=false,MachineTaintPropagation=false
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get providers.clusterctl.cluster.x-k8s.io -A
NAMESPACE                           NAME                    AGE     TYPE                     PROVIDER      VERSION
capd-system                         infrastructure-docker   2m17s   InfrastructureProvider   docker        v1.12.3
capi-kubeadm-bootstrap-system       bootstrap-kubeadm       2m18s   BootstrapProvider        kubeadm       v1.12.3
capi-kubeadm-control-plane-system   control-plane-kubeadm   2m18s   ControlPlaneProvider     kubeadm       v1.12.3
capi-system                         cluster-api             2m19s   CoreProvider             cluster-api   v1.12.3
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get providers -n capi-system cluster-api -o yaml
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
kind: Provider
metadata:
  creationTimestamp: "2026-02-18T06:49:13Z"
  generation: 1
  labels:
    cluster.x-k8s.io/provider: cluster-api
    clusterctl.cluster.x-k8s.io: ""
    clusterctl.cluster.x-k8s.io/core: inventory
  name: cluster-api
  namespace: capi-system
  resourceVersion: "1037"
  uid: 1a5dacb5-8700-46e3-a5d7-344a17bcdc9a
providerName: cluster-api
type: CoreProvider
version: v1.12.3
```

Checking cert-manager
```
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get crd | grep cert
certificaterequests.cert-manager.io   2026-02-18T06:48:49Z
certificates.cert-manager.io          2026-02-18T06:48:49Z
challenges.acme.cert-manager.io       2026-02-18T06:48:49Z
clusterissuers.cert-manager.io        2026-02-18T06:48:49Z
issuers.cert-manager.io               2026-02-18T06:48:49Z
orders.acme.cert-manager.io           2026-02-18T06:48:49Z
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get deploy,pod,svc,ep,cm,secret,sa -n cert-manager
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cert-manager              1/1     1            1           3m36s
deployment.apps/cert-manager-cainjector   1/1     1            1           3m36s
deployment.apps/cert-manager-webhook      1/1     1            1           3m36s

NAME                                           READY   STATUS    RESTARTS   AGE
pod/cert-manager-598d877b78-d2m7f              1/1     Running   0          3m36s
pod/cert-manager-cainjector-6b5777d564-pxjt2   1/1     Running   0          3m36s
pod/cert-manager-webhook-5d9fc6b4ff-q2tzg      1/1     Running   0          3m36s

NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)            AGE
service/cert-manager              ClusterIP   10.96.105.201   <none>        9402/TCP           3m37s
service/cert-manager-cainjector   ClusterIP   10.96.22.45     <none>        9402/TCP           3m37s
service/cert-manager-webhook      ClusterIP   10.96.245.210   <none>        443/TCP,9402/TCP   3m37s

NAME                                ENDPOINTS                          AGE
endpoints/cert-manager              10.244.0.7:9402                    3m37s
endpoints/cert-manager-cainjector   10.244.0.6:9402                    3m37s
endpoints/cert-manager-webhook      10.244.0.8:10250,10.244.0.8:9402   3m37s

NAME                         DATA   AGE
configmap/kube-root-ca.crt   1      3m38s

NAME                             TYPE     DATA   AGE
secret/cert-manager-webhook-ca   Opaque   3      3m22s

NAME                                      AGE
serviceaccount/cert-manager              3m38s
serviceaccount/cert-manager-cainjector   3m38s
serviceaccount/cert-manager-webhook      3m38s
serviceaccount/default                   3m38s
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get issuers.cert-manager.io -A
NAMESPACE                           NAME                                           READY   AGE
capd-system                         capd-selfsigned-issuer                         True    3m17s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-selfsigned-issuer       True    3m18s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-selfsigned-issuer   True    3m17s
capi-system                         capi-selfsigned-issuer                         True    3m19s
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get certificaterequests.cert-manager.io -A -owide
NAMESPACE                           NAME                                        APPROVED   DENIED   READY   ISSUER                                         REQUESTER                                         STATUS                                         AGE
capd-system                         capd-serving-cert-1                         True                True    capd-selfsigned-issuer                         system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m19s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-serving-cert-1       True                True    capi-kubeadm-bootstrap-selfsigned-issuer       system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m21s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-serving-cert-1   True                True    capi-kubeadm-control-plane-selfsigned-issuer   system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m20s
capi-system                         capi-serving-cert-1                         True                True    capi-selfsigned-issuer                         system:serviceaccount:cert-manager:cert-manager   Certificate fetched from issuer successfully   3m22s
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl get certificates.cert-manager.io -A -owide
NAMESPACE                           NAME                                      READY   SECRET                                            ISSUER                                         STATUS                                          AGE
capd-system                         capd-serving-cert                         True    capd-webhook-service-cert                         capd-selfsigned-issuer                         Certificate is up to date and has not expired   3m24s
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-serving-cert       True    capi-kubeadm-bootstrap-webhook-service-cert       capi-kubeadm-bootstrap-selfsigned-issuer       Certificate is up to date and has not expired   3m25s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-serving-cert   True    capi-kubeadm-control-plane-webhook-service-cert   capi-kubeadm-control-plane-selfsigned-issuer   Certificate is up to date and has not expired   3m24s
capi-system                         capi-serving-cert                         True    capi-webhook-service-cert                         capi-selfsigned-issuer                         Certificate is up to date and has not expired   3m26s
```

Creating the workload cluster
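While the manifests applied below are reconciled, provisioning can be followed from the management cluster. A convenience sketch (resource kinds per the Cluster API quickstart; this is not part of the captured session):

```shell
# Watch CAPI objects converge while the workload cluster provisions.
# Run against the management (kind) cluster context; re-run or wrap in
# `watch` until every Machine reports Running.
kubectl get cluster,kubeadmcontrolplane,machinedeployment,machines -A
```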
```
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export SERVICE_CIDR=["10.20.0.0/16"]
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export POD_CIDR=["10.10.0.0/16"]
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export SERVICE_DOMAIN="myk8s-1.local"
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ export POD_SECURITY_STANDARD_ENABLED="false"
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.34.3 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 \
  > capi-quickstart.yaml
New clusterctl version available: v1.12.2 -> v1.12.3
sigs.k8s.io/cluster-api
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl apply -f capi-quickstart.yaml
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
dockermachinepooltemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinepooltemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
cluster.cluster.x-k8s.io/capi-quickstart created

## Verify creation, fetch kubeconfig credentials, install the CNI plugin, then check
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl get kubeconfig capi-quickstart > capi-quickstart.kubeconfig
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ kubectl --kubeconfig=capi-quickstart.kubeconfig apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.0/manifests/calico.yaml
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                                                             NAMES
01fa53373460   kindest/node:v1.35.0   "/usr/local/bin/entr…"   21 minutes ago   Up 21 minutes   0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:38469->6443/tcp   myk8s-control-plane
098bc8adc657   kindest/node:v1.35.0   "/usr/local/bin/entr…"   3 minutes ago    Up 3 minutes                                                                      capi-quickstart-md-0-67fkn-9ajzs-lcjv4
4f137h6yy154   kindest/node:v1.35.0   "/usr/local/bin/entr…"
(⎈|kind-myk8s:N/A) zosys@4:~/capi-docker$ clusterctl describe cluster capi-quickstart
NAME                      REPLICAS   AVAILABLE   READY   UP TO DATE   STATUS   REASON   SINCE   MESSAGE
Cluster/capi-quickstart   6/6        6           6       0            True     Available
```

Checking the workload LB
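CAPD fronts the workload control plane with a `kindest/haproxy` container. Besides `docker ps`, its live backend state can be read from the haproxy stats frontend; a sketch, assuming the host port mapping seen in this session (32771 → haproxy's `*:8404` stats bind — check `docker ps` for yours):

```shell
# Dump the haproxy stats in CSV form and keep the proxy name, backend name,
# and status columns; haproxy serves CSV when ";csv" is appended to the stats URI.
curl -s "http://127.0.0.1:32771/stats;csv" | cut -d, -f1,2,18
```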
```
(⎈|kind-myk8s:N/A) zosys@4:~$ docker ps
CONTAINER ID   IMAGE                                COMMAND                  CREATED             STATUS             PORTS                                                             NAMES
a8f3c1d9b2e7   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:32775->6443/tcp                                         capi-quickstart-j9fdm-6zg8v
b7e2d4a1c9f3   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:32774->6443/tcp                                         capi-quickstart-j9fdm-27w2s
c3d9e7f1a2b6   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     capi-quickstart-md-0-p7lv8-t7r9t-nhfpb
d4a1b8c7e2f9   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     capi-quickstart-md-0-p7lv8-t7r9t-rmcls
e9f2a7b4c1d8   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                     capi-quickstart-md-0-p7lv8-t7r9t-t5ds2
f1a2b3c4d5e6   kindest/node:v1.34.3                 "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:32773->6443/tcp                                         capi-quickstart-j9fdm-ggm9z
9c8b7a6d5e4f   kindest/haproxy:v20230606-42a2262b   "haproxy -W -db -f /…"   About an hour ago   Up About an hour   0.0.0.0:32770->6443/tcp, 0.0.0.0:32771->8404/tcp                  capi-quickstart-lb
7e6d5c4b3a2f   kindest/node:v1.35.0                 "/usr/local/bin/entr…"   2 hours ago         Up 2 hours         0.0.0.0:30000-30001->30000-30001/tcp, 127.0.0.1:54601->6443/tcp   myk8s-control-plane
(⎈|kind-myk8s:N/A) zosys@4:~$ docker inspect capi-quickstart-lb | jq
...
      "Entrypoint": [
        "haproxy",
        "-W",
        "-db",
        "-f",
        "/usr/local/etc/haproxy/haproxy.cfg"
      ],
...
(⎈|kind-myk8s:N/A) zosys@4:~$ curl -sk https://127.0.0.1:32770/version | jq
{
  "major": "1",
  "minor": "34"
}
(⎈|kind-myk8s:N/A) zosys@4:~$ cat haproxy.cfg
global
  log /dev/log local0
  log /dev/log local1 notice
  daemon
  maxconn 100000

resolvers docker
  nameserver dns 127.0.0.11:53

defaults
  log global
  mode tcp
  option dontlognull
  timeout connect 5000
  timeout client 50000
  timeout server 50000
  default-server init-addr none

frontend stats
  mode http
  bind *:8404
  stats enable
  stats uri /stats
  stats refresh 1s
  stats admin if TRUE

frontend control-plane
  bind *:6443
  default_backend kube-apiservers
#############################################
```
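Every request made with `capi-quickstart.kubeconfig` passes through this haproxy frontend (`*:6443`) to whichever API server backend is healthy, so a quick end-to-end sanity check of the LB path is simply:

```shell
# Both commands reach the workload cluster through the CAPD load balancer,
# since the kubeconfig generated by `clusterctl get kubeconfig` points at it.
kubectl --kubeconfig=capi-quickstart.kubeconfig get nodes -o wide
kubectl --kubeconfig=capi-quickstart.kubeconfig get pods -n kube-system
```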
'Study > K8S-Deploy' 카테고리의 다른 글
| K8S ) 6주차 과제 (1) | 2026.02.13 |
|---|---|
| K8S ) 5주차 과제 (0) | 2026.02.06 |
| K8S) 4주차 과제 (0) | 2026.01.31 |
| K8S)3주차 과제 (0) | 2026.01.24 |
| K8S)2주차 과제 (0) | 2026.01.15 |
















