

KANS Cohort 3, Week 2, Part 2

Flannel CNI

K8s version: 1.30.4, CNI: Flannel v0.25.6

 

Introduction to Kubernetes networking and the Flannel CNI - link

 

 

Deploying kind & Flannel - Docs Blog Github

 

GitHub - flannel-io/cni-plugin: https://github.com/flannel-io/cni-plugin

#
cat <<EOF > kind-cni.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  labels:
    mynode: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    controllerManager:
      extraArgs:
        bind-address: 0.0.0.0
    etcd:
      local:
        extraArgs:
          listen-metrics-urls: http://0.0.0.0:2381
    scheduler:
      extraArgs:
        bind-address: 0.0.0.0
  - |
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0
- role: worker
  labels:
    mynode: worker
- role: worker
  labels:
    mynode: worker2
networking:
  disableDefaultCNI: true
EOF
kind create cluster --config kind-cni.yaml --name myk8s --image kindest/node:v1.30.4

# Verify the deployment
kind get clusters
kind get nodes --name myk8s
kubectl cluster-info

# Check the nodes: CRI
kubectl get nodes -o wide

# Check the node labels
kubectl get nodes myk8s-control-plane -o jsonpath={.metadata.labels} | jq
...
"mynode": "control-plane",
...

kubectl get nodes myk8s-worker -o jsonpath={.metadata.labels} | jq
kubectl get nodes myk8s-worker2 -o jsonpath={.metadata.labels} | jq

# Check the containers: count and names
docker ps
docker port myk8s-control-plane
docker port myk8s-worker
docker port myk8s-worker2

# Inspect inside the containers
docker exec -it myk8s-control-plane ip -br -c -4 addr
docker exec -it myk8s-worker  ip -br -c -4 addr
docker exec -it myk8s-worker2  ip -br -c -4 addr

#
docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree jq psmisc lsof wget bridge-utils tcpdump iputils-ping htop git nano -y'
docker exec -it myk8s-worker  sh -c 'apt update && apt install tree jq psmisc lsof wget bridge-utils tcpdump iputils-ping -y'
docker exec -it myk8s-worker2 sh -c 'apt update && apt install tree jq psmisc lsof wget bridge-utils tcpdump iputils-ping -y'

Build the bridge binary and copy it to your local machine, or download the copy uploaded to the team Slack channel and use that! (A prebuilt alternative is sketched just below.)
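As an alternative, a minimal sketch assuming an amd64 node; the release tag below is an assumption, so check the containernetworking/plugins releases page for the latest:

# (sketch) download the prebuilt CNI plugins instead of building from source
CNI_VERSION=v1.5.1   # assumed version; pick the latest release tag
curl -L -o cni-plugins.tgz "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz"
tar -xzf cni-plugins.tgz ./bridge   # extract just the bridge binary into the current directory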

#
docker exec -it myk8s-control-plane bash
---------------------------------------
apt install golang -y
git clone https://github.com/containernetworking/plugins
cd plugins
chmod +x build_linux.sh

#
./build_linux.sh
Building plugins
  bandwidth
  firewall
  portmap
  sbr
  tuning
  vrf
  bridge
  host-device
  ipvlan
  loopback
  macvlan
  ptp
  vlan
  dhcp
  host-local
  static

exit
---------------------------------------

# Copy it to your own PC
docker cp myk8s-control-plane:/opt/cni/bin/bridge .
ls -l bridge

 

 

#
watch -d kubectl get pod -A -owide

# CoreDNS stays Pending: with no CNI installed the node is NotReady and keeps the not-ready taint
kubectl describe pod -n kube-system -l k8s-app=kube-dns
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  57s   default-scheduler  0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

# Install the Flannel CNI
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Check the pod-security.kubernetes.io/enforce=privileged label on the namespace
kubectl get ns --show-labels 
NAME                 STATUS   AGE     LABELS
kube-flannel         Active   2m49s   k8s-app=flannel,kubernetes.io/metadata.name=kube-flannel,pod-security.kubernetes.io/enforce=privileged

kubectl get ds,pod,cm -n kube-flannel

kubectl describe cm -n kube-flannel kube-flannel-cfg
cni-conf.json:
----
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

net-conf.json:
----
{
  "Network": "10.244.0.0/16",
  "EnableNFTables": false,
  "Backend": {
    "Type": "vxlan"
  }
}

kubectl describe ds -n kube-flannel kube-flannel-ds
...
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
    ...
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
  Volumes:
   run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:  
   cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:  
   cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:  
   flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
   xtables-lock:
    Type:               HostPath (bare host directory volume)
    Path:               /run/xtables.lock
    HostPathType:       FileOrCreate
  Priority Class Name:  system-node-critical
...

kubectl exec -it ds/kube-flannel-ds -n kube-flannel -c kube-flannel -- ls -l /etc/kube-flannel


# failed to find plugin "bridge" in path [/opt/cni/bin]
kubectl get pod -A -owide
kubectl describe pod -n kube-system -l k8s-app=kube-dns
  Warning  FailedCreatePodSandBox  35s               kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "786e9caec9c312a0b8af70e14865535575601d024ec02dbb581a1f5ac0b8bb06": plugin type="flannel" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory
  Warning  FailedCreatePodSandBox  23s               kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6ccd4607d32cbb95be9ff40b97a436a07e5902e6c24d1e12aa68fefc2f8b548a": plugin type="flannel" failed (add): failed to delegate add: failed to find plugin "bridge" in path [/opt/cni/bin]

#
kubectl get pod -A -owide

#
docker cp bridge myk8s-control-plane:/opt/cni/bin/bridge
docker cp bridge myk8s-worker:/opt/cni/bin/bridge
docker cp bridge myk8s-worker2:/opt/cni/bin/bridge

#
docker exec -it myk8s-control-plane  ls -l /opt/cni/bin/
docker exec -it myk8s-worker  ls -l /opt/cni/bin/
docker exec -it myk8s-worker2 ls -l /opt/cni/bin/
for i in myk8s-control-plane myk8s-worker myk8s-worker2; do echo ">> node $i <<"; docker exec -it $i ls /opt/cni/bin/; echo; done
bridge	flannel  host-local  loopback  portmap	ptp

#
kubectl get pod -A -owide

 

Verification

(Reference) kubelet and other details

# kubelet config
kubectl describe cm -n kube-system kubelet-config
docker exec -it myk8s-control-plane cat /var/lib/kubelet/config.yaml
...
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
staticPodPath: /etc/kubernetes/manifests
...

#
docker exec -it myk8s-control-plane cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime-endpoint=unix:///run/containerd/containerd.sock --node-ip=172.18.0.2 --node-labels= --pod-infra-container-image=registry.k8s.io/pause:3.9 --provider-id=kind://docker/myk8s/myk8s-control-plane"

for i in myk8s-control-plane myk8s-worker myk8s-worker2; do echo ">> node $i <<"; docker exec -it $i cat /var/lib/kubelet/kubeadm-flags.env; echo; done

#
for i in myk8s-control-plane myk8s-worker myk8s-worker2; do echo ">> node $i <<"; docker exec -it $i ls /etc/kubernetes/manifests; echo; done
>> node myk8s-control-plane <<
etcd.yaml  kube-apiserver.yaml	kube-controller-manager.yaml  kube-scheduler.yaml

>> node myk8s-worker <<

>> node myk8s-worker2 <<

#
kubectl describe pod -n kube-system kube-apiserver-myk8s-control-plane
kubectl describe pod -n kube-system kube-controller-manager-myk8s-control-plane
kubectl describe pod -n kube-system kube-scheduler-myk8s-control-plane

#
for i in myk8s-control-plane myk8s-worker myk8s-worker2; do echo ">> node $i <<"; docker exec -it $i pstree -aT; echo; done
for i in myk8s-control-plane myk8s-worker myk8s-worker2; do echo ">> node $i <<"; docker exec -it $i pstree -atl; echo; done

#
docker exec -it myk8s-control-plane  systemctl status kubelet -l --no-pager
docker exec -it myk8s-worker  systemctl status kubelet -l --no-pager
docker exec -it myk8s-worker2 systemctl status kubelet -l --no-pager


Checking Flannel details

#
kubectl get ds,pod,cm -n kube-flannel -owide
kubectl describe cm -n kube-flannel kube-flannel-cfg

# Check the iptables rules
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-control-plane  iptables -t $i -S ; echo; done
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-worker  iptables -t $i -S ; echo; done
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-worker2 iptables -t $i -S ; echo; done


# Check the flannel settings: network range and MTU
for i in myk8s-control-plane myk8s-worker myk8s-worker2; do echo ">> node $i <<"; docker exec -it $i cat /run/flannel/subnet.env ; echo; done
>> node myk8s-control-plane <<
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24 # the range this node can allocate to pods scheduled on it
FLANNEL_MTU=65485 # the MTU to use
FLANNEL_IPMASQ=true # pods use this node's masquerading when talking to the outside (internet)
...

# Check the dedicated subnet (podCIDR) allocated to each node
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' ;echo

# Check the flannel-related node info: the VXLAN mode and the VTEP details (node IP, VtepMac)
kubectl describe node | grep -A3 Annotations
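# (sketch) pull just the flannel annotations with jsonpath; these are the standard
# flannel node annotation keys, with dots escaped for jsonpath
kubectl get node myk8s-worker -o jsonpath='{.metadata.annotations.flannel\.alpha\.coreos\.com/backend-data}' ; echo
kubectl get node myk8s-worker -o jsonpath='{.metadata.annotations.flannel\.alpha\.coreos\.com/public-ip}' ; echo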

# Enter bash on each node and check the basics below: start with the workers, then the control plane
docker exec -it myk8s-worker        bash
docker exec -it myk8s-worker2       bash
docker exec -it myk8s-control-plane bash
----------------------------------------
# Compare the host network NS with the flannel and kube-proxy containers' network NS: compare the pod IPs with the host (server) IP!
lsns -p 1
lsns -p $(pgrep flanneld)
lsns -p $(pgrep kube-proxy)


# Basic network info
ip -c -br addr
ip -c link | grep -E 'flannel|cni|veth' -A1
ip -c addr
ip -c -d addr show cni0     # appears once at least one pod with an isolated network namespace is scheduled here

ip -c -d addr show flannel.1
	5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 1e:81:fc:d5:8a:42 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535
    vxlan id 1 local 192.168.100.10 dev enp0s8 srcport 0 0 dstport 8472 nolearning ttl auto ageing 300 udpcsum noudp6zerocsumtx noudp6zerocsumrx numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
    inet 172.16.0.0/32 brd 172.16.0.0 scope global flannel.1
    
brctl show
bridge name	bridge id		STP enabled	interfaces
cni0		8000.663bf746b6a8	no		vethbce1591c
							vethc17ba51b
							vethe6540260

# Check the routing table: routes to the other nodes' pod ranges (podCIDR) have been installed
ip -c route
default via 172.18.0.1 dev eth0 
10.244.0.0/24 via 10.244.0.0 dev flannel.1 onlink 
10.244.1.0/24 dev cni0 proto kernel scope link src 10.244.1.1 
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink 
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3 

# ARP table on the flannel.1 interface: the other nodes' flannel.1 IPs and MACs
ip -c neigh show dev flannel.1

# bridge fdb: traffic to each of those MACs is forwarded to the owning node's underlay interface (enp0s8 in this output)
bridge fdb show dev flannel.1
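# (sketch) cross-check: each fdb MAC should match another node's flannel VtepMAC
# annotation, and each dst address that node's IP
bridge fdb show dev flannel.1 | grep dst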

# Ping the other nodes' flannel.1 addresses: the traffic crosses the VXLAN overlay
ping -c 1 10.244.0.0
ping -c 1 10.244.1.0
ping -c 1 10.244.2.0

# iptables filter table: pod-range (10.244.0.0/16) traffic is forwarded between all nodes
iptables -t filter -S | grep 10.244.0.0

# iptables NAT table: traffic between 10.244.0.0/16 addresses is not masqueraded;
# traffic from 10.244.0.0/16 to anything other than that range and multicast (224.0.0.0/4), i.e. external traffic, is masqueraded
iptables -t nat -S | grep 'flanneld masq' | grep -v '! -s'

----------------------------------------

 

IP check

Create two pods

# [Terminals 1 & 2] worker nodes 1 & 2 - monitoring
docker exec -it myk8s-worker bash
docker exec -it myk8s-worker2 bash
-----------------------------
watch -d "ip link | egrep 'cni|veth' ;echo; brctl show cni0"
-----------------------------

# [Terminal 3] create the resources on the fly with a cat & here-document combo
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-1
  labels:
    app: pod
spec:
  nodeSelector:
    kubernetes.io/hostname: myk8s-worker
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-2
  labels:
    app: pod
spec:
  nodeSelector:
    kubernetes.io/hostname: myk8s-worker2
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF

# Check the pods: note their IPs
kubectl get pod -o wide
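# (sketch) capture the two pod IPs into variables for the tests below
POD1IP=$(kubectl get pod pod-1 -o jsonpath='{.status.podIP}')
POD2IP=$(kubectl get pod pod-2 -o jsonpath='{.status.podIP}')
echo $POD1IP $POD2IP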

 

 

Understanding traffic flow: same node vs. across nodes

 

 

Create a pod on each node and test communication between them

https://kschoi728.tistory.com/249
https://kschoi728.tistory.com/248

 

Enter the pod shell, inspect, and capture packets

kubectl exec -it pod-1 -- zsh
-----------------------------
ip -c addr show eth0


# Which interface owns the GW IP? (1) flannel.1 (2) cni0
ip -c route
route -n
ping -c 1 <GW IP>
ping -c 1 <pod-2 IP>  # test communication with the pod on the other node
ping -c 1 8.8.8.8     # test connectivity to an external internet IP
curl -s wttr.in/Seoul # test connectivity to an external internet domain
ip -c neigh
exit
-----------------------------

 

Checking the capture

# [Terminals 1 & 2] worker nodes 1 & 2
docker exec -it myk8s-worker bash
docker exec -it myk8s-worker2 bash
-----------------------------
tcpdump -i cni0 -nn icmp
tcpdump -i flannel.1 -nn icmp
tcpdump -i eth0 -nn icmp
tcpdump -i eth0 -nn udp port 8472 -w /root/vxlan.pcap 
# press CTRL+C to stop, then check: ls -l /root/vxlan.pcap

conntrack -L | grep -i icmp
-----------------------------

# [Terminal 3]
docker cp myk8s-worker:/root/vxlan.pcap .
wireshark vxlan.pcap

Wireshark decodes VXLAN on UDP port 4789 by default. Flannel uses 8472, so adjust the UDP port setting to match.
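To do the same from the CLI, a sketch assuming tshark (Wireshark's terminal tool) is installed: force the VXLAN dissector onto UDP 8472 while reading the capture.

tshark -r vxlan.pcap -d udp.port==8472,vxlan -Y icmp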

 

 

Install lab helper tools: kube-ops-view, metrics-server, prometheus-stack (these will be used often in later study sessions)

- kube-ops-view

# helm show values geek-cookbook/kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30000 --set env.TZ="Asia/Seoul" --namespace kube-system

# Verify the installation
kubectl get deploy,pod,svc,ep -n kube-system -l app.kubernetes.io/instance=kube-ops-view

# Pin it to the myk8s-control-plane node
kubectl get nodes myk8s-control-plane -o jsonpath={.metadata.labels} | jq
kubectl describe node myk8s-control-plane | grep Taints
kubectl -n kube-system get deploy kube-ops-view -o yaml | kubectl neat   # requires the kubectl-neat plugin
kubectl -n kube-system edit deploy kube-ops-view
---
spec:
  ...
  template:
    ...
    spec:
      nodeSelector:
        mynode: control-plane
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Equal"
        effect: "NoSchedule"
---

kubectl -n kube-system get pod -o wide -l app.kubernetes.io/instance=kube-ops-view

# kube-ops-view URLs (scale 1.5 and 2)
echo -e "KUBE-OPS-VIEW URL = http://localhost:30000/#scale=1.5"
echo -e "KUBE-OPS-VIEW URL = http://localhost:30000/#scale=2"


# (Reference) uninstall
helm uninstall -n kube-system kube-ops-view

 

 

metrics-server

# metrics-server
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metric-server.yaml

# On macOS zsh the sed expression below may not apply (BSD sed syntax differs slightly); if so, edit the YAML directly and add: - --kubelet-insecure-tls
sed -i '' -r -e "/- --secure-port=10250/a\        - --kubelet-insecure-tls" metric-server.yaml
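# (sketch) on Linux (GNU sed) the equivalent one-liner would be:
#   sed -i 's/- --secure-port=10250/- --secure-port=10250\n        - --kubelet-insecure-tls/' metric-server.yaml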
kubectl apply -f metric-server.yaml
kubectl get all -n kube-system -l k8s-app=metrics-server
kubectl get apiservices |egrep '(AVAILABLE|metrics)'

# Verify
kubectl top node
kubectl top pod -A --sort-by='cpu'
kubectl top pod -A --sort-by='memory'

# (Reference) delete
kubectl delete -f metric-server.yaml

 

 

prometheus-stack

#
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Create the values file
cat <<EOT > monitor-values.yaml
prometheus:
  service:
    type: NodePort
    nodePort: 30001

  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    nodeSelector:
      mynode: control-plane
    tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Equal"
      effect: "NoSchedule"


grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: kans1234

  service:
    type: NodePort
    nodePort: 30002
  nodeSelector:
    mynode: control-plane
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Equal"
    effect: "NoSchedule"

defaultRules:
  create: false
alertmanager:
  enabled: false

EOT

# Deploy
kubectl create ns monitoring
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 62.3.0 -f monitor-values.yaml --namespace monitoring

# Verify
helm list -n monitoring

# Grafana login: admin / kans1234
echo -e "Prometheus URL = http://localhost:30001"
echo -e "Grafana URL = http://localhost:30002"


# (Reference) uninstall with helm
helm uninstall -n monitoring kube-prometheus-stack

 

 

Fixing the Prometheus target "connection refused" errors: already applied in the kind cluster YAML above

- kube-controller-manager, kube-scheduler, etcd, kube-proxy

# Fix the Prometheus target connection refused: kube-proxy
kubectl edit cm -n kube-system kube-proxy
...
metricsBindAddress: "0.0.0.0:10249" ## add this line
...
kubectl rollout restart daemonset -n kube-system kube-proxy
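# (sketch) the same edit non-interactively, assuming the ConfigMap still has the default empty value
kubectl get cm -n kube-system kube-proxy -o yaml | sed 's/metricsBindAddress: ""/metricsBindAddress: "0.0.0.0:10249"/' | kubectl apply -f -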

# Fix the Prometheus target connection refused: kube-controller-manager, kube-scheduler, etcd
docker exec -it myk8s-control-plane bash
----------------------------------------
cat /etc/kubernetes/manifests/kube-controller-manager.yaml | grep bind-address
sed -i "s/bind-address=127.0.0.1/bind-address=0.0.0.0/g" /etc/kubernetes/manifests/kube-controller-manager.yaml

cat /etc/kubernetes/manifests/kube-scheduler.yaml | grep bind-address
sed -i "s/bind-address=127.0.0.1/bind-address=0.0.0.0/g" /etc/kubernetes/manifests/kube-scheduler.yaml

cat /etc/kubernetes/manifests/etcd.yaml | grep 127.0.0.1:2381
sed -i "s/127.0.0.1:2381/0.0.0.0:2381/g" /etc/kubernetes/manifests/etcd.yaml
----------------------------------------

 

 

Check iptables after the installs

# Check the iptables rules
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-control-plane  iptables -t $i -S ; echo; done
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-worker  iptables -t $i -S ; echo; done
for i in filter nat mangle raw ; do echo ">> IPTables Type : $i <<"; docker exec -it myk8s-worker2 iptables -t $i -S ; echo; done


[Deep dive] How a pod gets its IP assigned (including how nodes get their pod IP ranges) - link

 

How a Kubernetes Pod Gets an IP Address | Ronak Nathani (ronaknathani.com)

When a pod is scheduled on a Kubernetes node, various interactions result in the pod getting an IP address; the post details how that happens and the interactions between the components involved, such as the kubelet.

 

1. The address range pods will use can be set when running kubeadm ⇒ kind uses pod-network-cidr=10.244.0.0/16 by default - link

# init kubernetes
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<control-plane container IP>

 

2. When a worker node joins the control plane, the kube-controller-manager allocates each worker node a non-conflicting podCIDR out of the pod-network-cidr range.

# Check the pod
kubectl get pod -n kube-system -l component=kube-controller-manager

# Check the kube-controller-manager pod's CIDR allocator logs
kubectl logs -n kube-system -l component=kube-controller-manager | grep CIDR
I1031 15:21:13.763667       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
I1031 15:21:13.964982       1 range_allocator.go:172] Starting range CIDR allocator

# Check the dedicated subnet (podCIDR) allocated to each worker node
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
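# (sketch) print node-to-podCIDR pairs with custom-columns
kubectl get nodes -o custom-columns='NAME:.metadata.name,PODCIDR:.spec.podCIDR'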

 

3. Then, once a pod is scheduled (assigned) to a specific worker node, the pod is allocated an IP as follows.

# Check on the worker node
docker exec -it myk8s-worker bash
---------------------------------
# Check the default flannel plugin config
cat /etc/cni/net.d/10-flannel.conflist | jq
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

# The podCIDR flannel applied on this node, plus its network metadata
cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=65485
FLANNEL_IPMASQ=true
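
# (sketch) flannel's bridge delegate uses the host-local IPAM plugin, which tracks
# allocated pod IPs as files under /var/lib/cni/networks/<network name> ("cbr0" here)
ls /var/lib/cni/networks/cbr0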

 

 

Tearing down the lab environment

kind delete cluster --name myk8s
