How to Rebuild a Kubernetes Cluster

1. Initializing the Master Node

  • Tear down the current cluster: the existing cluster state must be removed first (a containerd-based variant is sketched right after these commands).
systemctl stop kubelet               # stop the kubelet so it does not recreate containers
docker rm -f $(docker ps -aq)        # force-remove every container
docker rmi -f $(docker images -q)    # remove every image
systemctl restart docker             # restart Docker with a clean state
rm -f ~/.kube/config                 # remove the old kubeconfig
systemctl start kubelet              # start the kubelet again
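
If your nodes use containerd instead of Docker (an assumption; this post itself uses Docker), a rough crictl equivalent of the container cleanup might look like this:

systemctl stop kubelet
crictl stop $(crictl ps -q)          # stop all running containers
crictl rm $(crictl ps -aq)           # remove all containers
crictl rmi $(crictl images -q)       # remove all images
systemctl restart containerd
systemctl start kubelet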
  • Reset the master node: use the kubeadm tool to reset it. The transcript below warns about items that kubeadm reset does not clean up; a manual cleanup sketch follows the transcript.
kubeadm reset
$ kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1102 23:53:18.131653   25270 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
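
As the reset output warns, kubeadm reset leaves the CNI configuration, iptables/IPVS rules, and kubeconfig files untouched. A minimal manual cleanup following those warnings might look like this (run as root; skip ipvsadm unless the cluster used IPVS):

rm -rf /etc/cni/net.d        # remove leftover CNI configuration
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X    # flush iptables rules and delete custom chains
ipvsadm --clear              # reset IPVS tables (only if the cluster used IPVS)
rm -f $HOME/.kube/config     # remove the stale kubeconfig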
  • Set up the master node: after the reset, run kubeadm init to bootstrap the new cluster.
kubeadm init --apiserver-advertise-address=10.255.255.99 --pod-network-cidr=10.244.0.0/16
$ kubeadm init --apiserver-advertise-address=10.255.255.99 --pod-network-cidr=10.244.0.0/16
W1103 00:12:45.373001    2643 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://storage.googleapis.com/kubernetes-release/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1103 00:12:45.373165    2643 version.go:103] falling back to the local client version: v1.19.2
W1103 00:12:45.373544    2643 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
###
### (setting up the master takes a while here)
###
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [bk8sm1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.255.255.99]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [bk8sm1 localhost] and IPs [10.255.255.99 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [bk8sm1 localhost] and IPs [10.255.255.99 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.506334 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node bk8sm1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node bk8sm1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: lhquwd.1znvb3b2ij00bs5a
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.255.255.99:6443 --token lhquwd.1znvb3b2ij00bs5a \
    --discovery-token-ca-cert-hash sha256:29a282e0ba336c0301d47426c95ce4c41366520ed63774920327d81af5b1ba24
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

This command initializes and configures the new cluster. Run the kubeadm join command printed at the end of the output on each worker node.
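
If the join command is lost or its token expires (bootstrap tokens are valid for 24 hours by default), a fresh one can be generated on the master node at any time:

kubeadm token create --print-join-command    # prints a new kubeadm join command with a valid token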

2. Initializing the Worker Nodes

  • On each worker node, clear out the old cluster state and then reset it with kubeadm:

systemctl stop kubelet
docker rm -f $(docker ps -aq)
docker rmi -f $(docker images -q)
systemctl restart docker
systemctl start kubelet
kubeadm reset
$ kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1103 00:19:46.034507    5968 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.


  • Run the join command that was generated on the master node; it must be executed on each worker node.
kubeadm join 10.255.255.99:6443 --token lhquwd.1znvb3b2ij00bs5a --discovery-token-ca-cert-hash sha256:29a282e0ba336c0301d47426c95ce4c41366520ed63774920327d81af5b1ba24
$ kubeadm join 10.255.255.99:6443 --token lhquwd.1znvb3b2ij00bs5a     --discovery-token-ca-cert-hash sha256:29a282e0ba336c0301d47426c95ce4c41366520ed63774920327d81af5b1ba24
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
  • After the join completes, check the worker nodes from the master node.

3. Verifying the Cluster Configuration

  • On the master node, verify the cluster configuration with the following command:
kubectl get nodes
$ k get nodes
NAME     STATUS   ROLES    AGE     VERSION
bk8sm1   Ready    master   7m5s    v1.19.2
bk8sn1   Ready    <none>   2m54s   v1.19.2
bk8sn2   Ready    <none>   50s     v1.19.2
bk8sn3   Ready    <none>   18s     v1.19.2

This command lists all nodes in the cluster; every node should be in the Ready state. If a node stays NotReady, the commands sketched below help diagnose it.
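
For example, using bk8sn1 (one of the node names from the output above):

kubectl describe node bk8sn1                 # check the Conditions and Events sections for the cause
kubectl get pods -n kube-system -o wide      # confirm the CNI and kube-proxy pods are running on that node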

 

  • Install a network plugin (CNI) if one is needed; Calico and Flannel are commonly used. The CNI enables communication between Pods (a Flannel example is sketched below).
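
For example, Flannel's default Pod network matches the 10.244.0.0/16 CIDR passed to kubeadm init above, so it can be installed with a single manifest. The URL below is the Flannel project's usual manifest location; check the repository for the current one:

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml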

Notes

  • Rebuilding the cluster deletes every application and setting deployed on the current cluster, so proceed with caution (a minimal backup sketch follows this list).
  • For more details on cluster configuration and management, see the official Kubernetes documentation (https://kubernetes.io/docs/home/).
  • Kubernetes cluster configuration varies by environment; consider the network plugin, storage classes, and authentication and security settings.
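
As a minimal precaution before the rebuild, the deployed objects can be dumped to YAML. This sketch only captures API objects, not PersistentVolume data or the etcd state:

kubectl get all --all-namespaces -o yaml > cluster-backup.yaml        # workload objects in every namespace
kubectl get cm,secret --all-namespaces -o yaml > config-backup.yaml   # ConfigMaps and Secrets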

 
