1. Configuration Plan
Role | Hostname | IP Address | OS | Hardware |
---|---|---|---|---|
Master | master1 | 192.168.0.1 | CentOS 7.9 | 8-core CPU, 16 GB RAM, 100 GB disk |
Node | node1 | 192.168.0.2 | CentOS 7.9 | 8-core CPU, 16 GB RAM, 100 GB disk |
Node | node2 | 192.168.0.3 | CentOS 7.9 | 8-core CPU, 16 GB RAM, 100 GB disk |
2. Set the Hostnames (Master & Node)
# Run the matching command on each machine:
$ hostnamectl set-hostname master1   # on master1
$ hostnamectl set-hostname node1     # on node1
$ hostnamectl set-hostname node2     # on node2
# Add name resolution for all three nodes on every machine:
$ vim /etc/hosts
192.168.0.1 master1
192.168.0.2 node1
192.168.0.3 node2
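All three machines need identical /etc/hosts entries. Rather than editing each file by hand, the file can be pushed out from master1; a minimal sketch, assuming root login over SSH is permitted (a password prompt is fine at this stage, since key-based login is only set up in step 3):

```shell
# Push the shared /etc/hosts from master1 to both worker nodes.
for host in node1 node2; do
  scp /etc/hosts root@${host}:/etc/hosts
done
```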
3. Configure Passwordless SSH (Master & Node)
1) Node1
$ ssh-keygen -t rsa
$ cp ~/.ssh/id_rsa.pub ~/.ssh/node1_id_rsa.pub
$ scp ~/.ssh/node1_id_rsa.pub master1:~/.ssh/
2) Node2
$ ssh-keygen -t rsa
$ cp ~/.ssh/id_rsa.pub ~/.ssh/node2_id_rsa.pub
$ scp ~/.ssh/node2_id_rsa.pub master1:~/.ssh/
3) Master
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/node1_id_rsa.pub >> ~/.ssh/authorized_keys
$ cat ~/.ssh/node2_id_rsa.pub >> ~/.ssh/authorized_keys
# Copy the merged authorized_keys to node1 and node2
$ scp ~/.ssh/authorized_keys node1:~/.ssh
$ scp ~/.ssh/authorized_keys node2:~/.ssh
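Once authorized_keys has been distributed, a quick check from master1 confirms that passwordless login actually works before continuing:

```shell
# Each command should print the remote hostname without asking for a password;
# BatchMode makes ssh fail loudly instead of prompting if keys are wrong.
for host in node1 node2; do
  ssh -o BatchMode=yes root@${host} hostname
done
```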
4. Install Base Components (Master & Node)
$ yum -y install yum-utils
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum makecache fast
$ yum -y install docker-ce
# Enabling the service also avoids the "docker service is not enabled" warning during kubeadm join
$ systemctl enable docker && systemctl start docker
yum-config-manager usage reference: https://www.cnblogs.com/G-Aurora/p/13166168.html
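The kubeadm join output in step 12 warns that Docker's default cgroup driver is cgroupfs while kubelet recommends systemd. A minimal sketch that aligns the driver before kubelet starts (run on every machine; the daemon.json content shown is the standard Docker option, nothing cluster-specific):

```shell
# Switch Docker to the systemd cgroup driver, then restart to apply.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
```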
5. Configure the Kubernetes Package Repository (Master & Node)
$ vim /etc/yum.repos.d/Kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
6. Install the Core Kubernetes Components (Master & Node)
# setenforce only lasts until reboot; also make SELinux permissive persistently
$ setenforce 0
$ sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# To install the latest version instead: yum install -y kubelet kubeadm kubectl
$ yum install -y kubelet-1.15.12 kubeadm-1.15.12 kubectl-1.15.12
$ systemctl enable kubelet && systemctl start kubelet
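kubeadm's preflight checks also expect swap to be off and bridged traffic to be visible to iptables; without this, init/join fail or the network plugin misbehaves. A sketch of the usual preparation, to run on every machine:

```shell
# kubelet refuses to run with swap enabled: turn it off now and on reboot.
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Make bridged pod traffic traverse iptables, as kubeadm's preflight expects.
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
modprobe br_netfilter
sysctl --system
```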
7. List the Required Image Versions (Master)
$ kubeadm config images list
I1101 23:32:57.801551 16075 version.go:248] remote version is much newer: v1.22.3; falling back to: stable-1.15
k8s.gcr.io/kube-apiserver:v1.15.12
k8s.gcr.io/kube-controller-manager:v1.15.12
k8s.gcr.io/kube-scheduler:v1.15.12
k8s.gcr.io/kube-proxy:v1.15.12
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
8. Pull the Images Manually (Master & Node)
# k8s.gcr.io is unreachable from mainland China, so pull each image from the Aliyun mirror and retag it to the name kubeadm expects:
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.12
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.12 k8s.gcr.io/kube-apiserver:v1.15.12
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.12
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.12 k8s.gcr.io/kube-controller-manager:v1.15.12
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.12
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.12 k8s.gcr.io/kube-scheduler:v1.15.12
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.12
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.12 k8s.gcr.io/kube-proxy:v1.15.12
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
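The pull-and-tag pairs above all follow one pattern, so they collapse into a loop; the mirror prefix and image list are exactly those from the commands above:

```shell
# Pull each image from the Aliyun mirror and retag it as k8s.gcr.io/<image>.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.15.12 kube-controller-manager:v1.15.12 \
           kube-scheduler:v1.15.12 kube-proxy:v1.15.12 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  docker pull "${MIRROR}/${img}"
  docker tag  "${MIRROR}/${img}" "k8s.gcr.io/${img}"
done
```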
9. Initialize the Control Plane at the Pinned Version (Master)
# 10.244.0.0/16 is the pod CIDR assumed by the flannel manifest applied in step 13
$ kubeadm init --kubernetes-version=1.15.12 --pod-network-cidr=10.244.0.0/16
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.1:6443 --token d3thnv.vydec6lgryigrtm3 \
--discovery-token-ca-cert-hash sha256:123127b71ec0f8072e9f2255420a638bc5960147efadd5fad08625e2fab0ab0f
10. Run the Post-init Setup Commands (Master)
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
11. Check the Cluster Status (Master)
$ kubectl get nodes
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   master   10m   v1.15.12
# NotReady is expected at this point: no pod network add-on has been installed yet (see step 13).
12. Join Each Node to the Cluster Using the Printed Command (Node)
$ kubeadm join 192.168.0.1:6443 --token d3thnv.vydec6lgryigrtm3 \
--discovery-token-ca-cert-hash sha256:123127b71ec0f8072e9f2255420a638bc5960147efadd5fad08625e2fab0ab0f
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.10. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
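The bootstrap token printed by kubeadm init expires after 24 hours by default. If a node is added later, a fresh join command can be generated on the master instead of reusing the one above:

```shell
# Prints a complete "kubeadm join ..." line with a newly created token.
kubeadm token create --print-join-command
```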
13. Install the Pod Network Add-on (Master)
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
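After applying the manifest, the flannel DaemonSet should start one pod per node, and the nodes then flip from NotReady to Ready; a quick way to watch this happen (the `app=flannel` label matches the manifest above):

```shell
# One flannel pod per node should reach Running.
kubectl -n kube-system get pods -l app=flannel
# Watch the nodes until all of them report Ready.
kubectl get nodes -w
```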
14. Copy admin.conf from the Master to Node1 and Node2 (Master)
$ scp /etc/kubernetes/admin.conf root@node1:/etc/kubernetes/
$ scp /etc/kubernetes/admin.conf root@node2:/etc/kubernetes/
15. Set KUBECONFIG on Node1 and Node2 (Node)
# Point kubectl at the copied admin.conf
$ echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
# Reload the environment
$ source /etc/profile
# Verify that every node is now Ready
$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   17m   v1.15.12
node1     Ready    <none>   15m   v1.15.12
node2     Ready    <none>   15m   v1.15.12
16. If a Node Stays NotReady, Inspect the Logs (Master & Node)
# Check pod status in the kube-system namespace
$ kubectl get pod -n kube-system
# Inspect a failing pod with: kubectl describe pod -n kube-system <pod-name>
$ kubectl describe pod -n kube-system kube-flannel-ds-t2jqq
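When the pod-level describe output is not enough, the kubelet's own journal on the affected node usually shows the root cause (certificate problems, cgroup driver mismatch, swap still enabled, and so on):

```shell
# Follow the kubelet service log live on the NotReady node...
journalctl -u kubelet -f --no-pager
# ...or take a one-off snapshot of recent errors.
journalctl -u kubelet --since "10 min ago" | grep -i error
```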