14. Setting up a cluster environment with kubeadm

2020/06/25

k8s-02: Installing a k8s cluster with kubeadm

Install kubeadm

Run all of the following steps on every node.

Configure the Aliyun yum repository

# The official repository cannot be reached from mainland China, so use the Aliyun mirror instead:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
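
Optionally, before installing anything you can confirm the mirror is reachable and see which package versions it offers. This check is not part of the original steps, just a quick sanity test:

# optional: rebuild the yum cache and list the kubeadm versions available from the Aliyun mirror
yum makecache
yum list kubeadm --showduplicates | sort -r | head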

Install kubeadm, kubelet and kubectl. Note that by default this installs whatever the latest release is, which was v1.15.2 at the time of writing:

Install and start kubelet

yum install -y kubeadm kubelet kubectl
systemctl enable kubelet && systemctl start kubelet
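
Because yum installs the newest packages by default, running this at a later date may give you a different version than the one used in this walkthrough. If you need to match v1.15.2 exactly, a sketch of pinning the versions (the version number is taken from the text above) looks like this:

# optional: pin kubeadm/kubelet/kubectl to v1.15.2 instead of the latest release
yum install -y kubeadm-1.15.2 kubelet-1.15.2 kubectl-1.15.2
systemctl enable kubelet && systemctl start kubelet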

Write the kubeadm-config file

cat <<EOF > /root/k8s/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.15.2 # pin to v1.15.2, the latest release at the time
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # pull control-plane images from the Aliyun mirror
networking:
  podSubnet: 10.244.0.0/16 # flannel will be used as the network plugin, so set the pod CIDR to match
EOF
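
Before pulling anything you can ask kubeadm which images this config resolves to, to confirm they all point at the Aliyun registry (an optional check):

# optional: list the images kubeadm will use for this config
kubeadm config images list --config /root/k8s/kubeadm-config.yaml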

Initialize the master node (run this only on the master)

kubeadm config images pull --config kubeadm-config.yaml   # pre-pull the images from the Aliyun mirror
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs   # newer kubeadm releases rename this flag to --upload-certs

When it finishes, you should see output similar to the following:

[root@master k8s]# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 10.7.80.11]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.7.80.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.7.80.11 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 38.003213 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6ck4mj.goom35tebia1i7ji
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.7.80.11:6443 --token 6ck4mj.goom35tebia1i7ji \
    --discovery-token-ca-cert-hash sha256:c5710dea690a30bcecb23ffb0efb756caf7695717a40de6d524bc1548d8deb5a
[root@master k8s]#

Configure the kubectl client

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# enable kubectl shell auto-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
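
At this point it is worth a quick check that kubectl can actually reach the new API server; note that the node will still report NotReady until the network plugin is installed:

# optional: confirm kubectl talks to the new control plane
kubectl cluster-info
kubectl get nodes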

Check the running pods; the CoreDNS pods are stuck in Pending:

kubectl get pods -A

Running kubectl describe node master shows a message like this:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

This is because the CNI network plugin is not ready yet, so let's install flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

Note that quay.io/coreos/flannel:v0.11.0-amd64 cannot be pulled from within China, so replace it with the mirror image I built: registry.cn-shanghai.aliyuncs.com/k8s-repo/flannel:v0.11.0-amd64
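
If you prefer not to edit the manifest by hand, one way to make the swap (a sketch; the sed pattern assumes the image reference appears in the manifest exactly as quay.io/coreos/flannel:v0.11.0-amd64) is to download kube-flannel.yml, rewrite the image field, and apply the result:

# download the flannel manifest, point it at the mirror image, then apply it
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#registry.cn-shanghai.aliyuncs.com/k8s-repo/flannel:v0.11.0-amd64#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml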

Once the image has been pulled, check again:

[root@master k8s]# kubectl get pods -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-6967fb4995-k4z94         1/1     Running   0          5d11h
kube-system   coredns-6967fb4995-wxrwp         1/1     Running   0          5d11h
kube-system   etcd-master                      1/1     Running   0          5d11h
kube-system   kube-apiserver-master            1/1     Running   0          5d11h
kube-system   kube-controller-manager-master   1/1     Running   3          5d11h
kube-system   kube-flannel-ds-amd64-cgcnf      1/1     Running   0          5d10h
kube-system   kube-flannel-ds-amd64-hr644      1/1     Running   0          5d10h
kube-system   kube-proxy-dk4bv                 1/1     Running   0          5d10h
kube-system   kube-proxy-smzrb                 1/1     Running   0          5d11h
kube-system   kube-scheduler-master            1/1     Running   3          5d11h

Then run the join command on each worker node:

kubeadm join 10.7.80.11:6443 --token 6ck4mj.goom35tebia1i7ji \
    --discovery-token-ca-cert-hash sha256:c5710dea690a30bcecb23ffb0efb756caf7695717a40de6d524bc1548d8deb5a
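
The bootstrap token printed by kubeadm init expires after 24 hours by default; if it has expired by the time you add a worker, you can generate a fresh join command on the master:

# run on the master if the original token has expired
kubeadm token create --print-join-command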

After a while every node reports Ready; at this point our k8s cluster setup is complete:

[root@master k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
master      Ready    master   5d11h   v1.15.2
worker-01   Ready    <none>   5d10h   v1.15.2
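
As an optional smoke test (not part of the original steps), you can run a throwaway nginx deployment to confirm that scheduling and pod networking work, assuming the nodes can pull from Docker Hub:

# optional smoke test: run an nginx deployment and check that its pod is scheduled and Running
kubectl create deployment nginx-test --image=nginx
kubectl get pods -o wide
kubectl delete deployment nginx-test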

Log output from a more recent installation run:

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.7.80.11:6443 --token o0zwxm.q5azfo1ls7uacfm5 \
    --discovery-token-ca-cert-hash sha256:835a328636a5b2b6e3dcb2f3928802e93f02e85eb39933ffaa8fcf317f7c817f

Note: the dashboard and nfs-client components set up later both use the official images.
