Quickly Setting Up a Kubernetes Cluster (on Ubuntu)

There are three ways to deploy a K8s cluster:

  • kubeadm (online installation; the approach used in this article)
  • minikube (online installation)
  • binary packages (usually for offline installation)

This article uses just two virtual machines, one as the Master node and one as a Node (worker):

Hostname      IP           OS            Specs   Role
kube-master   10.71.11.84  Ubuntu 16.04  2C4G    Master node
kube-slave    10.71.11.83  Ubuntu 16.04  4C8G    Node (worker)

Preparation

1. Set the hostname and configure the hostname mapping

For example, on the Master node:

echo kube-master > /etc/hostname
vim /etc/hosts     # add the following entries:
127.0.0.1      kube-master
10.71.11.83    kube-slave

Do the same on the Node; take care not to mix up the IPs and hostnames.
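Once both machines are configured, a quick sanity check can save debugging later; a sketch (the expected addresses follow the table above):

```shell
# Each name should resolve to the address configured in /etc/hosts
getent hosts kube-master
getent hosts kube-slave    # expect 10.71.11.83
```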

2. Configure the package source (the USTC mirror is used here)

cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb [trusted=yes] http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF

3. Disable system swap

Kubernetes v1.8+ requires that system swap be disabled.

① Temporarily disable swap: run swapoff -a

② Permanently disable swap: edit /etc/fstab with vim and comment out every line that mentions swap
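Hand-editing /etc/fstab works, but the same change can be scripted; a sketch assuming the usual one-mount-per-line fstab layout (sed keeps a .bak backup):

```shell
# Disable swap for the running session
sudo swapoff -a
# Comment out every uncommented fstab entry whose type is swap
sudo sed -i.bak -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab
```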

Update the package index and install the kubeadm, kubectl, and kubelet packages

apt-get update -y && apt-get install -y kubelet kubeadm kubectl --allow-unauthenticated
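Since a later apt upgrade could move kubelet, kubeadm, and kubectl to mismatched versions, it may be worth pinning them right after installation:

```shell
# Hold the packages at their installed versions so routine
# apt-get upgrade runs do not introduce version skew
apt-mark hold kubelet kubeadm kubectl
```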

Install Docker (kubeadm does not currently support docker-ce)

apt-get install docker.io -y

Load kernel modules (optional)

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
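modprobe only loads the modules for the current boot. To make them persist across reboots, systemd can load them at startup; a sketch (the file name k8s.conf is an arbitrary choice):

```shell
# systemd-modules-load reads /etc/modules-load.d/*.conf at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
```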

Getting started

Initialize the master node with kubeadm

This step pulls container images from gcr.io, so it requires a connection that can get past the GFW; otherwise initialization fails. Specify apiserver-advertise-address and the pod subnet pod-network-cidr here:

kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.71.11.84

If it fails, look up the details for the error reported; after fixing the problem, run kubeadm reset to restore the state from before the error.

If you see the message "Your Kubernetes master has initialized successfully!", the master node has been created and you can move on to the next step.

Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join the slave node to the cluster

Run on kube-slave:

kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538  10.71.11.84:6443

The token and sha256 hash above must be generated for your own cluster; for details, see "Adding nodes to the cluster after the kubeadm-generated token expires".
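On recent kubeadm versions the full join command, including a fresh token and the CA cert hash, can be regenerated on the master in one step:

```shell
# Prints a ready-to-paste "kubeadm join ..." line; the default
# token lifetime is 24 hours
kubeadm token create --print-join-command
```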

Running kubectl get nodes on the Master now shows:

NAME          STATUS   ROLES    AGE   VERSION
kube-master   Ready    master   15h   v1.13.4
kube-slave    Ready    <none>   15h   v1.13.4

Install the canal network plugin

Per the official canal documentation, download and apply the two files below: one sets up canal's RBAC permissions, the other deploys the canal DaemonSet. The author actually followed the "Installing a pod network add-on" guide; canal was chosen to match the --pod-network-cidr=10.244.0.0/16 used with kubeadm init.

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

The output looks like:

clusterrole.rbac.authorization.k8s.io "calico" created
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "canal-flannel" created
clusterrolebinding.rbac.authorization.k8s.io "canal-calico" created
configmap "canal-config" created
daemonset.extensions "canal" created
customresourcedefinition.apiextensions.k8s.io "felixconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "bgpconfigurations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "ippools.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "clusterinformations.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "globalnetworkpolicies.crd.projectcalico.org" created
customresourcedefinition.apiextensions.k8s.io "networkpolicies.crd.projectcalico.org" created
serviceaccount "canal" created

Run the following command to check canal's installation status:

# kubectl get pod -n kube-system -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP            NODE          NOMINATED NODE   READINESS GATES
canal-547kn                           3/3     Running   34         47h     10.71.11.84   kube-master   <none>           <none>
canal-j76k9                           3/3     Running   15         47h     10.71.11.83   kube-slave    <none>           <none>
coredns-86c58d9df4-5mnm4              1/1     Running   14         2d15h   10.244.0.45   kube-master   <none>           <none>
coredns-86c58d9df4-b76qw              1/1     Running   14         2d15h   10.244.0.44   kube-master   <none>           <none>
etcd-kube-master                      1/1     Running   15         2d15h   10.71.11.84   kube-master   <none>           <none>
kube-apiserver-kube-master            1/1     Running   17         2d15h   10.71.11.84   kube-master   <none>           <none>
kube-controller-manager-kube-master   1/1     Running   19         2d15h   10.71.11.84   kube-master   <none>           <none>
kube-proxy-q2v5x                      1/1     Running   11         2d15h   10.71.11.83   kube-slave    <none>           <none>
kube-proxy-sqvbh                      1/1     Running   16         2d15h   10.71.11.84   kube-master   <none>           <none>
kube-scheduler-kube-master            1/1     Running   15         2d15h   10.71.11.84   kube-master   <none>           <none>
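Instead of re-running get pod until everything is Running, kubectl wait can block until the pods are ready; a sketch (the 5-minute timeout is an arbitrary choice):

```shell
# Returns once every kube-system pod reports Ready, or fails on timeout
kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=300s
```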

Step one done!
