Installing Kubernetes

Kubernetes Installation (1.14.0)

Environment Initialization

Configure Package Repositories

Add the Kubernetes repository (Aliyun mirror)

[kubernetes]
name=Kubernetes Repository
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

Add the Docker repository (Tsinghua mirror)

# Download the repo file
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
# Point it at the mirror
sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Rebuild the yum cache
yum makecache

Install Packages

yum install -y yum-utils device-mapper-persistent-data lvm2  # Docker dependencies
yum install -y docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9-3.el7 kubectl-1.14.0-0 kubeadm-1.14.0-0 kubelet-1.14.0-0  # Docker and Kubernetes components
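yum does not start or enable these services; a commonly needed follow-up (a sketch, assuming CentOS 7 with systemd):

```shell
# Start Docker now and on every boot; enable kubelet so that kubeadm
# can manage it (it will crash-loop until kubeadm configures it, which
# is expected).
systemctl enable docker
systemctl start docker
systemctl enable kubelet
```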

Disable the Firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld  # keep it off after reboot
setenforce 0  # takes effect immediately; the line below persists it
sed -i '/^SELINUX=/s/.*/SELINUX=disabled/' /etc/selinux/config

Disable Swap

swapoff -a  # takes effect immediately
sed -i '/swap/s/^/#/' /etc/fstab  # comment out swap entries so it stays off after reboot
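The sed above comments out every line mentioning swap. A self-contained demo on a hypothetical fstab excerpt (the real command targets /etc/fstab):

```shell
# fstab.sample is a made-up excerpt for illustration only
cat > fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
# Prefix any line containing "swap" with '#'
sed -i '/swap/s/^/#/' fstab.sample
cat fstab.sample
```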

Set Kernel Parameters

# Append the following to /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

sysctl -p
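These bridge sysctls only exist once the br_netfilter kernel module is loaded, so load it before running sysctl -p (a sketch, assuming CentOS 7):

```shell
# Load the module now
modprobe br_netfilter
# Load it automatically on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```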

Set the Hostname

hostnamectl set-hostname k8s-master1  # replace k8s-master1 with the appropriate hostname
echo "127.0.0.1	  `hostname`" >> /etc/hosts

Reboot

Configure the Master Node

Generate the Default Configuration

kubeadm config print init-defaults > init-default.yaml

Edit the Configuration File

advertiseAddress: 172.20.1.101  # this machine's address; defaults to 1.2.3.4
imageRepository: docker.io/dustise  # image repository; defaults to k8s.gcr.io
podSubnet: "192.168.0.0/16"  # the Pod network CIDR; must match the network plugin's configuration
# Save the result as init-config.yaml
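The three edits can also be scripted. Below is a demo on a hypothetical minimal excerpt of the generated file (field names follow the kubeadm v1beta1 format; on a real machine you would run the sed commands against the full file):

```shell
# Made-up minimal excerpt standing in for the real init-default.yaml
cat > init-config-demo.yaml <<'EOF'
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
imageRepository: k8s.gcr.io
networking:
  serviceSubnet: 10.96.0.0/12
EOF
# Apply the three edits in place
sed -i 's/advertiseAddress: .*/advertiseAddress: 172.20.1.101/' init-config-demo.yaml
sed -i 's#imageRepository: .*#imageRepository: docker.io/dustise#' init-config-demo.yaml
sed -i '/serviceSubnet:/a\  podSubnet: "192.168.0.0/16"' init-config-demo.yaml
grep -E 'advertiseAddress|imageRepository|podSubnet' init-config-demo.yaml
```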

Pull the Required Images

Configuring a Docker registry mirror first is strongly recommended.

kubeadm config images pull --config=init-config.yaml
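Before pulling, you can list exactly which images the configuration resolves to:

```shell
# Print the image names and tags kubeadm will pull for this config
kubeadm config images list --config=init-config.yaml
```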

Run kubeadm init to Install the Master Node

kubeadm init --config=init-config.yaml
# Screen output (abridged)
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube  
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.1.101:6443 --token abcdef.0123456789abcdef     --discovery-token-ca-cert-hash sha256:f222ea68e359406c5b1f508eec6bfaddcffd47ea997113f0afcd62ef337a168a

Run the commands suggested by the output

mkdir -p $HOME/.kube  
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
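With the kubeconfig in place, kubectl should now be able to reach the API server:

```shell
# Confirm the control plane endpoints are reachable
kubectl cluster-info
# Check the health of scheduler, controller-manager, and etcd
kubectl get componentstatuses
```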

Configure the Node

Generate the join configuration file

cat join.config.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.20.1.101:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  tlsBootstrapToken: abcdef.0123456789abcdef
# The token and tlsBootstrapToken values come from the last line of the kubeadm init output on the master
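Bootstrap tokens expire after 24 hours by default; if the original token is no longer valid, a fresh one can be generated on the master:

```shell
# Create a new token and print a ready-to-use join command
kubeadm token create --print-join-command
```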

Join the Node to the Cluster

kubeadm join --config=join.config.yaml

List the cluster nodes (run on the master)

kubectl get nodes 
NAME                 STATUS     ROLES    AGE   VERSION
kubernetes-master1   NotReady   master   50m   v1.14.0
kubernetes-node1     NotReady   <none>   29m   v1.14.0
# NotReady because no network plugin has been installed yet

Install the Network Plugin (Calico)

wget https://docs.projectcalico.org/v3.7/manifests/calico.yaml
grep podSubnet: init-config.yaml
vim calico.yaml
# Set CALICO_IPV4POOL_CIDR to the podSubnet value from init-config.yaml (not the serviceSubnet)
610             - name: CALICO_IPV4POOL_CIDR
611               value: "192.168.0.0/16"
612             # Disable file logging so `kubectl logs` works.
kubectl apply -f calico.yaml
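After applying the manifest, you can wait for Calico to finish rolling out (resource names assumed from the v3.7 manifest, matching the pod names below):

```shell
# Block until every calico-node pod is up on each node
kubectl -n kube-system rollout status daemonset/calico-node
# Wait for the controllers Deployment as well
kubectl -n kube-system rollout status deployment/calico-kube-controllers
```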

Verify Cluster Status

kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
kubernetes-master1   Ready    master   85m   v1.14.0
kubernetes-node1     Ready    <none>   64m   v1.14.0
kubectl get pods --all-namespaces 
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-f6ff9cbbb-x79pc      1/1     Running   0          5m57s
kube-system   calico-node-fvhs4                            1/1     Running   0          5m57s
kube-system   calico-node-x7lvl                            1/1     Running   0          5m57s
kube-system   coredns-6897bd7b5-ckxb2                      1/1     Running   0          85m
kube-system   coredns-6897bd7b5-lztmj                      1/1     Running   0          85m
kube-system   etcd-kubernetes-master1                      1/1     Running   0          85m
kube-system   kube-apiserver-kubernetes-master1            1/1     Running   0          84m
kube-system   kube-controller-manager-kubernetes-master1   1/1     Running   0          84m
kube-system   kube-proxy-7dctv                             1/1     Running   0          65m
kube-system   kube-proxy-rrqcx                             1/1     Running   0          85m
kube-system   kube-scheduler-kubernetes-master1            1/1     Running   0          84m
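As a final smoke test, a throwaway Deployment can confirm that scheduling and image pulls work end to end (the nginx image and the nginx-test name are arbitrary choices here):

```shell
# Create a single-replica Deployment and watch it land on a node
kubectl create deployment nginx-test --image=nginx
kubectl get pods -l app=nginx-test -o wide
# Clean up
kubectl delete deployment nginx-test
```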
