You need one master node and one worker node.

Components required on the master node:

- docker (any other container runtime also works)
- kubectl: the command-line tool for interacting with the cluster
- kubeadm: the cluster bootstrapping tool

Components required on the worker nodes:

- docker (any other container runtime also works)
- kubelet: manages Pods and containers and keeps them running and healthy
- kube-proxy: the network proxy, responsible for networking

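Before installing anything, it can help to confirm which of these binaries are already present on each node. A minimal sketch (the `check_components` helper is made up for illustration; the actual package installs follow below):

```shell
# Illustrative helper: report which of the given commands are on PATH.
check_components() {
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: ok"
    else
      echo "$cmd: MISSING"
    fi
  done
}

# On the master you would run: check_components docker kubectl kubeadm
# On a worker:                 check_components docker kubelet kube-proxy
check_components sh sed
```
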
```
# Disable SELinux
# Disable it temporarily (until reboot):
setenforce 0

# Disable swap
swapoff -a
# To disable swap permanently, comment out the swap line in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab
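# Illustration of what the sed above does, shown on a throwaway copy rather than
# the real fstab: every line mentioning swap gets commented out in place.
printf '%s\n' 'UUID=abcd / xfs defaults 0 0' '/dev/sda2 swap swap defaults 0 0' > /tmp/fstab.demo
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo
grep swap /tmp/fstab.demo    # -> #/dev/sda2 swap swap defaults 0 0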
# Switch to the Aliyun CentOS 7 mirror
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# Remove any leftover CNI network state
ip link set cni0 down
brctl delbr cni0

# Install dependencies
yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

# Stop and disable the firewall
systemctl disable firewalld && systemctl stop firewalld

# Reset iptables and allow forwarding
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# Disable SELinux permanently by editing its config files
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Load the required kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- br_netfilter
EOF

# Make the module script executable and run it
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

# Configure kernel parameters so bridged IPv4 traffic passes through iptables chains
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl -p /etc/sysctl.d/k8s.conf

# Install Docker:
# Install the tools Docker needs
yum install -y yum-utils device-mapper-persistent-data lvm2

# Configure the Aliyun Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install docker-ce and docker-ce-cli
yum makecache fast
yum install -y docker-ce docker-ce-cli

# After installing, add a start-up command to the unit file; otherwise Docker sets
# the default policy of the iptables FORWARD chain to DROP
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service

# Enable and start Docker
systemctl enable docker && systemctl start docker

# Kubernetes defaults to the "systemd" cgroup driver, while the Docker service
# defaults to "cgroupfs"; it is recommended to switch Docker to "systemd" so the
# two agree.
# Change the IP and port in insecure-registries to those of your Harbor instance.
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://0hz7oz4n.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.1.11:80"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "1"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
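# Optional sanity check (assumes python3 is installed): daemon.json must be strict
# JSON or dockerd will refuse to start. Demonstrated on a throwaway copy here; point
# it at /etc/docker/daemon.json on the real machine.
printf '%s\n' '{ "exec-opts": ["native.cgroupdriver=systemd"] }' > /tmp/daemon.json.demo
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json demo: valid JSON"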
# Restart the Docker service
sudo systemctl daemon-reload
sudo systemctl restart docker

# Add the Aliyun Kubernetes repository (on k8s-master, k8s-node1 and k8s-node2)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Check the latest available 1.23 versions
yum list --showduplicates kubeadm --disableexcludes=kubernetes

# Set the matching hostname on each node
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

# Edit the hosts file on every node
vi /etc/hosts
192.168.1.5 k8s-master
192.168.193.128 k8s-node1
192.168.1.18 k8s-node2

# Install kubeadm, kubectl and kubelet 1.23.12 (with 1.24 or later, Docker no
# longer works out of the box)
# Master node
yum install -y kubelet-1.23.12 kubectl-1.23.12 kubeadm-1.23.12

# Worker nodes
yum install -y kubelet-1.23.12 kubectl-1.23.12 kubeadm-1.23.12
echo 'KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"' > /etc/sysconfig/kubelet

# Enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet

# Check the installed version
kubelet --version
```
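Before moving on to `kubeadm init`, it is worth a quick sanity check that the prep steps above actually took effect. A minimal sketch that inspects only swap and IPv4 forwarding (extend it with SELinux and module checks as needed):

```shell
# Minimal post-prep check: swap must be off and IPv4 forwarding on.
# /proc/swaps contains only its header line when no swap is active.
status=PASS
[ "$(wc -l < /proc/swaps)" -le 1 ] || { echo "swap is still active"; status=FAIL; }
[ "$(cat /proc/sys/net/ipv4/ip_forward)" = "1" ] || { echo "ip_forward is 0"; status=FAIL; }
echo "prep checks: $status"
```
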
2. Initialize the k8s-master environment. Run the following after a fresh install, and again after any `kubeadm reset`; adjust the IP address to your actual network.

```
# This takes a while because the images have to be downloaded first.
# Change the IP to match your network environment.
# Reference: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
kubeadm init \
--kubernetes-version=1.23.12 \
--apiserver-advertise-address=192.168.1.5 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
```

Flag reference:

--pod-network-cidr: defines the Pod network range as 10.244.0.0/16.

--apiserver-advertise-address: the master host's internal IP address.

--image-repository: the image registry to pull from. By default kubeadm downloads its images from k8s.gcr.io, which is unreachable from mainland China, so this flag points it at the Aliyun mirror instead.

Cluster initialization should end with the message:

Your Kubernetes control-plane has initialized successfully!

The installer then prints a few commands for you to copy, paste and run:

```
# k8s prints these commands at the end of kubeadm init; run them as shown
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

If you are the root user, run the following instead (ideally add it to .zshrc / .bashrc; TODO: open question here):

```
export KUBECONFIG=/etc/kubernetes/admin.conf
```

On the worker nodes, run the join command below. Running it right now will fail; wait until flannel has been applied (next step) before joining. If you lose the command, regenerate it with `kubeadm token create --print-join-command`:

```
kubeadm join 192.168.1.5:6443 --token zxk5dz.es4kdp9ahxydsk5i \
--discovery-token-ca-cert-hash sha256:a5f29decea10aecf53cb174b22120095dade42303b34b5d4f5ea7e0bf4fb00b8
```

Set up kube-flannel.yml:

```
# If the server cannot reach this URL, open it in a browser, download the file
# and copy it to the server instead
curl -k -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
```

Important: now go back to the worker nodes and run `kubeadm join`. With flannel in place, the join automatically generates /run/flannel/subnet.env, and the earlier "file not found" error no longer occurs.
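
Finally, once flannel is running and the workers have joined, every node should eventually report `Ready`. A sketch of the check (assumes a working kubeconfig; guarded so it degrades gracefully where kubectl or the cluster is unavailable):

```shell
# Confirm cluster health from the master (or any machine with a kubeconfig).
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes -o wide || echo "kubectl present but no cluster reachable"
else
  echo "kubectl not on PATH; run this on k8s-master"
fi
```
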