Building a Kubernetes Cluster on Raspberry Pi
I. Environment Preparation
1. My Raspberry Pi Hardware List
master: Raspberry Pi 4B, 4 GB RAM, 16 GB storage.
node1/node2: Raspberry Pi 4B, 8 GB RAM, 32 GB storage.
OS: Raspberry Pi OS 64-bit, GNU/Linux 11 (bullseye)
Note: all of the following steps are performed as the root user.
2. Basic Configuration
Perform these steps on all three machines.
2.1 Time Synchronization
The clocks of all three hosts must be kept in sync.
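One simple way to do this, assuming the stock systemd-timesyncd is present and the Pis can reach an NTP server:
timedatectl set-ntp true   # enable NTP synchronization via systemd-timesyncd
timedatectl status         # confirm "System clock synchronized: yes"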
2.2 Disable the Firewall
Raspberry Pi OS allows all traffic by default, so nothing needs to be done here.
2.3 Disable the Swap Partition
Temporarily: swapoff -a or dphys-swapfile swapoff
Permanently: nano /etc/dphys-swapfile

Reload the swap configuration and verify the swap size.
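A minimal sketch of the permanent change and the reload/check, assuming the stock dphys-swapfile setup:
CONF_SWAPSIZE=0            # set this in /etc/dphys-swapfile
dphys-swapfile setup       # regenerate the swap file from the config
dphys-swapfile swapoff     # turn swap off
free -h                    # the Swap line should now show 0B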

2.4 Add hosts Entries on All Three Hosts
192.168.31.85 master
192.168.31.70 pinode1
192.168.31.252 pinode2
2.5 Enable IP Forwarding (ip_forward)
Temporarily: echo "1" > /proc/sys/net/ipv4/ip_forward
Permanently: nano /etc/sysctl.conf

Run sysctl -p to make it take effect.
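The line to uncomment or add in /etc/sysctl.conf is:
net.ipv4.ip_forward=1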
2.6 Enable cgroup Support on the Raspberry Pi
https://www.cnblogs.com/zhangzhide/p/16414728.html # reference
Use method 2 from that article: edit /boot/cmdline.txt
Note: this step is critical; without it the control-plane node cannot be initialized and the worker nodes cannot join the cluster.
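For reference, and as my reading of that article rather than an exact quote: the edit is to append the cgroup parameters to the end of the single existing line in /boot/cmdline.txt (do not create a new line), then reboot:
cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1
reboot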
II. Software Installation
1. Configure the Kubernetes and Docker Repositories
https://mirrors.huaweicloud.com/ # use the Huawei Cloud mirrors
Huawei Cloud documents how to configure the Kubernetes and Docker repositories in detail, so I won't repeat it here.
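As a rough sketch only (follow the Huawei Cloud mirror page for the authoritative steps; the key URL below is an assumption on my part), the Kubernetes repository configuration looks something like this:
curl -fsSL https://repo.huaweicloud.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -   # key URL assumed
echo "deb https://repo.huaweicloud.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update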
If you run into a "public key is not available" error, you will need to import the signing key, as in the following example:
root@pinode1:~# apt-get update
Get:1 https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease [8,993 B]
Hit:2 http://security.debian.org/debian-security bullseye-security InRelease
Hit:3 http://deb.debian.org/debian bullseye InRelease
Hit:4 http://deb.debian.org/debian bullseye-updates InRelease
Err:1 https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
Hit:5 http://archive.raspberrypi.org/debian bullseye InRelease
Reading package lists... Done
W: GPG error: https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY B53DC80D13EDEF05
E: The repository 'https://repo.huaweicloud.com/kubernetes/apt kubernetes-xenial InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Fix:
root@pinode1:~# apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv B53DC80D13EDEF05
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
Executing: /tmp/apt-key-gpghome.dlnfR5rnS6/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv B53DC80D13EDEF05
gpg: key B53DC80D13EDEF05: 1 duplicate signature removed
gpg: key B53DC80D13EDEF05: public key "Rapture Automatic Signing Key (cloud-rapture-signing-key-2022-03-07-08_01_01.pub)" imported
gpg: Total number processed: 1
gpg: imported: 1
2. Install Packages on Both the Master and the Node Hosts
apt-get install kubelet kubeadm kubectl containerd.io -y
root@master:~ # kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:56:50Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/arm64"}
root@master:~ # containerd -version
containerd containerd.io 1.6.18 2456e983eb9e37e47538f59ea18f2043c9a73640
3. Generate the containerd Configuration File
containerd config default > /etc/containerd/config.toml
Two settings then need to be changed:
Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "registry.k8s.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.6"
The first enables the systemd cgroup driver; the second switches the pause image to a domestic registry so the required images can actually be pulled instead of timing out.
Then restart containerd: systemctl restart containerd
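If you prefer to make both edits non-interactively, something like the following should work against the config generated above:
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/k8sxio/pause:3.6#' /etc/containerd/config.toml
systemctl restart containerd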
4. Enable bridge-nf-call-iptables
nano /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
Apply the configuration:
root@pinode1:/etc/sysctl.d# sysctl -p /etc/sysctl.d/k8s.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
net.ipv4.ip_forward = 1
root@pinode1:/etc/sysctl.d# modprobe br_netfilter # run this if you hit the errors above
root@pinode1:/etc/sysctl.d# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
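Note that modprobe only loads br_netfilter for the current boot. To have it loaded automatically at every boot you can, for example, register it with systemd-modules-load:
echo br_netfilter > /etc/modules-load.d/k8s.conf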
5. Start and Enable the Services
systemctl start kubelet
systemctl enable kubelet
systemctl start containerd
systemctl enable containerd
III. Initialize the Kubernetes Control Plane
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.26.1 --apiserver-advertise-address 192.168.31.85 --apiserver-bind-port 6443 --pod-network-cidr 172.16.0.0/16
[init] Using Kubernetes version: v1.26.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.31.85]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.31.85 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.31.85 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.504546 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: joxngp.as9ns2ieyl257okk
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.31.85:6443 --token joxngp.as9ns2ieyl257okk \
--discovery-token-ca-cert-hash sha256:08381f2456b2a2a32bbdc93c932f87dd642e1d693509c5a0df1a9a141064da6a
If initialization fails with an error like the following:
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: nodes "master" not found
To see the stack trace of this error execute with --v=5 or higher
it is most likely caused by a kubelet environment left dirty by a previous run. When re-initializing on top of an old environment, do not try to fix this by manually deleting files under /etc/kubernetes/. Instead, run kubeadm reset, which clears all the stale files from the relevant directories, then restart the kubelet with systemctl restart kubelet. After that you can run kubeadm init again.
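A minimal recovery sequence (the -f flag simply skips the confirmation prompt):
kubeadm reset -f           # clear the state left by the previous init/join
systemctl restart kubelet  # restart the kubelet with a clean slate
# then re-run the same kubeadm init command shown above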
IV. Deploy the Calico Network
Official Calico deployment guide: https://docs.tigera.io/calico/3.25/getting-started/kubernetes/self-managed-onprem/onpremises
1. Download calico.yaml
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
2. Modify calico.yaml
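The typical change (and, as far as I can tell, the only one needed here) is to uncomment CALICO_IPV4POOL_CIDR in the calico-node DaemonSet and set it to the pod CIDR passed to kubeadm init:
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"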

3. Apply calico.yaml
root@master:~ # kubectl apply -f calico.yaml

V. Join the Worker Nodes to the Cluster
root@pinode1:~# kubeadm join 192.168.31.85:6443 --token joxngp.as9ns2ieyl257okk --discovery-token-ca-cert-hash sha256:08381f2456b2a2a32bbdc93c932f87dd642e1d693509c5a0df1a9a141064da6a
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

VI. View Cluster Node Information
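On the master, the nodes and the system pods can be checked with, for example:
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide
All three nodes should report Ready once the calico and coredns pods are up.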
