
How to deploy a highly available k8s 1.9.3 environment with kubeadm


Overall plan

Machine plan

name       IP          roles                                   docker network
k8snode01  10.90.11.1  etcd, k8s-master, k8s-node, keepalived  172.18.1.0/24
k8snode02  10.90.11.2  etcd, k8s-master, k8s-node, keepalived  172.18.2.0/24
k8snode03  10.90.11.3  etcd, k8s-master, k8s-node, keepalived  172.18.3.0/24

The three machines above run three masters in a highly available configuration; since this is a test environment, they also double as worker nodes.
The three masters share a single virtual IP: 10.90.11.220.

Core architecture

As shown in the architecture diagram, k8s high availability has two critical pieces: the apiserver (master) and etcd.

  • apiserver (master): (needs HA) the core of the cluster: the cluster API endpoint and the hub through which all components communicate; it also handles cluster security control.
  • etcd: (needs HA) the cluster's data store, holding all configuration and state; it is critical, and if its data is lost the cluster cannot be recovered, so building an HA cluster starts with an HA etcd cluster.
  • kube-scheduler: the scheduler (elects a leader internally), the scheduling center for the cluster's pods; with a default kubeadm install the --leader-elect flag is already set to true, so only one kube-scheduler is active across the masters.
  • kube-controller-manager: the controller (elects a leader internally), the cluster state manager; when the cluster state differs from the desired state, the controller-manager works to restore it, for example creating a new pod when one dies so the replica set returns to its desired count; with a default kubeadm install --leader-elect is already true, so only one kube-controller-manager is active across the masters (a way to check the current leader is sketched right after this list).
  • kubelet: the node agent; registers the node with the apiserver.
  • kube-proxy: runs one per node and forwards traffic from service VIPs to the endpoint pods; older versions relied mainly on iptables rules, while 1.9 adds an IPVS (LVS) based implementation.
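
Since only one kube-scheduler and one kube-controller-manager are active at a time, it can be useful to see which master currently holds the lock. A minimal check, assuming the endpoints-based leader election that kubeadm configures by default in this release (run it once the cluster is up):

# the holderIdentity field names the master that currently holds the leader lock
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity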

Pre-deployment preparation (every node)

  • Hostname: every node must have a unique hostname.
  • Disable swap: swapoff -a (see the note after the sysctl block for making this persistent).
  • Kernel tuning:
cat <<EOF >>  /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic = 10
kernel.panic_on_oops = 1
kernel.pid_max = 4194303
vm.max_map_count = 655350
fs.aio-max-nr = 524288
fs.file-max = 6590202

net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.conf.lo.arp_announce=2
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.arp_announce=2
net.ipv4.conf.all.arp_announce=2
net.ipv4.ip_forward = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl -p /etc/sysctl.conf
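
Note that swapoff -a only lasts until the next reboot. A minimal way to make it permanent, assuming swap is mounted via /etc/fstab, is to comment out the swap entries:

# comment out every fstab line that mounts swap (a backup is written to /etc/fstab.bak)
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab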

Configure keepalived (every master)

  • VIP + master: high availability is achieved by moving a virtual IP between the masters (the conventional approach).
  • The three master nodes run independently and do not interfere with one another. kube-apiserver is the core entry point and can be made highly available with keepalived; kubeadm join does not yet support joining through a load balancer.

Installation

apt-get install -y keepalived

Configure the virtual IP

cat >/etc/keepalived/keepalived.conf  <<EOF
global_defs {
  router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    # change this to the local node's IP
    script "curl -k https://10.90.11.1:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id 61
    # the primary node gets the highest priority; lower it on each subsequent node
    priority 100
    advert_int 1
    # change this to the local node's IP
    mcast_src_ip 10.90.11.1
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 666888
    }
    unicast_peer {
        # comment out the local node's own IP
        #10.90.11.1
        10.90.11.2
        10.90.11.3
    }
    virtual_ipaddress {
        10.90.11.220/24
    }
    track_script {
        CheckK8sMaster
    }

}
EOF

Enable at boot

systemctl enable keepalived && systemctl restart keepalived
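
The VIP is only held once the tracked health check passes, i.e. after the first master has been initialized further below. A quick hedged check at that point, using the interface name from the config above:

# the node that currently owns the VIP lists it on ens160
ip addr show ens160 | grep 10.90.11.220
# the same probe the vrrp_script uses; it succeeds once a master serves on 6443
curl -k https://10.90.11.220:6443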

Preparing the highly available etcd cluster

Prepare the certificates

Install the certificate management tools

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
sudo mv cfssl_linux-amd64 /usr/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
sudo mv cfssljson_linux-amd64 /usr/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
sudo mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
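
A quick sanity check that the tools are installed and on the PATH:

cfssl version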

Generate the TLS key and certificate for etcd

  • To secure communication, traffic between clients (such as etcdctl) and the etcd cluster, as well as between etcd members, must be TLS-encrypted; this section creates the certificates and private keys etcd needs.
  • Create the CA configuration file:
mkdir ssl && cd ssl
cfssl print-defaults csr > csr.json
cat > config.json <<EOF
{
"signing": {
    "default": {
      "expiry": "8760h"
      },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "8760h"
      }
    }
}
}
EOF
  • config.json (often named ca-config.json): multiple profiles can be defined, each with its own expiry, usage scenarios, and other parameters; a specific profile is then referenced when signing certificates.
  • signing: the certificate can be used to sign other certificates; the generated ca.pem will carry CA=TRUE.
  • server auth: a client may use this CA to verify certificates presented by servers.
  • client auth: a server may use this CA to verify certificates presented by clients.

Create the CA certificate signing request

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF
  • "CN": Common Name; kube-apiserver extracts this field from a certificate as the requesting user name (User Name); browsers use it to check whether a site is legitimate.
  • "O": Organization; kube-apiserver extracts this field as the group (Group) the requesting user belongs to.

Create the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
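
This produces ca.pem, ca-key.pem, and ca.csr in the current directory. A quick way to inspect the new CA (requires openssl):

ls ca.pem ca-key.pem ca.csr
# confirm the subject matches the CSR above and note the validity period
openssl x509 -in ca.pem -noout -subject -dates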

Create the etcd certificate signing request

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.90.11.1",
    "10.90.11.2",
    "10.90.11.3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
  • The hosts field lists the etcd node IPs authorized to use this certificate.
  • Every node's IP must be included, or alternatively each machine can request its own certificate for its own IP.

Generate the etcd certificate and private key

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=config.json \
  -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

Copy the certificates into the designated directory on every server

mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem  ca.pem /etc/etcd/ssl/

The certificates above must likewise be copied to every etcd node; a sketch follows.
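
A minimal sketch for distributing them from the node where they were generated, assuming root SSH access to the other two machines:

for ip in 10.90.11.2 10.90.11.3; do
  ssh root@${ip} "mkdir -p /etc/etcd/ssl"
  scp etcd.pem etcd-key.pem ca.pem root@${ip}:/etc/etcd/ssl/
done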

Install etcd

Set the deployment environment variables

export NODE_NAME="etcd-host1" # name of the machine being deployed (any value, as long as each machine is distinct)
export NODE_IP="10.90.11.1" # IP of the machine being deployed
export NODE_IPS="10.90.11.1 10.90.11.2 10.90.11.3" # IPs of all machines in the etcd cluster
# IPs and ports used for communication between etcd cluster members
export ETCD_NODES="etcd-host1=https://10.90.11.1:2380,etcd-host2=https://10.90.11.2:2380,etcd-host3=https://10.90.11.3:2380"
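
NODE_NAME and NODE_IP are per-machine; for example, on the second node (values taken from the machine plan above) they would be:

export NODE_NAME="etcd-host2"
export NODE_IP="10.90.11.2"
# NODE_IPS and ETCD_NODES stay the same on every node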

Prepare the binaries

wget https://github.com/coreos/etcd/releases/download/v3.2.9/etcd-v3.2.9-linux-amd64.tar.gz
tar xf etcd-v3.2.9-linux-amd64.tar.gz 
cp etcd-v3.2.9-linux-amd64/etcd* /usr/bin/
chmod +x /usr/bin/etcd*

Create the etcd systemd unit file

mkdir -p /var/lib/etcd  # the working directory must be created before starting the service
cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/etcd/ssl/etcd.pem \\
  --key-file=/etc/etcd/ssl/etcd-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \\
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • etcd's working directory and data directory are set to /var/lib/etcd, which must be created before starting the service.
  • To secure communication, specify etcd's own key pair (cert-file and key-file), the peer key pair and CA certificate (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA used to verify client certificates (trusted-ca-file).
  • When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.

Start the etcd service

mv etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

Verify the service and configure the etcdctl tool

etcdctl \
  --endpoints=https://${NODE_IP}:2379  \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  cluster-health

To make etcdctl easier to use later, we can create an alias:

cat > /etc/profile.d/etcd.sh <<EOF
alias etcdctl='etcdctl --ca-file=/etc/etcd/ssl/ca.pem --cert-file=/etc/etcd/ssl/etcd.pem --key-file=/etc/etcd/ssl/etcd-key.pem'
EOF
# the alias is loaded automatically on future logins
. /etc/profile.d/etcd.sh # source it manually for the current session
# verify
etcdctl member list

Output like the following indicates the service is healthy:

root@k8snode01:~# etcdctl cluster-health
member 22c5ca106aec5dea is healthy: got healthy result from https://10.90.11.1:2379
member 5c306cc6289025b3 is healthy: got healthy result from https://10.90.11.3:2379
member bfdf492621696447 is healthy: got healthy result from https://10.90.11.2:2379
cluster is healthy
root@k8snode01:~# etcdctl member list
22c5ca106aec5dea: name=etcd-host1 peerURLs=https://10.90.11.1:2380 clientURLs=https://10.90.11.1:2379 isLeader=true
5c306cc6289025b3: name=etcd-host3 peerURLs=https://10.90.11.3:2380 clientURLs=https://10.90.11.3:2379 isLeader=false
bfdf492621696447: name=etcd-host2 peerURLs=https://10.90.11.2:2380 clientURLs=https://10.90.11.2:2379 isLeader=false

Install the k8s master

Install and configure docker

Install docker

apt-get install docker.io

Configure docker

cat > /etc/default/docker <<EOF
DOCKER_OPTS="--insecure-registry registry.xxx.com:5000 -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.18.1.1/24 --mtu=1450"
EOF
  • --insecure-registry registry.xxx.com:5000 specifies the address of the private registry; --bip binds the docker bridge to this node's subnet from the machine plan, so adjust it on each machine (see the example below).
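
For example, on k8snode02 the file would look like this (a sketch; the registry address is just the placeholder used above):

# /etc/default/docker on k8snode02 (subnet 172.18.2.0/24 from the machine plan)
DOCKER_OPTS="--insecure-registry registry.xxx.com:5000 -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.18.2.1/24 --mtu=1450"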

Restart for the changes to take effect

systemctl enable docker
systemctl restart docker

Configure the firewall

Since docker 1.13 the default policy of the iptables FORWARD chain is DROP, so run:

iptables -P FORWARD ACCEPT

Also add this command to rc.local before the exit statement so it survives reboots; a sketch follows.
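
A minimal way to do that, assuming the stock Ubuntu /etc/rc.local that ends with "exit 0":

# insert the rule just before the final "exit 0" so it is reapplied at boot
sed -i '/^exit 0/i iptables -P FORWARD ACCEPT' /etc/rc.local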

Install kubeadm

Prepare the apt repository (if the repository is unreachable from your network, import offline packages instead)

echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >/etc/apt/sources.list.d/kubernetes.list
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3746C208A7317B0F
apt-get update

Install kubeadm

apt-get install kubeadm -y

Installing this package automatically pulls in the CNI plugins as well.
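
Since the cluster below targets 1.9.3, it may be safer to pin the packages to that release rather than take whatever is newest; the exact Debian revision suffix is an assumption about the repository's package naming:

apt-get install -y kubelet=1.9.3-00 kubeadm=1.9.3-00 kubectl=1.9.3-00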

Configure the master

Create the configuration file

cd /etc/kubernetes/
cat > config.yaml  <<EOF
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://10.90.11.1:2379
  - https://10.90.11.2:2379
  - https://10.90.11.3:2379
  caFile: /etc/etcd/ssl/ca.pem 
  certFile: /etc/etcd/ssl/etcd.pem 
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 172.28.0.0/16
kubernetesVersion: 1.9.3
api:
  advertiseAddress: "10.90.11.220"
token: "b99a00.a144ef80536d4324"
tokenTTL: "0s"
apiServerCertSANs:
- etcd-host1
- etcd-host2
- etcd-host3
- 10.90.11.1
- 10.90.11.2
- 10.90.11.3
- 10.90.11.220
featureGates:
  CoreDNS: true
EOF

Initialize the first master

kubeadm init --config config.yaml 

A successful run should end like this:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token b99a00.a144ef80536d4324 10.90.11.220:6443 --discovery-token-ca-cert-hash sha256:2dbc03d97deb18a1850e87a354344bf9aac5290f7d648f3885febcb2bb19e535
  • Record the last line; it is the command to run whenever a new node joins the cluster later.
    Then, as prompted, run:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify the result:

root@k8snode01:/etc/kubernetes# kubectl get nodes
NAME        STATUS    ROLES    AGE      VERSION
k8snode01  NotReady  master    2m        v1.9.3
  • One node already exists, but it is NotReady because the pod network has not been configured yet.
  • keepalived is also healthy at this point, and the virtual IP has been assigned to the current node.

Configure the CNI network: kube-router

wget https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml
kubectl apply -f kubeadm-kuberouter.yaml

kube-router is an integrated Kubernetes networking solution based on IPVS/LVS that combines load balancing, a firewall, and container networking.
Check the node status again; the cluster is now ready:

root@k8snode01:~# kubectl get nodes
NAME        STATUS    ROLES    AGE      VERSION
k8snode01  Ready    master    2h        v1.9.3
root@k8snode01:~# kubectl get cs
NAME                STATUS    MESSAGE              ERROR
scheduler            Healthy  ok                  
controller-manager  Healthy  ok                  
etcd-1              Healthy  {"health": "true"}  
etcd-0              Healthy  {"health": "true"}  
etcd-2              Healthy  {"health": "true"}  
root@k8snode01:~# kubectl get po --all-namespaces
NAMESPACE    NAME                                READY    STATUS    RESTARTS  AGE
kube-system  coredns-65dcdb4cf-86x5r            1/1      Running  0          2h
kube-system  kube-apiserver-k8snode01            1/1      Running  0          2h
kube-system  kube-controller-manager-k8snode01  1/1      Running  0          2h
kube-system  kube-proxy-brfnw                    1/1      Running  0          2h
kube-system  kube-router-kcfw9                  1/1      Running  0          2m
kube-system  kube-scheduler-k8snode01            1/1      Running  0          2h

Deploy the other master nodes

Copy /etc/kubernetes/pki and config.yaml from the first master to the other masters:

scp -r /etc/kubernetes/pki root@10.90.11.2:/etc/kubernetes/
scp -r /etc/kubernetes/config.yaml root@10.90.11.2:/etc/kubernetes/
scp -r /etc/kubernetes/pki root@10.90.11.3:/etc/kubernetes/
scp -r /etc/kubernetes/config.yaml root@10.90.11.3:/etc/kubernetes/

Run the initialization command on the other master nodes

kubeadm init --config /etc/kubernetes/config.yaml 

After a short wait, all three masters show as ready:

root@k8snode01:~# kubectl get nodes
NAME        STATUS    ROLES    AGE      VERSION
k8snode01  Ready    master    2h        v1.9.3
k8snode02  Ready    master    2m        v1.9.3
k8snode03  Ready    master    1m        v1.9.3

For testing, make the masters schedulable

By default, to keep the masters secure, application pods are not scheduled onto them. You can remove this restriction by running:

kubectl taint nodes --all node-role.kubernetes.io/master-
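
A quick way to confirm the taint was removed (each master should report Taints: <none>):

kubectl describe nodes | grep Taints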