In the previous article we covered the overall architecture of a K8S cluster and the components that make up each node role. In this article we will actually build a cluster.

The components of a K8S cluster talk to each other over HTTP/HTTPS, and that traffic requires certificates. Since the cluster places no requirement on the issuing authority, we can sign these certificates ourselves and then configure them into each component. Doing all of that by hand is complex, so installing every component manually is slow and laborious. For this reason the K8S project ships a convenient deployment tool, kubeadm, which can stand up a cluster with very little effort. However, a kubeadm-deployed cluster runs every component except kubelet as a container managed by K8S itself. kubeadm also generates all the certificates automatically, but by default they are valid for only one year; if a certificate expires without being renewed, the cluster simply stops working. Fixing that means rotating the certificates, or even recompiling kubeadm from source to change the validity period. In production that is a miserable position to be in, and one slip means taking the blame.

For those reasons, this is something an operations engineer should work hard to avoid. To understand K8S better, it is worth taking the binary-installation route and deploying each component by hand. Walking through the deployment gives a much deeper picture of the cluster, which makes it far easier to locate the fault when something breaks. So this article deploys K8S from binaries, one piece at a time, signing our own certificates as we go.

Earlier articles only introduced the K8S components themselves. In production, K8S alone is not enough; other supporting software is needed, for example a private image registry to store our images. There are many open-source registries; this article uses Harbor, which is currently quite popular.

We also mentioned last time that Pod controllers manage pods automatically, which means pod addresses can change. That is why the Service resource exists: the kube-proxy component wires each Service to its backend pods, and users reach the pods by accessing the Service's name. Resolving that name is essentially a DNS lookup, so the cluster also needs a coredns component, which resolves Service names to addresses inside the cluster.
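To make this concrete, coredns answers queries of roughly the following shape (a sketch only; `cluster.local` is the default cluster domain and is configurable, and the example name and address are made up):

```
<service>.<namespace>.svc.cluster.local    ->  ClusterIP of the Service
nginx-svc.default.svc.cluster.local        ->  192.168.0.10   (illustrative)
```

We will come back to deploying coredns itself later in this series.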
I. Lab Environment Requirements

This lab uses five servers in total. All of them join the test domain host.com; a second domain, od.com, is used later for registry-related name resolution. The architecture diagram is as follows:

We place the Master and Node roles together on two physical servers, which gives us Master redundancy while still providing multiple Node instances. The OS and sizing requirements are:
Role | IP | OS | Spec | Docker
---|---|---|---|---
Proxy-1 | 10.4.7.11 | CentOS 7.7 64-bit | 1 vCPU, 1 GB RAM | No
Proxy-2 | 10.4.7.12 | CentOS 7.7 64-bit | 1 vCPU, 1 GB RAM | No
K8S-1 | 10.4.7.21 | CentOS 7.7 64-bit | 1 vCPU, 2 GB RAM | Yes
K8S-2 | 10.4.7.22 | CentOS 7.7 64-bit | 1 vCPU, 2 GB RAM | Yes
Utility | 10.4.7.200 | CentOS 7.7 64-bit | 1 vCPU, 2 GB RAM | Yes
Disable selinux and firewalld on every server. All servers need outbound Internet access and the following packages installed:

```
$ yum install epel-release
$ yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y
```
II. Deploying the Cluster's Dependency Services

1. Deploying the DNS Service

The DNS service resolves the cluster's internal domain names. For example, when we access the image registry from inside the cluster, the registry's name must resolve to an internal IP, which calls for an internal DNS server. (In production, these records could instead be hosted with your DNS provider.) Per the architecture diagram, we install DNS on 10.4.7.11:
```
# yum -y install bind
# cat /etc/named.conf
options {
        listen-on port 53 { 10.4.7.11; };   # listen on this host's IP; 0.0.0.0 also works
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };           # accept queries from any source
        forwarders      { 10.4.7.2; };      # upstream DNS; here the gateway address
        recursion yes;
        dnssec-enable no;                   # disable DNSSEC
        dnssec-validation no;               # disable DNSSEC
        bindkeys-file "/etc/named.root.key";
        managed-keys-directory "/var/named/dynamic";
        pid-file "/run/named/named.pid";
        session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
```
After editing named.conf, add the two zones:

```
# cat /etc/named.rfc1912.zones
...
zone "host.com" IN {
        type master;
        file "host.com.zone";
        allow-update { 10.4.7.11; };
};
zone "od.com" IN {
        type master;
        file "od.com.zone";
        allow-update { 10.4.7.11; };
};
```
Add the resolution records:

```
# cat /var/named/host.com.zone
$ORIGIN host.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.host.com. dnsadmin.host.com. (
                2019121501 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS      dns.host.com.
$TTL 60 ; 1 minute
dns             A       10.4.7.11
K8S7-11         A       10.4.7.11
K8S7-12         A       10.4.7.12
K8S7-21         A       10.4.7.21
K8S7-22         A       10.4.7.22
K8S7-200        A       10.4.7.200
# cat /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2019121501 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS      dns.od.com.
$TTL 60 ; 1 minute
dns             A       10.4.7.11
```
With the records in place, validate the configuration and start the DNS service:

```
# named-checkconf
# systemctl start named
# systemctl enable named
```
Once the service is up, point every server's resolver at 10.4.7.11:

```
# cat /etc/resolv.conf
search host.com
nameserver 10.4.7.11
[root@K8S7-11 ~]# nslookup k8s7-12.host.com
Server:         10.4.7.11
Address:        10.4.7.11#53

Name:   K8S7-12.host.com
Address: 10.4.7.12
```
The DNS service is now deployed.
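One caveat: /etc/resolv.conf is often rewritten by the network scripts on reboot. On CentOS 7 a common way to make the nameserver persistent is to set it in the interface configuration instead (a sketch, assuming the NIC is named eth0; adjust to your interface name):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0  (fragment)
DNS1=10.4.7.11
```

After a `systemctl restart network`, the init scripts regenerate /etc/resolv.conf with this nameserver.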
2. Deploying the Harbor Private Registry and Creating the CA Root Certificate

2.1 Creating the CA Certificate

Per the architecture diagram, certificates are created and signed on 10.4.7.200. First download the cfssl tool set and make it executable:
```
# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/bin/cfssl
# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/bin/cfssl-json
# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/bin/cfssl-certinfo
# chmod +x /usr/bin/cfssl*
```
Create the certificate directory and the CA CSR configuration file:

```
# mkdir /opt/certs
# cat ca-csr.json
{
    "CN": "Self signed CA",      # Common Name; browsers check it against the site name, and kube-apiserver extracts it as the request's user name
    "hosts": [
    ],
    "key": {
        "algo": "rsa",           # key algorithm, usually rsa
        "size": 2048
    },
    "names": [
        {
            "C": "CN",           # Country; CN is China
            "ST": "beijing",     # State / province
            "L": "beijing",      # Locality / city
            "O": "od",           # Organization / company; kube-apiserver extracts it as the request user's group
            "OU": "ops"          # Organization Unit / department
        }
    ],
    "ca": {
        "expiry": "175200h"      # CA validity: 175200 hours, i.e. 20 years
    }
}
```
Sign the root certificate in the /opt/certs/ directory:

```
# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca
2019/12/15 18:14:22 [INFO] generating a new CA key and certificate from CSR
2019/12/15 18:14:22 [INFO] generate received request
2019/12/15 18:14:22 [INFO] received CSR
2019/12/15 18:14:22 [INFO] generating key: rsa-2048
2019/12/15 18:14:22 [INFO] encoded CSR
2019/12/15 18:14:22 [INFO] signed certificate with serial number 22910580121075577296507048955953452575428349464
# ls
ca.csr  ca-csr.json  ca-key.pem  ca.pem
```
This CA certificate is our root certificate; every other certificate from here on will be signed with it.
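For intuition about what `cfssl gencert -initca` just produced, here is a rough openssl equivalent of the same step (an illustration only; cfssl is what this deployment actually uses, and the `demo-ca*` file names are made up):

```shell
# Generate a 2048-bit RSA key and a self-signed CA valid for ~20 years,
# mirroring the subject fields in ca-csr.json (illustrative file names).
openssl genrsa -out demo-ca-key.pem 2048
openssl req -x509 -new -key demo-ca-key.pem -days 7300 \
    -subj "/C=CN/ST=beijing/L=beijing/O=od/OU=ops/CN=Self signed CA" \
    -out demo-ca.pem
# Inspect the subject and expiry of the resulting root certificate
openssl x509 -in demo-ca.pem -noout -subject -enddate
```

The pair demo-ca.pem / demo-ca-key.pem corresponds to the ca.pem / ca-key.pem that cfssl wrote above.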
2.2 Preparing the Docker Environment

Harbor, our private image registry, will run on 10.4.7.200 and needs Docker underneath.

Install Docker:

```
# curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
```
Prepare the Docker daemon configuration and create the related directories:

```
# mkdir /etc/docker
# cd /etc/docker/
# vim daemon.json
# cat daemon.json
{
    "graph": "/data/docker",
    "storage-driver": "overlay2",
    "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
    "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
    "bip": "172.16.200.1/24",
    "exec-opts": ["native.cgroupdriver=systemd"],
    "live-restore": true
}
# mkdir -p /data/docker
```
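One caution: dockerd refuses to start when daemon.json is not valid JSON, and the resulting error message is easy to misread, so a quick syntax check before (re)starting the daemon is worthwhile. A self-contained sketch (it validates a copy under /tmp so the real file is untouched, and assumes python3 is available):

```shell
# Write a copy of the config and validate its JSON syntax
cat > /tmp/daemon.json <<'EOF'
{
  "graph": "/data/docker",
  "storage-driver": "overlay2",
  "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"],
  "registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com"],
  "bip": "172.16.200.1/24",
  "exec-opts": ["native.cgroupdriver=systemd"],
  "live-restore": true
}
EOF
# json.tool exits non-zero on a parse error, so a stray comma is caught here
python3 -m json.tool < /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```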
Start the Docker service:

```
# systemctl start docker
# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# docker info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
...
```
2.3 Deploying the Harbor Service

Download the Harbor package:

```
# mkdir /opt/src
# cd /opt/src/
# wget https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.3.tgz
```
Extract the package to /opt/ and point a symlink at the versioned directory; this makes future upgrades easy, since only the symlink has to move:

```
# tar -zxf harbor-offline-installer-v1.8.3.tgz -C /opt/
# cd ..
# ls
certs  containerd  harbor  rh  src
# mv harbor harbor_v1.8.3
# ln -s /opt/harbor_v1.8.3/ /opt/harbor
```
Edit the Harbor configuration file /opt/harbor/harbor.yml, change the settings shown below, and create the log directory:

```
# egrep -v '^#|^$|^ #' harbor.yml
hostname: harbor.od.com           # the registry address
http:
  port: 180                       # default is 80; changed to 180, nginx will proxy port 80
harbor_admin_password: Harbor12345
database:
  password: root123
data_volume: /data/harbor         # harbor storage volume
clair:
  updaters_interval: 12
  http_proxy:
  https_proxy:
  no_proxy: 127.0.0.1,localhost,core,registry
jobservice:
  max_job_workers: 10
chart:
  absolute_url: disabled
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /data/harbor/logs     # harbor log directory
_version: 1.8.0
# mkdir -p /data/harbor/logs
```
Install docker-compose:

```
# yum -y install docker-compose
```
Run the Harbor install script /opt/harbor/install.sh; when it succeeds, check the result with docker-compose ps:

```
# pwd
/opt/harbor
# ./install.sh
...
# docker-compose ps
       Name                     Command               State             Ports
--------------------------------------------------------------------------------------
harbor-core         /harbor/start.sh                 Up
harbor-db           /entrypoint.sh postgres          Up      5432/tcp
harbor-jobservice   /harbor/start.sh                 Up
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up      127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up      80/tcp
nginx               nginx -g daemon off;             Up      0.0.0.0:180->80/tcp
redis               docker-entrypoint.sh redis ...   Up      6379/tcp
registry            /entrypoint.sh /etc/regist ...   Up      5000/tcp
registryctl         /harbor/start.sh                 Up
```
Install nginx as the HTTP proxy in front of Harbor:

```
# yum -y install nginx
# vim /etc/nginx/conf.d/harbor.od.com.conf
# cat /etc/nginx/conf.d/harbor.od.com.conf
server {
    listen       80;
    server_name  harbor.od.com;
    client_max_body_size 1000m;

    location / {
        proxy_pass http://127.0.0.1:180;
    }
}
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# systemctl start nginx
# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
# netstat -tnlup | grep nginx
tcp        0      0 0.0.0.0:80      0.0.0.0:*       LISTEN      111260/nginx: maste
tcp6       0      0 :::80           :::*            LISTEN      111260/nginx: maste
```
On the self-hosted DNS at 10.4.7.11, add the record harbor.od.com ==> 10.4.7.200 (note that the zone serial is bumped so the change takes effect):

```
# vim /var/named/od.com.zone
# cat /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600        ; 10 minutes
@       IN SOA  dns.od.com. dnsadmin.od.com. (
                2019121502 ; serial
                10800      ; refresh (3 hours)
                900        ; retry (15 minutes)
                604800     ; expire (1 week)
                86400      ; minimum (1 day)
                )
        NS      dns.od.com.
$TTL 60 ; 1 minute
dns             A       10.4.7.11
harbor          A       10.4.7.200
# named-checkconf
# systemctl restart named
```
On your workstation, point DNS at 10.4.7.11 (or add a hosts entry), then open http://harbor.od.com in a browser. You should see the page below; Harbor's default username is admin and default password is Harbor12345. Log in:

(Screenshot: the Harbor login page.) Once this page opens and you can log in, Harbor has been deployed successfully.
3. Deploying the etcd Service

We deploy an etcd cluster on 10.4.7.12, 10.4.7.21 and 10.4.7.22 as the data store for K8S; running three members gives us high availability. Note that etcd clusters should run an odd number of members: writes require a strict majority (quorum), so growing to an even count adds no fault tolerance over the next smaller odd count, it only adds one more thing that can fail.
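The preference for odd member counts follows directly from etcd's quorum rule: a write commits only when a strict majority of members accepts it, so quorum = floor(n/2) + 1. A quick sketch of the arithmetic:

```shell
# Quorum and fault tolerance for an n-member etcd cluster:
#   quorum = floor(n/2) + 1, tolerated failures = n - quorum
for n in 1 2 3 4 5; do
    quorum=$(( n / 2 + 1 ))
    echo "$n members: quorum=$quorum, tolerates $(( n - quorum )) failure(s)"
done
```

Three members and four members both tolerate exactly one failure, which is why the fourth member buys nothing.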
3.1 Signing the Certificate

In our architecture the etcd cluster has three members, and they must keep their data consistent, so every member talks to every other. A given node therefore acts as a server toward some peers and as a client toward others, which means the certificate used for etcd's peer traffic must be valid for both server authentication and client authentication. We sign it with the self-signed CA created above. First prepare a signing policy file; the CA will use these profiles to produce concrete certificates:

```
# pwd
/opt/certs
# vim ca-config.json
# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {                  # the certificate types this CA can sign
            "server": {                # a server-side certificate
                "expiry": "175200h",   # certificate validity
                "usages": [
                    "signing",         # the certificate may sign other certificates
                    "key encipherment",
                    "server auth"      # clients can verify the server's certificate against this CA
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"      # servers can verify the client's certificate against this CA
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
```
Create the certificate signing request (CSR) file, which we use to ask the CA for the certificate:

```
# cat etcd-peer-csr.json
{
    "CN": "k8s-etcd",
    "hosts": [          # where the certificate is valid: only these IPs may present it
        "10.4.7.11",
        "10.4.7.12",
        "10.4.7.21",
        "10.4.7.22"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
```
Now sign the certificate. As discussed above, we need one that can authenticate both the client and the server side, so we sign with -profile=peer, the third profile in ca-config.json:

```
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2019/12/16 14:22:19 [INFO] generate received request
2019/12/16 14:22:19 [INFO] received CSR
2019/12/16 14:22:19 [INFO] generating key: rsa-2048
2019/12/16 14:22:20 [INFO] encoded CSR
2019/12/16 14:22:20 [INFO] signed certificate with serial number 428525440842853412781261514592315218900681388961
2019/12/16 14:22:20 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
# ll -h etcd-peer*
-rw-r--r-- 1 root root 1.1K Dec 16 14:22 etcd-peer.csr
-rw-r--r-- 1 root root  363 Dec 16 14:17 etcd-peer-csr.json
-rw------- 1 root root 1.7K Dec 16 14:22 etcd-peer-key.pem
-rw-r--r-- 1 root root 1.5K Dec 16 14:22 etcd-peer.pem
```
The etcd-peer certificate has now been issued.
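For intuition, here is what "one certificate usable for both server auth and client auth" means at the X.509 level: the peer profile puts both serverAuth and clientAuth into the certificate's extended key usage. A self-contained openssl sketch of the same idea (the demo-* file names are made up; cfssl is what this deployment actually uses):

```shell
# Throwaway CA for the demonstration
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
        -subj "/CN=demo-ca" -days 365 -out demo-ca.crt
# Key and CSR for the peer certificate
openssl req -newkey rsa:2048 -nodes -keyout demo-peer.key \
        -subj "/CN=k8s-etcd" -out demo-peer.csr
# Sign it with BOTH serverAuth and clientAuth in extendedKeyUsage,
# plus the member IPs as subjectAltName -- the "peer" idea
printf 'extendedKeyUsage=serverAuth,clientAuth\nsubjectAltName=IP:10.4.7.12,IP:10.4.7.21,IP:10.4.7.22\n' > demo-ext.cnf
openssl x509 -req -in demo-peer.csr -CA demo-ca.crt -CAkey demo-ca.key \
        -CAcreateserial -days 365 -extfile demo-ext.cnf -out demo-peer.crt
# Both roles now appear in the certificate
openssl x509 -in demo-peer.crt -noout -text | grep -A1 'Extended Key Usage'
```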
3.2 Installing etcd on 10.4.7.12

The architecture has three etcd nodes, and the installation is identical on each; only a few configuration values differ per node. We will use 10.4.7.12 to walk through the process in detail.
Create the etcd user:

```
# useradd -s /sbin/nologin -M etcd
```
Download and extract etcd:

```
# mkdir /opt/src
# cd /opt/src/
# wget https://github.com/etcd-io/etcd/releases/download/v3.1.20/etcd-v3.1.20-linux-amd64.tar.gz
# tar -zxf etcd-v3.1.20-linux-amd64.tar.gz -C /opt/
# cd /opt/
# mv etcd-v3.1.20-linux-amd64/ etcd-v3.1.20
# ln -s /opt/etcd-v3.1.20/ /opt/etcd
```
Create the etcd directories:

```
# mkdir -p /opt/etcd/certs /data/etcd /data/logs/etcd-server
```
Copy three certificate files from 10.4.7.200 into /opt/etcd/certs: ca.pem, etcd-peer.pem and etcd-peer-key.pem. Note that the private key etcd-peer-key.pem keeps mode 600:

```
# pwd
/opt/etcd/certs
# scp k8s7-200:/opt/certs/ca.pem ./
# scp k8s7-200:/opt/certs/etcd-peer.pem ./
# scp k8s7-200:/opt/certs/etcd-peer-key.pem ./
# ll -h
total 12K
-rw-r--r-- 1 root root 1.4K Dec 16 15:04 ca.pem
-rw------- 1 root root 1.7K Dec 16 15:05 etcd-peer-key.pem
-rw-r--r-- 1 root root 1.5K Dec 16 15:04 etcd-peer.pem
```
Create the etcd startup script:

```
# pwd
/opt/etcd
# vim /opt/etcd/etcd-server-startup.sh
# cat etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-7-12 \
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.12:2380 \
       --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.12:2380 \
       --advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \
       --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
```
Make the startup script executable (supervisor will exec it directly) and hand the etcd directories over to the etcd user:

```
# chmod +x /opt/etcd/etcd-server-startup.sh
# chown -R etcd.etcd /opt/etcd-v3.1.20/
# chown -R etcd.etcd /data/etcd/
# chown -R etcd.etcd /data/logs/etcd-server/
```
At this point we could already start etcd, but if the process died, nothing would bring it back up. So instead of running the startup script directly, we hand the process to supervisor, which will restart it automatically whenever it exits.
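Supervisor is one way to get that auto-restart behavior; on a systemd host, a plain unit file achieves the same thing without an extra package. A sketch of a roughly equivalent unit, for comparison only (the file name and Restart settings are assumptions, not part of this deployment, which sticks with supervisor):

```
# /etc/systemd/system/etcd.service (sketch, not used in this article)
[Unit]
Description=etcd server
After=network.target

[Service]
User=etcd
WorkingDirectory=/opt/etcd
ExecStart=/opt/etcd/etcd-server-startup.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```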
Install and start supervisor:

```
# yum -y install supervisor
# systemctl start supervisord
# systemctl enable supervisord
```
Create the supervisor configuration that manages the etcd service:

```
# cat /etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-12]
command=/opt/etcd/etcd-server-startup.sh              ; the program (relative uses PATH, can take args)
numprocs=1                                            ; number of processes copies to start (def 1)
directory=/opt/etcd                                   ; directory to cwd to before exec (def no cwd)
autostart=true                                        ; start at supervisord start (default: true)
autorestart=true                                      ; restart at unexpected quit (default: true)
startsecs=30                                          ; number of secs prog must stay running (def. 1)
startretries=3                                        ; max # of serial start failures (default 3)
exitcodes=0,2                                         ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                       ; signal used to kill process (default TERM)
stopwaitsecs=10                                       ; max num secs to wait b4 SIGKILL (default 10)
user=etcd                                             ; setuid to this UNIX account to run the program
redirect_stderr=true                                  ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                          ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=4                              ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                           ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                           ; emit events on stdout writes (default false)
```
Register the new program with supervisor and check its status. Because startsecs is 30, wait about 30 seconds after updating; a state of RUNNING means the process is healthy:

```
# supervisorctl update
etcd-server-7-12: added process group
# supervisorctl status
etcd-server-7-12                 RUNNING   pid 40471, uptime 0:01:05
```
etcd on 10.4.7.12 is now deployed. Next, repeat the same procedure on 10.4.7.21 and 10.4.7.22, taking care to adjust the etcd startup script and the supervisor configuration on each node. Using 10.4.7.21 as the example (the "# change this" markers are annotations for this article, not part of the actual files):

```
# cat /opt/etcd/etcd-server-startup.sh
#!/bin/sh
./etcd --name etcd-server-7-21 \          # change this
       --data-dir /data/etcd/etcd-server \
       --listen-peer-urls https://10.4.7.21:2380 \          # change this
       --listen-client-urls https://10.4.7.21:2379,http://127.0.0.1:2379 \          # change this
       --quota-backend-bytes 8000000000 \
       --initial-advertise-peer-urls https://10.4.7.21:2380 \          # change this
       --advertise-client-urls https://10.4.7.21:2379,http://127.0.0.1:2379 \          # change this
       --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \
       --ca-file ./certs/ca.pem \
       --cert-file ./certs/etcd-peer.pem \
       --key-file ./certs/etcd-peer-key.pem \
       --client-cert-auth \
       --trusted-ca-file ./certs/ca.pem \
       --peer-ca-file ./certs/ca.pem \
       --peer-cert-file ./certs/etcd-peer.pem \
       --peer-key-file ./certs/etcd-peer-key.pem \
       --peer-client-cert-auth \
       --peer-trusted-ca-file ./certs/ca.pem \
       --log-output stdout
# cat /etc/supervisord.d/etcd-server.ini
[program:etcd-server-7-21]          ; change this
...
```
Once all three etcd members are running normally, check the health of the cluster:

```
# ./etcdctl cluster-health
member 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379
member 5a0ef2a004fc4349 is healthy: got healthy result from http://127.0.0.1:2379
member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
# ./etcdctl member list
988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false
5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=false
f4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true
```
We can see that all three etcd members are working, and the current leader is 10.4.7.12. Our etcd cluster is complete. Next we will begin building the K8S components proper; the installation of the remaining services is covered in the next installment.