Host environment:
vi /etc/hosts
192.168.60.153 c7-k8s01 c7-k8s01.tw.company #master & docker
192.168.60.154 c7-k8s02 c7-k8s02.tw.company
192.168.60.155 c7-k8s03 c7-k8s03.tw.company
During testing I added forward and reverse records on the DNS server and let the DHCP server hand out the IPs, so I also added the following to ifcfg-eth0:
DOMAIN=tw.company
SEARCH=tw.company
That way there is no need to add host-to-IP mappings in the hosts file.
Steps performed on the master host:
Create a temporary folder for the key files:
mkdir kube-key
cd kube-key
1. Create the Kubernetes cluster root CA certificate:
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 10000 -out ca.pem -subj "/CN=kube-ca"
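As a quick optional sanity check, you can display the subject and validity period of the CA you just created:
openssl x509 -in ca.pem -noout -subject -dates
It should report CN=kube-ca with roughly a 10000-day validity window.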
2. Create an OpenSSL configuration file, openssl.cnf:
vi openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = c7-k8s01.tw.company
IP.1 = 192.168.60.153
3. Generate the apiserver key and CSR:
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
4. Sign apiserver.csr with the root CA key and certificate to produce the apiserver certificate (apiserver.pem):
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
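Before moving on, it is worth confirming that the signed apiserver certificate chains back to the CA and that the SAN entries from openssl.cnf made it in:
openssl verify -CAfile ca.pem apiserver.pem
openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"
The first command should print "apiserver.pem: OK", and the second should list c7-k8s01.tw.company and 192.168.60.153.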
5. On the master host, create an OpenSSL configuration file for every node and generate each node's key & CSR:
openssl-c7-k8s01.cnf for c7-k8s01 (c7-k8s01 runs the master and the docker service on the same host, so it needs the node steps below as well):
[root@c7-k8s01 kube-key]# mkdir c7-k8s01 c7-k8s02 c7-k8s03
[root@c7-k8s01 kube-key]# vi c7-k8s01/openssl-c7-k8s01.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = c7-k8s01.tw.company
IP.1 = 192.168.60.153
c7-k8s01 key & csr:
[root@c7-k8s01 kube-key]# openssl genrsa -out c7-k8s01/c7-k8s01-key.pem 2048
[root@c7-k8s01 kube-key]# openssl req -new -key c7-k8s01/c7-k8s01-key.pem -out c7-k8s01/c7-k8s01.csr -subj "/CN=c7-k8s01.tw.company" -config c7-k8s01/openssl-c7-k8s01.cnf
[root@c7-k8s01 kube-key]# ls c7-k8s01
c7-k8s01.csr c7-k8s01-key.pem openssl-c7-k8s01.cnf
Sign the c7-k8s01 CSR with the root CA to produce the c7-k8s01 certificate:
[root@c7-k8s01 kube-key]# openssl x509 -req -in c7-k8s01/c7-k8s01.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out c7-k8s01/c7-k8s01.pem -days 365 -extensions v3_req -extfile c7-k8s01/openssl-c7-k8s01.cnf
[root@c7-k8s01 kube-key]# ls c7-k8s01
c7-k8s01.csr c7-k8s01-key.pem c7-k8s01.pem openssl-c7-k8s01.cnf
(Repeat step 5 to produce the certificate files for the other nodes.)
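If you prefer not to hand-edit a cnf file for every node, the loop below is one way to do step 5 for the remaining nodes; it simply rewrites the c7-k8s01 cnf with each node's hostname and IP (the host/IP pairs are this lab's values, adjust to yours):
for NODE in "c7-k8s02 192.168.60.154" "c7-k8s03 192.168.60.155"; do
  set -- $NODE; HOST=$1; IP=$2
  # derive the per-node cnf from the c7-k8s01 template, then key -> csr -> signed cert
  sed -e "s/c7-k8s01.tw.company/$HOST.tw.company/" -e "s/192.168.60.153/$IP/" \
      c7-k8s01/openssl-c7-k8s01.cnf > $HOST/openssl-$HOST.cnf
  openssl genrsa -out $HOST/$HOST-key.pem 2048
  openssl req -new -key $HOST/$HOST-key.pem -out $HOST/$HOST.csr \
      -subj "/CN=$HOST.tw.company" -config $HOST/openssl-$HOST.cnf
  openssl x509 -req -in $HOST/$HOST.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out $HOST/$HOST.pem -days 365 -extensions v3_req -extfile $HOST/openssl-$HOST.cnf
done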
6. At this point the Kubernetes cluster master should have SSL certificate files like the following:
[root@c7-k8s01 kube-key]# ls -lR
-rw-r--r--. 1 root root 17 Nov 15 22:39 ca.srl
./c7-k8s01:
-rw-r--r--. 1 root root 261 Nov 15 22:32 openssl-c7-k8s01.cnf
./c7-k8s02:
-rw-r--r--. 1 root root 261 Nov 15 22:32 openssl-c7-k8s02.cnf
./c7-k8s03:
-rw-r--r--. 1 root root 260 Nov 15 22:36 openssl-c7-k8s03.cnf
Configure the master node and the other nodes to use the PEM-format SSL certificates:
Master Node (c7-k8s01):
7. Copy the generated PEM-format SSL files to the /etc/kubernetes/ssl folder:
[root@c7-k8s01 kube-key]# mkdir -p /etc/kubernetes/ssl
[root@c7-k8s01 kube-key]# cd /etc/kubernetes/ssl
[root@c7-k8s01 ssl]# cp /root/kube-key/ca.pem .
[root@c7-k8s01 ssl]# cp /root/kube-key/apiserver.pem .
[root@c7-k8s01 ssl]# cp /root/kube-key/apiserver-key.pem .
[root@c7-k8s01 ssl]# ls -l
8. Create a kubeconfig file in YAML format:
[root@c7-k8s01 ~]# vi /etc/kubernetes/cm-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/apiserver.pem
    client-key: /etc/kubernetes/ssl/apiserver-key.pem
contexts:
- context:
    cluster: local
    user: controllermanager
  name: kubelet-context
current-context: kubelet-context
9. Modify the controller-manager, scheduler and apiserver service settings on the master node:
(The controller-manager, scheduler and apiserver run on the same host and talk to each other over the insecure port 8080.)
[root@c7-k8s01 ssl]# vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
[root@c7-k8s01 ssl]# vi /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--bind-address=0.0.0.0 --insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--secure-port=6443 --insecure-port=8080"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS="--client-ca-file=/etc/kubernetes/ssl/ca.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem"
[root@c7-k8s01 ssl]# vi /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --root-ca-file=/etc/kubernetes/ssl/ca.pem --master=http://192.168.60.153:8080 --kubeconfig=/etc/kubernetes/cm-kubeconfig.yaml"
[root@c7-k8s01 ssl]# vi /etc/kubernetes/scheduler
Restart the related services:
for SERVICE in kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICE
systemctl is-active $SERVICE
done
10. Check the status of all the services:
for SERVICE in kube-controller-manager kube-scheduler kube-apiserver; do
systemctl status $SERVICE -l
done
11. Verify that the master node's SSL certificates are configured correctly:
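There is more than one way to check this; a minimal test from the master itself is to call the secure port with the CA plus the apiserver key pair (the apiserver certificate can double as a client certificate here, since it is signed by the same CA that --client-ca-file points to):
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/apiserver.pem \
     --key /etc/kubernetes/ssl/apiserver-key.pem \
     https://192.168.60.153:6443/version
A JSON version block means the TLS setup is fine; a certificate error here usually means the SAN list in openssl.cnf does not cover the address you connected to.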
The steps below are performed on the nodes (c7-k8s01 runs the master and docker on the same host, so it needs them too; then c7-k8s02 and c7-k8s03):
12. Copy the generated PEM-format SSL files to the /etc/kubernetes/ssl folder:
Because the master node also runs the docker service on the same host, it needs the following as well:
[root@c7-k8s01 ~]# cd /etc/kubernetes/ssl
[root@c7-k8s01 ssl]# cp /root/kube-key/c7-k8s01/*.pem .
[root@c7-k8s01 ssl]# ln -s c7-k8s01.pem worker.pem
[root@c7-k8s01 ssl]# ln -s c7-k8s01-key.pem worker-key.pem
[root@c7-k8s01 ssl]# ls
apiserver-key.pem apiserver.pem c7-k8s01-key.pem c7-k8s01.pem ca.pem worker-key.pem worker.pem
Slave nodes (c7-k8s02 shown; do the same on c7-k8s03):
[root@c7-k8s02 ~]# mkdir -p /etc/kubernetes/ssl/
[root@c7-k8s02 ~]# cd /etc/kubernetes/ssl/
[root@c7-k8s02 ssl]# scp root@c7-k8s01:/root/kube-key/ca.pem .
root@c7-k8s01's password:
ca.pem 100% 1147 1.1KB/s 00:00
[root@c7-k8s02 ssl]# scp root@c7-k8s01:/root/kube-key/c7-k8s02/*.pem . #be sure to pull from the folder that matches this node
root@c7-k8s01's password:
c7-k8s02-key.pem 100% 1679 1.6KB/s 00:00
c7-k8s02.pem 100% 1094 1.1KB/s 00:00
[root@c7-k8s02 ssl]# ls
c7-k8s02-key.pem c7-k8s02.pem ca.pem worker-key.pem worker.pem
13. Create the symbolic links:
[root@c7-k8s02 ssl]# ln -s c7-k8s02.pem worker.pem
[root@c7-k8s02 ssl]# ln -s c7-k8s02-key.pem worker-key.pem
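A quick local check that the copied node certificate chains to the CA and that the key actually belongs to it (the two md5 sums must match):
openssl verify -CAfile /etc/kubernetes/ssl/ca.pem /etc/kubernetes/ssl/worker.pem
openssl x509 -noout -modulus -in /etc/kubernetes/ssl/worker.pem | openssl md5
openssl rsa -noout -modulus -in /etc/kubernetes/ssl/worker-key.pem | openssl md5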
14. Create a worker-kubeconfig file in YAML format:
[root@c7-k8s02 ssl]# vi /etc/kubernetes/worker-kubeconfig.yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context
15. Modify the docker, kubelet and kube-proxy service settings on each node:
[root@c7-k8s02 ssl]# vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
[root@c7-k8s02 ssl]# vi /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=c7-k8s02"
KUBELET_ARGS="--cluster-dns=192.168.60.153 --cluster-domain=tw.company --tls-cert-file=/etc/kubernetes/ssl/worker.pem --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"
[root@c7-k8s02 ssl]# vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml"
Restart the related services:
[root@c7-k8s02 ssl]# systemctl restart docker kubelet kube-proxy
[root@c7-k8s02 ssl]# systemctl status docker kubelet kube-proxy -l
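Back on the master, once kubelet has restarted on a node it should register itself; listing the nodes is the simplest confirmation (a freshly restarted node may take a few seconds to show Ready):
kubectl get nodes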
16. Verify that each node's SSL certificates are configured correctly:
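A minimal node-side check, assuming the master's secure port 6443 is reachable by name, is to call the apiserver with the worker key pair as the client certificate:
curl --cacert /etc/kubernetes/ssl/ca.pem \
     --cert /etc/kubernetes/ssl/worker.pem \
     --key /etc/kubernetes/ssl/worker-key.pem \
     https://c7-k8s01.tw.company:6443/healthz
It should answer with a plain "ok", which means the node's certificate is accepted by the apiserver.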
To test that the cluster can actually run pods, deploy the following example (a frontend plus a redis master/slave back end); the YAML files are:
redis-frontend-controller.yaml
redis-frontend-service.yaml
redis-master-controller.yaml
redis-master-service.yaml
redis-slave-controller.yaml
redis-slave-service.yaml
kubectl create -f redis-xxxxxxxxx.yaml
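If all six files sit in the current folder, one way to create them in a single pass instead of one by one:
for f in redis-*.yaml; do kubectl create -f $f; done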
After they are created, point a browser at http://kube_host_ip:30001; you should see Hello World and an input box, which means Kubernetes is working properly.
To monitor the Kubernetes hosts' resource usage, you can install
kube-ui-rc.yaml
kube-ui-svc.yaml
After they are created, point a browser at
http://kube_master_host_ip:8080/api/v1/proxy/namespaces/kube-system/services/kube-ui/#/dashboard/
to see the system usage of every node.
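If the dashboard does not load, check whether the kube-ui pod in the kube-system namespace is actually running:
kubectl get pods --namespace=kube-system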
Pull an nginx image to try it out. First create two YAML configuration files:
vi nginx-rc.yaml
Contents:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  replicas: 1
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
vi nginx-svc.yaml
Contents:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30002   #NodePort ports are limited to the 30000-32767 range; a Service outside that range fails to create
  selector:
    name: nginx
Run:
kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml
Watch how nginx comes up:
[root@c7-k8s01 ~]# kubectl get rc nginx -o wide
NAME DESIRED CURRENT READY AGE CONTAINER(S) IMAGE(S) SELECTOR
nginx 3 3 3 30m nginx nginx name=nginx
[root@c7-k8s01 ~]# kubectl get service nginx -o wide
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
nginx 10.254.147.10 <nodes> 80/TCP 25m name=nginx
Increase or decrease the number of replicas managed by the RC:
kubectl scale rc nginx --replicas=3 #scale the nginx RC up to three replicas
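To confirm the scale-out, list the pods behind the same selector; with three replicas they should be spread across the nodes:
kubectl get pods -l name=nginx -o wide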
Try connecting:
http://192.168.60.153:30002
When nginx is running normally, you will see the welcome page.
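The same check from the command line works against any node IP, since a NodePort is opened on every node:
curl -I http://192.168.60.153:30002
An HTTP/1.1 200 OK header means nginx is serving its welcome page.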